More stories

  •

    Breakthrough in quantum photonics promises a new era in optical circuits

    The modern world is powered by electrical circuitry on a “chip” — the semiconductor chip underpinning computers, cell phones, the internet, and other applications. In 2025, humans are expected to create 175 zettabytes (175 trillion gigabytes) of new data. How can we ensure the security of sensitive data at such a high volume? And how can we address grand challenges, from privacy and security to climate change, by leveraging this data, especially given the limited capability of current computers?
    A promising alternative lies in emerging quantum communication and computation technologies. Realizing them, however, will require the widespread development of powerful new quantum optical circuits: circuits capable of securely processing the massive amounts of information we generate every day. Researchers in USC’s Mork Family Department of Chemical Engineering and Materials Science have made a breakthrough that helps enable this technology.
    While a traditional electrical circuit is a pathway along which electrons from an electric charge flow, a quantum optical circuit uses light sources that generate individual light particles, or photons, on demand, one at a time, acting as information-carrying bits (quantum bits or qubits). These light sources are nano-sized semiconductor “quantum dots”: tiny manufactured collections of tens of thousands to a million atoms, packed within a volume whose linear size is less than a thousandth of the thickness of a typical human hair, buried in a matrix of another suitable semiconductor.
    Quantum dots have so far proven to be the most versatile on-demand single-photon generators. The optical circuit requires these single-photon sources to be arranged on a semiconductor chip in a regular pattern. Photons with nearly identical wavelengths must then be released from the sources in a guided direction, allowing them to interact with other photons and particles to transmit and process information.
    Until now, there has been a significant barrier to the development of such circuits. In current manufacturing techniques, for example, quantum dots have different sizes and shapes and assemble on the chip at random locations. Because the dots differ in size and shape, the photons they release do not have uniform wavelengths. This, together with the lack of positional order, makes them unsuitable for use in the development of optical circuits.
    In recently published work, researchers at USC have shown that single photons can indeed be emitted in a uniform way from quantum dots arranged in a precise pattern. It should be noted that the method of aligning quantum dots was first developed at USC by the lead PI, Professor Anupam Madhukar, and his team nearly thirty years ago, well before the current explosive research activity in quantum information and interest in on-chip single-photon sources. In this latest work, the USC team has used such methods to create single-quantum dots, with their remarkable single-photon emission characteristics. It is expected that the ability to precisely align uniformly-emitting quantum dots will enable the production of optical circuits, potentially leading to novel advancements in quantum computing and communications technologies.
    The work, published in APL Photonics, was led by Jiefei Zhang, currently a research assistant professor in the Mork Family Department of Chemical Engineering and Materials Science, with corresponding author Anupam Madhukar, Kenneth T. Norris Professor in Engineering and Professor of Chemical Engineering, Electrical Engineering, Materials Science, and Physics.
    “The breakthrough paves the way to the next steps required to move from lab demonstration of single photon physics to chip-scale fabrication of quantum photonic circuits,” Zhang said. “This has potential applications in quantum (secure) communication, imaging, sensing and quantum simulations and computation.”
    Madhukar said that it is essential that quantum dots be ordered in a precise way so that photons released from any two or more dots can be manipulated to connect with each other on the chip. This will form the basic building unit for quantum optical circuits.
    “If the source where the photons come from is randomly located, this can’t be made to happen,” Madhukar said.
    “The current technology that is allowing us to communicate online, for instance using a technological platform such as Zoom, is based on the silicon integrated electronic chip. If the transistors on that chip are not placed in exact designed locations, there would be no integrated electrical circuit,” Madhukar said. “It is the same requirement for photon sources such as quantum dots to create quantum optical circuits.”
    The research is supported by the Air Force Office of Scientific Research (AFOSR) and the U.S. Army Research Office (ARO).
    “This advance is an important example of how solving fundamental materials science challenges, like how to create quantum dots with precise position and composition, can have big downstream implications for technologies like quantum computing,” said Evan Runnerstrom, program manager, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “This shows how ARO’s targeted investments in basic research support the Army’s enduring modernization efforts in areas like networking.”
    To create the precise layout of quantum dots for the circuits, the team used a method called SESRE (substrate-encoded size-reducing epitaxy) developed in the Madhukar group in the early 1990s. In the current work, the team fabricated regular arrays of nanometer-sized mesas with a defined edge orientation, shape (sidewalls) and depth on a flat semiconductor substrate, composed of gallium arsenide (GaAs). Quantum dots are then created on top of the mesas by adding appropriate atoms using the following technique.
    First, incoming gallium (Ga) atoms, attracted by surface energy forces, gather on top of the nanoscale mesas, where they deposit GaAs. The incoming flux is then switched to indium (In) atoms to deposit indium arsenide (InAs), and back again to Ga atoms to form a GaAs cap, creating the individual quantum dots that release single photons. To be useful for optical circuits, the space between the pyramid-shaped nano-mesas must then be filled with material that flattens the surface; in the final chip, the quantum dots lie buried under this GaAs overlayer.
    “This work also sets a new world-record of ordered and scalable quantum dots in terms of the simultaneous purity of single-photon emission greater than 99.5%, and in terms of the uniformity of the wavelength of the emitted photons, which can be as narrow as 1.8 nm, which is a factor of 20 to 40 better than typical quantum dots,” Zhang said.
    Zhang said that with this uniformity, it becomes feasible to apply established methods such as local heating or electric fields to fine-tune the photon wavelengths of the quantum dots to exactly match each other, which is necessary for creating the required interconnections between different quantum dots for circuits.
    This means that for the first time researchers can create scalable quantum photonic chips using well-established semiconductor processing techniques. In addition, the team’s efforts are now focused on establishing how identical the photons emitted from the same and/or from different quantum dots are. The degree of indistinguishability is central to the quantum effects of interference and entanglement that underpin quantum information processing: communication, sensing, imaging, and computing.
    Zhang concluded: “We now have an approach and a material platform to provide scalable and ordered sources generating potentially indistinguishable single photons for quantum information applications. The approach is general and can be used for other suitable material combinations to create quantum dots emitting over a wide range of wavelengths preferred for different applications, for example fiber-based optical communication or the mid-infrared regime, suited for environmental monitoring and medical diagnostics.”
    Gernot S. Pomrenke, AFOSR Program Officer for Optoelectronics and Photonics, said that reliable arrays of on-demand single-photon sources on-chip were a major step forward.
    “This impressive growth and material science work stretches over three decades of dedicated effort before research activities in quantum information were in the mainstream,” Pomrenke said. “Initial AFOSR funding and resources from other DoD agencies have been critical in realizing the challenging work and vision by Madhukar, his students, and collaborators. There is a great likelihood that the work will revolutionize the capabilities of data centers, medical diagnostics, defense and related technologies.”

  •

    New way to power up nanomaterials for electronic applications

    UCLA materials scientists and colleagues have discovered that perovskites, a class of promising materials that could be used for low-cost, high-performance solar cells and LEDs, have a previously unutilized molecular component that can further tune the electronic property of perovskites.
    Named after Russian mineralogist Lev Perovski, perovskite materials have a crystal-lattice structure of inorganic molecules like that of ceramics, along with organic molecules that are interlaced throughout. Up to now, these organic molecules appeared to only serve a structural function and could not directly contribute to perovskites’ electronic performance.
    Led by UCLA, a new study shows that when the organic molecules are designed properly, they not only can maintain the crystal lattice structure, but also contribute to the materials’ electronic properties. This discovery opens up new possibilities to improve the design of materials that will lead to better solar cells and LEDs. The study detailing the research was recently published in Science.
    “This is like finding an old dog that can play new tricks,” said Yang Yang, the Carol and Lawrence E. Tannas Jr. Professor of Engineering at the UCLA Samueli School of Engineering, who is the principal investigator on the research. “In materials science, we look all the way down to the atomic structure of a material for efficient performance. Our postdocs and graduate students didn’t take anything for granted and dug deeper to find a new pathway.”
    In order to make a better-performing perovskite material, the researchers incorporated a specially designed organic molecule, a pyrene-containing organic ammonium. On its exterior, the positively charged ammonium molecule connected to molecules of pyrene — a quadruple ring of carbon atoms. This molecular design offered additional electronic tunability of perovskites.
    “The unique property of perovskites is that they have the advantage of high-performance inorganic semiconductors, as well as easy and low-cost processability of polymers,” said study co-lead author Rui Wang, a UCLA postdoctoral scholar in materials science and engineering. “This newly enhanced perovskite material now offers opportunities for improved design concepts with better efficiency.”
    To demonstrate perovskites’ added effectiveness, the team built a photovoltaic (PV) cell prototype with the materials, and then tested it under continuous light for 2,000 hours. The new cell continued to convert light to energy at 85% of its original efficiency. This contrasts with a PV cell made of the same materials, but without the added altered organic molecule, which retained only 60% of its original efficiency.

    Story Source:
    Materials provided by University of California – Los Angeles. Note: Content may be edited for style and length.

  •

    AI can make accurate assessment of whether a person will die from COVID-19, study finds

    Using patient data, artificial intelligence can make a 90 percent accurate assessment of whether a person will die from COVID-19 or not, according to new research at the University of Copenhagen. Body mass index (BMI), gender and high blood pressure are among the most heavily weighted factors. The research can be used to predict the number of hospitalized patients who will need a respirator and to determine who ought to be first in line for a vaccination.
    Artificial intelligence is able to predict who is most likely to die from the coronavirus. In doing so, it can also help decide who should be at the front of the line for the precious vaccines now being administered across Denmark.
    The result is from a newly published study by researchers at the University of Copenhagen’s Department of Computer Science. Since the COVID pandemic’s first wave, researchers have been working to develop computer models that can predict, based on disease history and health data, how badly people will be affected by COVID-19.
    Based on patient data from the Capital Region of Denmark and Region Zealand, the results of the study demonstrate that artificial intelligence can, with up to 90 percent certainty, determine whether a person who is not yet infected will die of COVID-19 if they are unfortunate enough to become infected. Once admitted to the hospital with COVID-19, the computer can predict with 80 percent accuracy whether the person will need a respirator.
    “We began working on the models to assist hospitals, as during the first wave, they feared that they did not have enough respirators for intensive care patients. Our new findings could also be used to carefully identify who needs a vaccine,” explains Professor Mads Nielsen of the University of Copenhagen’s Department of Computer Science.
    Older men with high blood pressure are at highest risk
    The researchers fed a computer program with health data from 3,944 Danish COVID-19 patients. This trained the computer to recognize patterns and correlations in both patients’ prior illnesses and in their bouts against COVID-19.
    “Our results demonstrate, unsurprisingly, that age and BMI are the most decisive parameters for how severely a person will be affected by COVID-19. But the likelihood of dying or ending up on a respirator is also heightened if you are male, have high blood pressure or a neurological disease,” explains Mads Nielsen.
    The diseases and health factors that, according to the study, have the most influence on whether a patient ends up on a respirator after being infected with COVID-19 are in order of priority: BMI, age, high blood pressure, being male, neurological diseases, COPD, asthma, diabetes and heart disease.
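    The kind of model described above can be sketched with standard tools. The example below is a hypothetical illustration only: it trains a logistic-regression classifier on synthetic patient records built from the features named in the article. The actual study's 3,944 Danish patient records and its exact model are not reproduced here.

```python
# Hypothetical sketch of a COVID-19 severity risk model. All records here
# are SYNTHETIC; the feature list follows the article, nothing else does.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 4000
X = np.column_stack([
    rng.normal(27, 5, n),    # BMI
    rng.normal(55, 18, n),   # age
    rng.integers(0, 2, n),   # high blood pressure (0/1)
    rng.integers(0, 2, n),   # male (0/1)
    rng.integers(0, 2, n),   # neurological disease (0/1)
])
# Synthetic ground truth: risk rises with BMI, age, and each binary factor.
logit = (0.08 * (X[:, 0] - 27) + 0.05 * (X[:, 1] - 55)
         + 0.8 * X[:, 2] + 0.5 * X[:, 3] + 0.6 * X[:, 4] - 2.0)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

    On real records, the same pipeline would report how well the learned weights separate severe from mild outcomes, which is the sense in which the study's "90 percent" figure is quoted.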
    “For those affected by one or more of these parameters, we have found that it may make sense to move them up in the vaccine queue, to avoid any risk of them becoming infected and eventually ending up on a respirator,” says Nielsen.
    Predicting respirator needs is a must
    Researchers are currently working with the Capital Region of Denmark to take advantage of this fresh batch of results in practice. They hope that artificial intelligence will soon be able to help the country’s hospitals by continuously predicting the need for respirators.
    “We are working towards a goal that we should be able to predict the need for respirators five days ahead by giving the computer access to health data on all COVID positives in the region,” says Mads Nielsen, adding:
    “The computer will never be able to replace a doctor’s assessment, but it can help doctors and hospitals see many COVID-19 infected patients at once and set ongoing priorities.”
    However, technical work is still pending to make health data from the region available to the computer and thereafter to calculate the risk to infected patients. The research was carried out in collaboration with Rigshospitalet and Bispebjerg and Frederiksberg Hospital.

  •

    The Ramanujan Machine: Researchers develop 'conjecture generator'

    Using AI and computer automation, Technion researchers have developed a “conjecture generator” that creates mathematical conjectures, which are considered to be the starting point for developing mathematical theorems. They have already used it to generate a number of previously unknown formulas. The study, which was published in the journal Nature, was carried out by undergraduates from different faculties under the tutelage of Assistant Professor Ido Kaminer of the Andrew and Erna Viterbi Faculty of Electrical Engineering at the Technion.
    The project deals with one of the most fundamental elements of mathematics — mathematical constants. A mathematical constant is a number with a fixed value that emerges naturally from different mathematical calculations and mathematical structures in different fields. Many mathematical constants are of great importance in mathematics, but also in disciplines that are external to mathematics, including biology, physics, and ecology. The golden ratio and Euler’s number are examples of such fundamental constants. Perhaps the most famous constant is pi, which was studied in ancient times in the context of the circumference of a circle. Today, pi appears in numerous formulas in all branches of science, with many math aficionados competing over who can recall more digits after the decimal point: 3.14159…
    The Technion researchers proposed and examined a new idea: The use of computer algorithms to automatically generate mathematical conjectures that appear in the form of formulas for mathematical constants.
    A conjecture is a mathematical conclusion or proposition that has not been proved; once the conjecture is proved, it becomes a theorem. Discovery of a mathematical conjecture on fundamental constants is relatively rare, and its source often lies in mathematical genius and exceptional human intuition. Newton, Riemann, Goldbach, Gauss, Euler, and Ramanujan are examples of such genius, and the new approach presented in the paper is named after Srinivasa Ramanujan.
    Ramanujan, an Indian mathematician born in 1887, grew up in a poor family, yet managed to arrive in Cambridge at the age of 26 at the initiative of British mathematicians Godfrey Hardy and John Littlewood. Within a few years he fell ill and returned to India, where he died at the age of 32. During his brief life he accomplished great achievements in the world of mathematics. One of Ramanujan’s rare capabilities was the intuitive formulation of unproven mathematical formulas. The Technion research team therefore decided to name their algorithm “the Ramanujan Machine,” as it generates conjectures without proving them, by “imitating” intuition using AI and considerable computer automation.
    According to Prof. Kaminer, “Our results are impressive because the computer doesn’t care if proving the formula is easy or difficult, and doesn’t base the new results on any prior mathematical knowledge, but only on the numbers in mathematical constants. To a large degree, our algorithms work in the same way as Ramanujan himself, who presented results without proof. It’s important to point out that the algorithm itself is incapable of proving the conjectures it found — at this point, the task is left to be resolved by human mathematicians.”
    The conjectures generated by the Technion’s Ramanujan Machine have delivered new formulas for well-known mathematical constants such as pi, Euler’s number (e), Apéry’s constant (which is related to the Riemann zeta function), and the Catalan constant. Surprisingly, the algorithms developed by the Technion researchers succeeded not only in creating known formulas for these famous constants, but in discovering several conjectures that were heretofore unknown. The researchers estimate this algorithm will be able to significantly expedite the generation of mathematical conjectures on fundamental constants and help to identify new relationships between these constants.
    As mentioned, until now these conjectures were based on rare genius. This is why in hundreds of years of research, only a few dozen formulas were found. It took the Technion’s Ramanujan Machine just a few hours to discover all the formulas for pi discovered by Gauss, the “Prince of Mathematics,” during a lifetime of work, along with dozens of new formulas that were unknown to Gauss.
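    The core mechanics can be illustrated with a toy search over polynomial continued fractions. This is a simplified sketch, not the Technion team's actual algorithm; the `cf_value` helper and the search ranges are invented for illustration.

```python
# Toy "conjecture generator": enumerate continued fractions whose partial
# numerators/denominators are small polynomials in n, and keep any whose
# value numerically matches a known constant (here e).
import math

def cf_value(a, b, depth=120):
    """Evaluate a(0) + b(1)/(a(1) + b(2)/(a(2) + ...)), truncated at `depth`."""
    x = float(a(depth))
    for n in range(depth - 1, 0, -1):
        if x == 0:
            return float("nan")
        x = a(n) + b(n + 1) / x
    if x == 0:
        return float("nan")
    return a(0) + b(1) / x

# Sanity check on a classic: a_n = b_n = 1 gives the golden ratio.
phi = cf_value(lambda n: 1, lambda n: 1)
assert abs(phi - (1 + math.sqrt(5)) / 2) < 1e-9

# Brute-force search with a_n = p*n + q and b_n = r*n + s, matched against e.
found = []
for p in range(0, 3):
    for q in range(0, 5):
        for r in range(-2, 3):
            for s in range(-2, 3):
                v = cf_value(lambda n, p=p, q=q: p * n + q,
                             lambda n, r=r, s=s: r * n + s)
                if abs(v - math.e) < 1e-9:
                    found.append((p, q, r, s))
print(found)  # includes (1, 3, -1, 0): e = 3 - 1/(4 - 2/(5 - 3/(6 - ...)))
```

    The real Ramanujan Machine replaces this brute force with far more efficient meet-in-the-middle and gradient-descent searches, but the match-then-conjecture logic is the same.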
    According to the researchers, “Similar ideas can in the future lead to the development of mathematical conjectures in all areas of mathematics, and in this way provide a meaningful tool for mathematical research.”
    The research team has launched a website, RamanujanMachine.com, which is intended to inspire the public to be more involved in the advancement of mathematical research by providing algorithmic tools that will be available to mathematicians and the public at large. Even before the article was published, hundreds of students, experts, and amateur mathematicians had signed up to the website.
    The research study started out as an undergraduate project in the Rothschild Scholars Technion Program for Excellence with the participation of Gal Raayoni and George Pisha, and continued as part of the research projects conducted in the Andrew and Erna Viterbi Faculty of Electrical Engineering with the participation of Shahar Gottlieb, Yoav Harris, and Doron Haviv. This is also where the most significant breakthrough was made — by an algorithm developed by Shahar Gottlieb — which led to the article’s publication in Nature.
    Prof. Kaminer adds that the most interesting mathematical discovery made by the Ramanujan Machine’s algorithms to date relates to a new algebraic structure concealed within the Catalan constant. The structure was discovered by high school student Yahel Manor, who participated in the project as part of the Alpha Program for science-oriented youth.
    Prof. Kaminer added that, “Industry colleagues Uri Mendlovic and Yaron Hadad also participated in the study, and contributed greatly to the mathematical and algorithmic concepts that form the foundation for the Ramanujan Machine. It is important to emphasize that the entire project was executed on a voluntary basis, received no funding, and participants joined the team out of pure scientific curiosity.”
    Prof. Ido Kaminer is the head of the Robert and Ruth Magid Electron Beam Quantum Dynamics Laboratory. He is a faculty member in the Andrew and Erna Viterbi Faculty of Electrical Engineering and the Solid State Institute. Kaminer is affiliated with the Helen Diller Quantum Center and the Russell Berrie Nanotechnology Institute.

  •

    Artificial intelligence yields new ways to combat the coronavirus

    USC researchers have developed a new method to counter emergent mutations of the coronavirus and hasten vaccine development to stop the pathogen responsible for killing thousands of people and ruining the economy.
    Using artificial intelligence (AI), the research team at the USC Viterbi School of Engineering developed a method to speed the analysis of vaccines and zero in on the best potential preventive medical therapy.
    The method is easily adaptable to analyze potential mutations of the virus, ensuring the best possible vaccines are quickly identified — solutions that give humans a big advantage over the evolving contagion. Their machine-learning model can accomplish vaccine design cycles that once took months or years in a matter of seconds to minutes, the study says.
    “This AI framework, applied to the specifics of this virus, can provide vaccine candidates within seconds and move them to clinical trials quickly to achieve preventive medical therapies without compromising safety,” said Paul Bogdan, associate professor of electrical and computer engineering at USC Viterbi and corresponding author of the study. “Moreover, this can be adapted to help us stay ahead of the coronavirus as it mutates around the world.”
    The findings appear today in Nature Research’s Scientific Reports.
    When applied to SARS-CoV-2 — the virus that causes COVID-19 — the computer model quickly eliminated 95% of the candidate compounds that could possibly have treated the pathogen and pinpointed the best options, the study says.
    The AI-assisted method predicted 26 potential vaccines that would work against the coronavirus. From those, the scientists identified the best 11 from which to construct a multi-epitope vaccine, which can attack the spike proteins that the coronavirus uses to bind and penetrate a host cell. Vaccines target the region — or epitope — of the contagion to disrupt the spike protein, neutralizing the ability of the virus to replicate.
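    The selection step described above (ranking candidate epitopes and keeping a small, non-redundant subset) can be sketched as follows. The peptide sequences, scores, and the `too_similar` redundancy rule are all invented for illustration; the USC framework's actual scoring models are not reproduced here.

```python
# Hypothetical candidate epitopes with made-up immunogenicity scores.
candidates = {
    "YLQPRTFLL": 0.94, "YLQPRTFLV": 0.92, "KIADYNYKL": 0.91,
    "LLFNKVTLA": 0.89, "GVYFASTEK": 0.83, "NLNESLIDL": 0.78,
}

def too_similar(a, b, max_mismatches=1):
    """Treat same-length peptides differing in at most one position as redundant."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) <= max_mismatches

def select_epitopes(scored, k):
    """Greedily keep the top-k scoring epitopes, skipping near-duplicates."""
    chosen = []
    for seq, _score in sorted(scored.items(), key=lambda kv: -kv[1]):
        if not any(too_similar(seq, c) for c in chosen):
            chosen.append(seq)
        if len(chosen) == k:
            break
    return chosen

# YLQPRTFLV is dropped: it differs from the top candidate in one position.
print(select_epitopes(candidates, 3))  # ['YLQPRTFLL', 'KIADYNYKL', 'LLFNKVTLA']
```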
    Moreover, the engineers can construct a new multi-epitope vaccine for a new virus in less than a minute and validate its quality within an hour. By contrast, current processes to control the virus require growing the pathogen in the lab, deactivating it, and injecting the deactivated virus. The process is time-consuming, taking more than a year; meanwhile, the disease spreads.
    USC method could help counter COVID-19 mutations
    The method is especially useful during this stage of the pandemic as the coronavirus begins to mutate in populations around the world. Some scientists are concerned that the mutations may minimize the effectiveness of vaccines by Pfizer and Moderna, which are now being distributed. Recent variants of the virus that have emerged in the United Kingdom, South Africa and Brazil seem to spread more easily, which scientists say will rapidly lead to many more cases, deaths and hospitalizations.
    But Bogdan said that if SARS-CoV-2 becomes uncontrollable by current vaccines, or if new vaccines are needed to deal with other emerging viruses, then USC’s AI-assisted method can be used to design other preventive mechanisms quickly.
    For example, the study explains that the USC scientists used only one B-cell epitope and one T-cell epitope; applying a bigger dataset and more possible combinations could yield a more comprehensive and quicker vaccine design tool. The study estimates the method can perform accurate predictions with over 700,000 different proteins in the dataset.
    “The proposed vaccine design framework can tackle the three most frequently observed mutations and be extended to deal with other potentially unknown mutations,” Bogdan said.
    The raw data for the research comes from a giant bioinformatics database called the Immune Epitope Database (IEDB), in which scientists around the world have been compiling data about the coronavirus, among other diseases. IEDB contains over 600,000 known epitopes from some 3,600 different species; the researchers also drew on the Virus Pathogen Resource, a complementary repository of information about pathogenic viruses. The genome and spike protein sequence of SARS-CoV-2 comes from the National Center for Biotechnology Information.
    COVID-19 has led to 87 million cases and more than 1.88 million deaths worldwide, including more than 400,000 fatalities in the United States. It has devastated the social, financial and political fabric of many countries.
    The study authors are Bogdan, Zikun Yang and Shahin Nazarian of the Ming Hsieh Department of Electrical and Computer Engineering at USC Viterbi.
    Support for the study comes from the National Science Foundation (NSF) under the Career Award (CPS/CNS-1453860) and NSF grants (CCF-1837131, MCB-1936775 and CNS-1932620); a U.S. Army Research Office grant (W911NF-17-1-0076); a Defense Advanced Research Projects Agency (DARPA) Young Faculty Award and Director Award grant (N66001-17-1-4044), and a Northrop Grumman grant.

  •

    Engineers develop programming technology to transform 2D materials into 3D shapes

    University of Texas at Arlington researchers have developed a technique that programs 2D materials to transform into complex 3D shapes.
    The goal of the work is to create synthetic materials that can mimic how living organisms expand and contract soft tissues and thus achieve complex 3D movements and functions. Programming thin sheets, or 2D materials, to morph into 3D shapes can enable new technologies for soft robotics, deployable systems, and biomimetic manufacturing, which produces synthetic products that mimic biological processes.
    Kyungsuk Yum, an associate professor in the Materials Science and Engineering Department, and his team have developed the 2D material programming technique for 3D shaping. It allows the team to print 2D materials encoded with spatially controlled in-plane growth or contraction that can transform to programmed 3D structures.
    Their research, supported by a National Science Foundation Early Career Development Award that Yum received in 2019, was published in January in Nature Communications.
    “There are a variety of 3D-shaped 2D materials in biological systems, and they play diverse functions,” Yum said. “Biological organisms often achieve complex 3D morphologies and motions of soft slender tissues by spatially controlling their expansion and contraction. Such biological processes have inspired us to develop a method that programs 2D materials with spatially controlled in-plane growth to produce 3D shapes and motions.”
    With this inspiration, the researchers developed an approach that can uniquely create 3D structures with doubly curved morphologies and motions, commonly seen in living organisms but difficult to replicate with human-made materials.
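    The geometric principle at work, prescribing where a sheet grows or shrinks in order to dictate its 3D shape, can be illustrated with a standard differential-geometry example. This is textbook metric programming, not the authors' algorithm: to turn a flat disc into a spherical cap while keeping radial distances fixed, each printed ring must contract circumferentially by a radius-dependent factor.

```python
# Circumferential growth factor mapping a flat disc onto a spherical cap of
# radius R. A factor below 1 means that ring of material must contract.
import numpy as np

def circumferential_growth(s, R):
    """Growth factor for a ring at geodesic radius s on the target sphere."""
    s = np.asarray(s, dtype=float)
    # On the sphere, a ring at geodesic radius s has circumference
    # 2*pi*R*sin(s/R); flat, it would be 2*pi*s. The ratio is the factor.
    return np.where(s == 0, 1.0, R * np.sin(s / R) / np.maximum(s, 1e-300))

R = 1.0                            # target sphere radius (arbitrary units)
s = np.linspace(0, np.pi / 2, 5)   # from the pole out to the equator
g = circumferential_growth(s, R)
print(np.round(g, 3))  # factor 1 at the center, 2/pi (about 0.637) at the equator
```

    Encoding such spatially varying contraction into a printed sheet is, in essence, what lets a flat material buckle into a doubly curved target shape.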
    They were able to form 3D structures shaped like automobiles, stingrays, and human faces. To physically realize the concept of 2D material programming, they used a digital light 4D printing method developed by Yum and shared in Nature Communications in 2018.
    “Our 2D-printing process can simultaneously print multiple 2D materials encoded with individually customized designs and transform them on demand and in parallel to programmed 3D structures,” said Amirali Nojoomi, Yum’s former graduate student and first author of the paper. “From a technological point of view, our approach is scalable, customizable, and deployable, and it can potentially complement existing 3D-printing methods.”
    The researchers also introduced the concept of cone flattening, where they program 2D materials using a cone surface to increase the accessible space of 3D shapes. To solve a shape selection problem, they devised shape-guiding modules in 2D material programming that steer the direction of shape morphing toward targeted 3D shapes. Their flexible 2D-printing process can also enable multimaterial 3D structures.
    “Dr. Yum’s innovative research has many potential applications that could change the way we look at soft engineering systems,” said Stathis Meletis, chair of the Materials Science and Engineering Department. “His pioneering work is truly groundbreaking.”

    Story Source:
    Materials provided by University of Texas at Arlington. Original written by Jeremy Agor. Note: Content may be edited for style and length.

  •

    Pushed to the limit: A CMOS-based transceiver for beyond 5G applications at 300 GHz

    Scientists at Tokyo Institute of Technology (Tokyo Tech) and NTT Corporation (NTT) develop a novel CMOS-based transceiver for wireless communications at the 300 GHz band, enabling future beyond-5G applications. Their design addresses the challenges of operating CMOS technology at its practical limit and represents the first wideband CMOS phased-array system to operate at such elevated frequencies.
    Communication at higher frequencies is a perpetually sought-after goal in electronics, both for the greater data rates it would enable and for the underutilized portions of the electromagnetic spectrum it would open up. Many applications beyond 5G, as well as the IEEE 802.15.3d standard for wireless communications, call for transmitters and receivers capable of operating close to or above 300 GHz.
    Unfortunately, our trusty CMOS technology is not entirely suitable for such elevated frequencies. Near 300 GHz, amplification becomes considerably more difficult. Although a few CMOS-based transceivers for 300 GHz have been proposed, they either lack sufficient output power, operate only in direct line-of-sight conditions, or require a large circuit area to implement.
    To address these issues, a team of scientists from Tokyo Tech, in collaboration with NTT, proposed an innovative design for a 300 GHz CMOS-based transceiver. Their work will be presented in the Digests of Technical Papers at the 2021 IEEE ISSCC (International Solid-State Circuits Conference), a conference where the latest advances in solid-state and integrated circuits are presented.
    One of the key features of the proposed design is that it is bidirectional: a large portion of the circuit, including the mixer, antennas, and local oscillator, is shared between the receiver and the transmitter. This means the overall circuit complexity and the total circuit area required are much lower than in unidirectional implementations.
    Another important aspect is the use of four antennas in a phased array configuration. Existing solutions for 300 GHz CMOS transmitters use a single radiating element, which limits the antenna gain and the system’s output power. An additional advantage is the beamforming capability of phased arrays, which allows the device to adjust the relative phases of the antenna signals to create a combined radiation pattern with custom directionality. The antennas used are stacked “Vivaldi antennas,” which can be etched directly onto PCBs, making them easy to fabricate.
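    As a rough illustration of how phased-array beamforming works, the sketch below computes per-element steering phases and the resulting array factor for a four-element linear array at 300 GHz. The half-wavelength spacing and ideal uniform array are assumptions for illustration only, not the actual antenna geometry of the transceiver:

```python
import cmath
import math

C = 3e8           # speed of light, m/s
FREQ = 300e9      # 300 GHz carrier
LAM = C / FREQ    # wavelength: 1 mm
N = 4             # four antenna elements, as in the transceiver
D = LAM / 2       # element spacing (half-wavelength: an assumption)

def steering_phases(theta_deg):
    """Per-element phase shifts (radians) that point the beam at theta_deg."""
    theta = math.radians(theta_deg)
    return [-2 * math.pi * D * n * math.sin(theta) / LAM for n in range(N)]

def array_factor(theta_deg, phases):
    """Combined field magnitude in direction theta_deg for the given element phases."""
    theta = math.radians(theta_deg)
    total = sum(
        cmath.exp(1j * (2 * math.pi * D * n * math.sin(theta) / LAM + phases[n]))
        for n in range(N)
    )
    return abs(total)

phases = steering_phases(30)              # steer the beam toward 30 degrees
print(round(array_factor(30, phases), 3))  # in-beam: all four elements add -> 4.0
print(round(array_factor(0, phases), 3))   # off-beam: the elements cancel -> 0.0
```

    Adjusting the relative phases moves the direction in which the four contributions add coherently, which is exactly the "custom directionality" the article describes.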
    The proposed transceiver uses a subharmonic mixer, which is compatible with bidirectional operation and requires a local oscillator with a comparatively lower frequency. However, this type of mixing results in low output power, which led the team to resort to an old yet functional technique to boost it. Professor Kenichi Okada from Tokyo Tech, who led the study, explains: “Outphasing is a method generally used to improve the efficiency of power amplifiers by enabling their operation at output powers close to the point where they no longer behave linearly — that is, without distortion. In our work, we used this approach to increase the transmitted output power by operating the mixers at their saturated output power.” Another notable feature of the new transceiver is its excellent cancellation of local oscillator feedthrough (a “leakage” from the local oscillator through the mixer and onto the output) and image frequency (a common type of interference for the method of reception used).
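    The outphasing idea itself can be illustrated numerically: an amplitude- and phase-modulated signal is split into two constant-envelope components that, when summed, reconstruct the original. The sketch below shows only the decomposition math; the function names, the normalization, and the phasor representation are illustrative assumptions, not the chip's implementation:

```python
import math

A_MAX = 1.0  # total amplitude budget when both branches are saturated

def outphase(amplitude, phase):
    """Split one amplitude-modulated phasor into two constant-envelope phasors.

    Both branches run at the fixed amplitude A_MAX / 2 (mixer saturation);
    the outphasing angle theta encodes the original amplitude instead.
    """
    theta = math.acos(amplitude / A_MAX)
    return (A_MAX / 2, phase + theta), (A_MAX / 2, phase - theta)

def combine(b1, b2):
    """Sum the two branches back into a single (amplitude, phase) pair."""
    re = b1[0] * math.cos(b1[1]) + b2[0] * math.cos(b2[1])
    im = b1[0] * math.sin(b1[1]) + b2[0] * math.sin(b2[1])
    return math.hypot(re, im), math.atan2(im, re)

b1, b2 = outphase(0.6, 0.25)        # a sample point below full amplitude
amp, ph = combine(b1, b2)
print(round(amp, 6), round(ph, 6))  # recovers the 0.6 amplitude and 0.25 phase
```

    Because each branch has a constant envelope, each mixer can sit at its saturated output power, which is the efficiency trick Okada describes.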
    The entire transceiver was implemented in an area of just 4.17 mm². It achieved maximum rates of 26 Gbaud for transmission and 18 Gbaud for reception, outclassing most state-of-the-art solutions. Excited about the results, Okada remarks: “Our work demonstrates the first implementation of a wideband CMOS phased-array system that operates at frequencies higher than 200 GHz.” Let us hope this study helps us squeeze more juice out of CMOS technology for upcoming applications in wireless communications!

    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • in

    'Audeo' teaches artificial intelligence to play the piano

    Anyone who’s been to a concert knows that something magical happens between the performers and their instruments. It transforms music from being just “notes on a page” to a satisfying experience.
    A University of Washington team wondered if artificial intelligence could recreate that delight using only visual cues — a silent, top-down video of someone playing the piano. The researchers used machine learning to create a system, called Audeo, that creates audio from silent piano performances. When the group tested the music Audeo created with music-recognition apps, such as SoundHound, the apps correctly identified the piece Audeo played about 86% of the time. For comparison, these apps identified the piece in the audio tracks from the source videos 93% of the time.
    The researchers presented Audeo Dec. 8 at the NeurIPS 2020 conference.
    “To create music that sounds like it could be played in a musical performance was previously believed to be impossible,” said senior author Eli Shlizerman, an assistant professor in both the applied mathematics and the electrical and computer engineering departments. “An algorithm needs to figure out the cues, or ‘features,’ in the video frames that are related to generating music, and it needs to ‘imagine’ the sound that’s happening in between the video frames. It requires a system that is both precise and imaginative. The fact that we achieved music that sounded pretty good was a surprise.”
    Audeo uses a series of steps to decode what’s happening in the video and then translate it into music. First, it has to detect which keys are pressed in each video frame to create a diagram over time. Then it needs to translate that diagram into something that a music synthesizer would actually recognize as a sound a piano would make. This second step cleans up the data and adds in more information, such as how strongly each key is pressed and for how long.
    “If we attempt to synthesize music from the first step alone, we would find the quality of the music to be unsatisfactory,” Shlizerman said. “The second step is like how a teacher goes over a student composer’s music and helps enhance it.”
    The researchers trained and tested the system using YouTube videos of the pianist Paul Barton. The training consisted of about 172,000 video frames of Barton playing music from well-known classical composers, such as Bach and Mozart. Then they tested Audeo with almost 19,000 frames of Barton playing different music from these composers and others, such as Scott Joplin.
    Once Audeo has generated a transcript of the music, it’s time to give it to a synthesizer that can translate it into sound. Every synthesizer will make the music sound a little different — this is similar to changing the “instrument” setting on an electric keyboard. For this study, the researchers used two different synthesizers.
    “Fluidsynth makes synthesizer piano sounds that we are familiar with. These are somewhat mechanical-sounding but pretty accurate,” Shlizerman said. “We also used PerfNet, a new AI synthesizer that generates richer and more expressive music. But it also generates more noise.”
    Audeo was trained and tested only on Paul Barton’s piano videos. Future research is needed to see how well it could transcribe music for any musician or piano, Shlizerman said.
    “The goal of this study was to see if artificial intelligence could generate music that was played by a pianist in a video recording — though we were not aiming to replicate Paul Barton because he is such a virtuoso,” Shlizerman said. “We hope that our study enables novel ways to interact with music. For example, one future application is that Audeo can be extended to a virtual piano with a camera recording just a person’s hands. Also, by placing a camera on top of a real piano, Audeo could potentially assist in new ways of teaching students how to play.”
    Kun Su and Xiulong Liu, both doctoral students in electrical and computer engineering, are co-authors on this paper. This research was funded by the Washington Research Foundation Innovation Fund as well as the applied mathematics and electrical and computer engineering departments.

    Story Source:
    Materials provided by University of Washington. Original written by Sarah McQuate. Note: Content may be edited for style and length.