More stories

  • Dramatically lowering costs of semiconductor electron sources

    Rice University engineers have discovered technology that could slash the cost of semiconductor electron sources, key components in devices ranging from night-vision goggles and low-light cameras to electron microscopes and particle accelerators.
    In an open-access Nature Communications paper, Rice researchers and collaborators at Los Alamos National Laboratory (LANL) describe the first process for making electron sources from halide perovskite thin films that efficiently convert light into free electrons.
    Manufacturers spend billions of dollars each year on photocathode electron sources made from semiconductors containing rare elements like gallium, selenium, cadmium and tellurium.
    “This should be orders of magnitude lower in cost than what exists today in the market,” said study co-corresponding author Aditya Mohite, a Rice materials scientist and chemical engineer. He said the halide perovskites have the potential to outperform existing semiconductor electron sources in several ways.
    “First, there’s the combination of quantum efficiency and lifetime,” Mohite said. “Even though this was a proof of concept, and the first demonstration of halide perovskites as electron sources, quantum efficiency was only about four times lower than that of commercially available gallium arsenide photocathodes. And we found halide perovskites had a longer lifetime than gallium arsenide.”
    Another advantage is that perovskite photocathodes are made by spin coating, a low-cost method that can easily be scaled up, said Mohite, an associate professor of chemical and biomolecular engineering and of materials science and nanoengineering.

    “We also found that degraded perovskite photocathodes can be easily regenerated compared to conventional materials that usually require high-temperature annealing,” he said.
    The researchers tested dozens of halide perovskite photocathodes, some with quantum efficiencies as high as 2.2%. They demonstrated their method by creating photocathodes with both inorganic and organic components, and showed they could tune electron emission over both the visible and ultraviolet spectrum.
    Quantum efficiency describes how effective a photocathode is at converting light into usable electrons.
    “If each incoming photon generates an electron and you collected every electron, you would have 100% quantum efficiency,” said study lead author Fangze Liu, a postdoctoral research associate at LANL. “The best semiconductor photocathodes today have quantum efficiencies around 10-20%, and they are all made of extremely expensive materials using complex fabrication processes. Metals are also sometimes used as electron sources, and the quantum efficiency of copper is very small, about 0.01%, but it’s still used, and it’s a practical technology.”
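    To make the quantum-efficiency figures above concrete, here is a minimal sketch of how QE is computed from a measured photocurrent and incident laser power. The wavelength, power and current values are invented for illustration and are not data from the Rice/LANL study.

```python
# Illustrative quantum-efficiency calculation (hypothetical numbers, not study data).
# QE = (emitted electrons per second) / (incident photons per second)
from scipy.constants import h, c, e  # Planck constant, speed of light, elementary charge

def quantum_efficiency(photocurrent_A, optical_power_W, wavelength_m):
    """Fraction of incident photons converted into collected electrons."""
    electrons_per_s = photocurrent_A / e                       # current = rate * e
    photons_per_s = optical_power_W / (h * c / wavelength_m)   # power = rate * photon energy
    return electrons_per_s / photons_per_s

# Example: 1 mW of 532 nm light producing 10 microamps of photocurrent
qe = quantum_efficiency(photocurrent_A=10e-6, optical_power_W=1e-3, wavelength_m=532e-9)
print(f"Quantum efficiency: {qe:.1%}")   # roughly 2.3% for these made-up numbers
```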
    The cost savings from halide perovskite photocathodes would come in two forms: the raw materials for making them are abundant and inexpensive, and the manufacturing process is simpler and less expensive than for traditional semiconductors.

    “There is a tremendous need for something that is low-cost and that can be scaled up,” Mohite said. “Using solution-processed materials, where you can literally paint a large area, is completely unheard of for making the kind of high-quality semiconductors needed for photocathodes.”
    The name ‘perovskite’ refers to both a specific mineral discovered in Russia in 1839 and any compound with the crystal structure of that mineral. Halide perovskites are the latter, and can be made by mixing lead, tin and other metals with bromide or iodide salts.
    Research into halide perovskite semiconductors took off worldwide after scientists in the United Kingdom used sheetlike crystals of the material to make high-efficiency solar cells in 2012. Other labs have since shown the materials can be used to make LEDs, photodetectors, photoelectrochemical cells for water-splitting and other devices.
    Mohite, an expert in perovskites who worked as a research scientist at LANL prior to joining Rice in 2018, said one reason the halide perovskite photocathode project succeeded is that his collaborators in LANL’s Applied Cathode Enhancement and Robustness Technologies research group are “one of the best teams in the world for exploring new materials and technologies for photocathodes.”
    Photocathodes operate according to Einstein’s photoelectric effect, releasing free electrons when they are struck by light of a particular frequency. Quantum efficiencies of photocathodes are typically low because even the slightest defects, like a single atom out of place in the crystal lattice, can create “potential wells” that trap free electrons.
    “If you have defects, all your electrons are going to get lost,” Mohite said. “It takes a lot of control. And it took a lot of effort to come up with a process to make a good perovskite material.”
    Mohite and Liu used spin-coating, a widely used technique where liquid is dropped onto a rapidly spinning disk and centrifugal force spreads the liquid across the disk’s surface. In Mohite and Liu’s experiments, spin-coating took place in an argon atmosphere to limit impurities. Once spun, the disks were heated and placed in high vacuum to convert the liquid into crystal with a clean surface.
    “It took a lot of iterations,” Mohite said. “We tried tuning the material composition and surface treatment in many ways to get the right combination for maximum efficiency. That was the biggest challenge.”
    He said the team is already working to improve the quantum efficiency of its photocathodes.
    “Their quantum efficiency is still lower than state-of-the-art semiconductors, and we proposed in our paper that this is due to the presence of high surface defects,” he said. “The next step is to fabricate high-quality perovskite crystals with lower surface defect densities.”

  • Solving complex physics problems at lightning speed

    A calculation so complex that it takes twenty years to complete on a powerful desktop computer can now be done in one hour on a regular laptop. Physicist Andreas Ekström at Chalmers University of Technology, together with international research colleagues, has designed a new method to calculate the properties of atomic nuclei incredibly quickly.
    The new approach is based on a concept called emulation, where an approximate calculation replaces a complete and more complex calculation. Although the researchers are taking a shortcut, the solution ends up almost exactly the same. It is reminiscent of algorithms from machine learning, but ultimately the researchers have designed a completely new method. It opens up even more possibilities in fundamental research in areas such as nuclear physics.
    “Now that we can emulate atomic nuclei using this method, we have a completely new tool to construct and analyse theoretical descriptions of the forces between protons and neutrons inside the atomic nucleus,” says research leader Andreas Ekström, Associate Professor at the Department of Physics at Chalmers.
    Fundamental to understanding our existence
    The subject may sound niche, but it is in fact fundamental to understanding our existence and the stability and origin of visible matter. Most of the atomic mass resides in the centre of the atom, in a dense region called the atomic nucleus. The constituent particles of the nucleus, the protons and neutrons, are held together by something called the strong force. Although this force is so central to our existence, no one knows exactly how it works. To increase our knowledge and unravel the fundamental properties of visible matter, researchers need to be able to model the properties of atomic nuclei with great accuracy.
    The basic research that Andreas Ekström and his colleagues are working on sheds new light on topics ranging from neutron stars and their properties, to the innermost structure and decay of nuclei. Basic research in nuclear physics also provides essential input to astrophysics, atomic physics, and particle physics.
    Opening doors to completely new possibilities
    “I am incredibly excited to be able to make calculations with such accuracy and efficiency. Compared with our previous methods, it feels like we are now computing at lightning speed. In our ongoing work here at Chalmers, we hope to improve the emulation method further, and perform advanced statistical analyses of our quantum mechanical models. With this emulation method it appears that we can achieve results that were previously considered impossible. This certainly opens doors to completely new possibilities,” says Andreas Ekström.
    The project is funded by the European Research Council within the framework of an ERC Starting Grant.
    More on the mathematical shortcut
    The new emulation method is based on something called eigenvector continuation (EVC). It allows for emulation of many quantum mechanical properties of atomic nuclei with incredible speed and accuracy. Instead of directly solving the time-consuming and complex many-body problem over and over again, researchers have created a mathematical shortcut, using a transformation into a special subspace. This makes it possible to utilise a few exact solutions in order to then obtain approximate solutions much faster.
    If the emulator works well, it generates solutions that are almost exactly — circa 99 per cent — similar to the solutions to the original problem. In many ways this follows the same principles used in machine learning, but it is not a neural network or a Gaussian process — a completely new method underpins it. The EVC method for emulation is not limited to atomic nuclei, and the researchers are currently looking further into different types of applications.
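    As a purely illustrative sketch of the subspace idea behind eigenvector continuation, the snippet below emulates the ground-state energy of a generic parameter-dependent matrix H(c) = H0 + c·H1. The random matrices, dimensions and training points are stand-ins chosen for the example; they are not the nuclear Hamiltonians or the code used by the Chalmers group.

```python
# Sketch of eigenvector continuation (EVC): emulate the ground-state energy of a
# parameter-dependent Hamiltonian H(c) = H0 + c*H1 using a few exact solutions.
# The matrices here are random stand-ins, not a real nuclear Hamiltonian.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 400                                        # full problem dimension
H0 = rng.standard_normal((n, n)); H0 = (H0 + H0.T) / 2
H1 = rng.standard_normal((n, n)); H1 = (H1 + H1.T) / 2
H = lambda c: H0 + c * H1

# 1) Expensive step: exact ground states at a handful of training parameters.
train_params = [0.0, 0.5, 1.0, 1.5]
basis = []
for c in train_params:
    vals, vecs = np.linalg.eigh(H(c))
    basis.append(vecs[:, 0])                   # ground-state eigenvector
B = np.column_stack(basis)                     # n x 4 subspace basis

# 2) Cheap step: for a new parameter, project H(c) into the small subspace and
#    solve a 4x4 generalized eigenvalue problem instead of the full 400x400 one.
c_new = 0.8
H_small = B.T @ H(c_new) @ B
overlap = B.T @ B                              # training vectors are not orthogonal
emulated = eigh(H_small, overlap, eigvals_only=True)[0]
exact = np.linalg.eigh(H(c_new))[0][0]
print(f"emulated ground-state energy: {emulated:.4f}   exact: {exact:.4f}")
```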

    Story Source:
    Materials provided by Chalmers University of Technology. Note: Content may be edited for style and length.

  • Breakthrough in mobile determination of QT prolongation

    Researchers from Mayo Clinic and AliveCor Inc. have been using artificial intelligence (AI) to develop a mobile device that can identify certain patients at risk of sudden cardiac death. This research has yielded a breakthrough in determining the health of the electrical recharging system in a patient’s heart. The researchers determined that a smartphone-enabled mobile EKG device can rapidly and accurately determine a patient’s QTc, thereby identifying patients at risk of sudden cardiac death from congenital long QT syndrome (LQTS) or drug-induced QT prolongation.
    The heart beats by a complex system of electrical signals triggering regular and necessary contractions. Clinicians evaluate the heart’s rate-corrected QT interval, or QTc, as a vital barometer of the health of the heart’s electrical recharging system. A potentially dangerous prolonged QTc (in this study, a value of 500 milliseconds or longer) can be caused by:
    More than 100 drugs approved by the Food and Drug Administration (FDA).
    Genetics, including congenital long QT syndrome.
    Many systemic diseases, including even SARS-CoV-2-mediated COVID-19.
    Such a prolonged QTc can predispose people to dangerously fast and chaotic heartbeats, and even sudden cardiac death. For over 100 years, QTc assessment and monitoring has relied heavily on the 12-lead electrocardiogram (EKG). But that could be about to change, according to this research.
    Under the direction of Michael Ackerman, M.D., Ph.D., a genetic cardiologist at Mayo Clinic, researchers trained and validated an AI-based deep neural network to detect QTc prolongation using AliveCor’s KardiaMobile 6L EKG device. The findings, which were published in Circulation, compared the ability of an AI-enabled mobile EKG to a traditional 12-lead EKG in detecting QT prolongation.
    “This collaborative effort with investigators from academia and industry has yielded what I call a ‘pivot’ discovery,” says Dr. Ackerman, who is director of Mayo Clinic’s Windland Smith Rice Comprehensive Sudden Cardiac Death Program. “Whereby, we will pivot from the old way that we have been obtaining the QTc to this new way. Since Einthoven’s first major EKG paper in 1903, 2021 will mark the new beginning for the QT interval.”
    The team used more than 1.6 million 12-lead EKGs from over a half-million patients to train and validate an AI-based deep neural network to recognize and accurately measure the QTc. Next, this newly developed AI-based QTc assessment, the “QT meter,” was tested prospectively on nearly 700 patients evaluated by Dr. Ackerman in Mayo Clinic’s Windland Smith Rice Genetic Heart Rhythm Clinic. Half of these patients had congenital long QT syndrome.

    The objective was to compare the QTc values from a 12-lead EKG to those from the prototype hand-held EKG device used with a smartphone. Both sets of EKGs were recorded at the same clinical visit, typically within five minutes of each other.
    The AI algorithm’s ability to recognize clinically meaningful QTc prolongation on a mobile EKG device was similar to the EKG assessments made by a trained QT expert and a commercial laboratory specializing in QTc measurements for drug studies. The mobile device effectively detected a QTc value of greater than or equal to 500 milliseconds, performing with:
    80% sensitivity, meaning that few cases of QTc prolongation were missed.
    94.4% specificity, meaning that it was highly accurate in identifying who did not have a prolonged QTc.
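    For readers unfamiliar with the two metrics, sensitivity and specificity fall out of a simple confusion-matrix calculation. The patient counts in the sketch below are invented to reproduce the quoted percentages and are not figures from the study.

```python
# How sensitivity and specificity are derived (invented counts, for illustration only).
true_positive  = 80    # QTc >= 500 ms, correctly flagged by the mobile device
false_negative = 20    # QTc >= 500 ms, missed by the device
true_negative  = 850   # QTc < 500 ms, correctly cleared
false_positive = 50    # QTc < 500 ms, incorrectly flagged

sensitivity = true_positive / (true_positive + false_negative)   # 0.80 -> few missed cases
specificity = true_negative / (true_negative + false_positive)   # ~0.944 -> few false alarms
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```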
    “The ability to equip mobile EKG devices with accurate AI-powered approaches capable of calculating accurately the QTc represents a potential paradigm shift regarding how and where the QT interval can be assessed,” says John Giudicessi, M.D., Ph.D., a Mayo Clinic cardiology fellow and first author of the study.
    “Currently, AliveCor’s KardiaMobile 6L EKG device is FDA-cleared for detection of atrial fibrillation, bradycardia, and tachycardia. Once FDA clearance is received for this AI-based QTc assessment, we will have a true QT meter that can enable this emerging vital sign to be obtained easily and accurately,” says Dr. Ackerman. “Akin to a glucose meter for diabetics, for example, this QT meter will provide an early warning system, enabling patients with congenital or acquired LQTS to be identified and potentially lifesaving adjustments to their medications and electrolytes to be made.”
    “This point-of-care application of artificial intelligence is massively scalable, since it is linked to a smartphone. It can save lives by telling a person that a specific medication may be harmful before he or she takes the first pill,” says Paul Friedman, M.D., chair of the Department of Cardiovascular Medicine at Mayo Clinic in Rochester. “This allows a potentially life threatening condition to be detected before symptoms are manifest.”
    “Regularly monitoring for LQTS using KardiaMobile 6L allows for accurate, real-time data collection outside the walls of a hospital,” says David Albert, M.D., founder and chief medical officer at AliveCor Inc. “Because LQTS can be intermittent and elusive, the ability to detect this rhythm abnormality without a 12-lead EKG — which requires the patient be in-hospital — can improve patient outcomes and save on hospital resources, while still providing the reliable and timely data physicians and their patients need.”
    This research was sponsored by the Mayo Clinic Windland Smith Rice Comprehensive Sudden Cardiac Death Program. Mayo Clinic; Zachi Attia, Ph.D.; Peter Noseworthy, M.D.; Dr. Ackerman; and Dr. Friedman have a financial interest with AliveCor, Inc. related to this research.

    Story Source:
    Materials provided by Mayo Clinic. Original written by Terri Malloy. Note: Content may be edited for style and length.

  • Photonics for artificial intelligence and neuromorphic computing

    Scientists have given a fascinating new insight into the next steps to develop fast, energy-efficient, future computing systems that use light instead of electrons to process and store information — incorporating hardware inspired directly by the functioning of the human brain.
    A team of scientists, including Professor C. David Wright from the University of Exeter, has explored the future potential for computer systems by using photonics in place of conventional electronics.
    The article is published today (January 29th 2021) in the journal Nature Photonics.
    The study focuses on potential solutions to one of the world’s most pressing computing problems — how to develop computing technologies that can process ever-growing volumes of data in a fast and energy-efficient way.
    Contemporary computers are based on the von Neumann architecture in which the fast Central Processing Unit (CPU) is physically separated from the much slower program and data memory.
    This means computing speed is limited and power is wasted by the need to continuously transfer data to and from the memory and processor over bandwidth-limited and energy-inefficient electrical interconnects — known as the von Neumann bottleneck.
    As a result, it has been estimated that more than 50% of the power of modern computing systems is wasted simply in this moving around of data.
    Professor C. David Wright, from the University of Exeter’s Department of Engineering and one of the co-authors of the study, explains: “Clearly, a new approach is needed — one that can fuse together the core information processing tasks of computing and memory, one that can incorporate directly in hardware the ability to learn, adapt and evolve, and one that does away with energy-sapping and speed-limiting electrical interconnects.”
    Photonic neuromorphic computing is one such approach. Here, signals are communicated and processed using light rather than electrons, giving access to much higher bandwidths (processor speeds) and vastly reducing energy losses.
    Moreover, the researchers try to make the computing hardware itself isomorphic with biological processing systems (brains) by developing devices that directly mimic the basic functions of brain neurons and synapses, then connecting these together in networks that can offer fast, parallelised, adaptive processing for artificial intelligence and machine learning applications.

    Story Source:
    Materials provided by University of Exeter. Note: Content may be edited for style and length.

  • Chumash Indians were using highly worked shell beads as currency 2,000 years ago

    As one of the most experienced archaeologists studying California’s Native Americans, Lynn Gamble knew the Chumash Indians had been using shell beads as money for at least 800 years.
    But an exhaustive review of some of the shell bead record led the UC Santa Barbara professor emerita of anthropology to an astonishing conclusion: The hunter-gatherers centered on the Southcentral Coast of Santa Barbara were using highly worked shells as currency as long as 2,000 years ago.
    “If the Chumash were using beads as money 2,000 years ago,” Gamble said, “this changes our thinking of hunter-gatherers and sociopolitical and economic complexity. This may be the first example of the use of money anywhere in the Americas at this time.”
    Although Gamble has been studying California’s indigenous people since the late 1970s, the inspiration for her research on shell bead money came from far afield: the University of Tübingen in Germany. At a symposium there some years ago, most of the presenters discussed coins and other non-shell forms of money. Some, she said, were surprised by the assumptions of California archaeologists about what constituted money.
    Intrigued, she reviewed the definitions and identifications of money in California and questioned some of the long-held beliefs. Her research led to “The origin and use of shell bead money in California” in the Journal of Anthropological Archaeology.
    Gamble argues that archaeologists should use four criteria in assessing whether beads were used for currency versus adornment: Shell beads used as currency should be more labor-intensive than those for decorative purposes; highly standardized beads are likely currency; bigger, eye-catching beads were more likely used as decoration; and currency beads are widely distributed.

    “I then compared the shell beads that had been accepted as a money bead for over 40 years by California archaeologists to another type that was widely distributed,” she said. “For example, tens of thousands were found with just one individual up in the San Francisco Bay Area. This bead type, known as a saucer bead, was produced south of Point Conception and probably on the northern [Santa Barbara] Channel Islands, according to multiple sources of data, at least most, if not all of them.
    “These earlier beads were just as standardized, if not more so, than those that came 1,000 years later,” Gamble continued. “They also were traded throughout California and beyond. Through sleuthing, measurements and comparison of standardizations among the different bead types, it became clear that these were probably money beads and occurred much earlier than we previously thought.”
    As Gamble notes, shell beads have been used for over 10,000 years in California, and there is extensive evidence for the production of some of these beads, especially those common in the last 3,000 to 4,000 years, on the northern Channel Islands. The evidence includes shell bead-making tools, such as drills, and massive amounts of shell bits — detritus — that littered the surface of archaeological sites on the islands.
    In addition, specialists have noted that the isotopic signature of the shell beads found in the San Francisco Bay Area indicates that the shells are from south of Point Conception.
    “We know that right around early European contact,” Gamble said, “the California Indians were trading for many types of goods, including perishable foods. The use of shell beads no doubt greatly facilitated this wide network of exchange.”
    Gamble’s research not only resets the origins of money in the Americas, it calls into question what constitutes “sophisticated” societies in prehistory. Because the Chumash were non-agriculturists — hunter-gatherers — it was long held that they wouldn’t need money, even though early Spanish colonizers marveled at extensive Chumash trading networks and commerce.
    Recent research on money in Europe during the Bronze Age suggests it was used there some 3,500 years ago. For Gamble, that and the Chumash example are significant because they challenge a persistent perspective among economists and some archaeologists that so-called “primitive” societies could not have had “commercial” economies.
    “Both the terms ‘complex’ and ‘primitive’ are highly charged, but it is difficult to address this subject without using those terms,” she said. “In the case of both the Chumash and the Bronze Age example, standardization is a key in terms of identifying money. My article on the origin of money in California is not only pushing the date for the use of money back 1,000 years in California, and possibly the Americas, it provides evidence that money was used by non-state level societies, not just those commonly identified as ‘civilizations.’ ”

  • How the brain is programmed for computer programming

    Countries around the world are seeing a surge in the number of computer science students. Enrolment in related university programs in the U.S. and Canada tripled between 2006 and 2016, and Europe too has seen rising numbers. At the same time, children are starting to code at ever younger ages because governments in many countries are pushing K-12 computer science education. Despite the increasing popularity of computer programming, little is known about how our brains adapt to this relatively new activity. A new study by researchers in Japan has examined the brain activity of thirty programmers of diverse levels of expertise, finding that seven regions of the frontal, parietal and temporal cortices in expert programmers’ brains are fine-tuned for programming. The finding suggests that higher programming skills are built upon fine-tuned brain activity across a network of multiple distributed brain regions.
    “Many studies have reported differences between expert and novice programmers in behavioural performance, knowledge structure and selective attention. What we don’t know is where in the brain these differences emerge,” says Takatomi Kubo, an associate professor at Nara Institute of Science and Technology, Japan, and one of the lead authors of the study.
    To answer this question, the researchers observed groups of novice, experienced, and expert programmers. The programmers were shown 72 different code snippets while being scanned with functional MRI (fMRI) and asked to place each snippet into one of four functional categories. As expected, programmers with higher skills were better at correctly categorizing the snippets. A subsequent searchlight analysis revealed that the amount of information in seven brain regions strengthened with the skill level of the programmer: the bilateral inferior frontal gyrus pars triangularis (IFG Tri), left inferior parietal lobule (IPL), left supramarginal gyrus (SMG), left middle and inferior temporal gyri (MTG/IT), and right middle frontal gyrus (MFG).
    “Identifying these characteristics in expert programmers’ brains offers a good starting point for understanding the cognitive mechanisms behind programming expertise. Our findings illuminate the potential set of cognitive functions constituting programming expertise,” Kubo says.
    More specifically, the left IFG Tri and MTG are known to be associated with natural language processing and, in particular, semantic knowledge retrieval in a goal-oriented way. The left IPL and SMG are associated with episodic memory retrieval. The right MFG and IFG Tri are functionally related to stimulus-driven attention control.
    “Programming is a relatively new activity in human history and the mechanism is largely unknown. Connecting the activity to other well-known human cognitive functions will improve our understanding of programming expertise. If we get more comprehensive theory about programming expertise, it will lead to better methods for learning and teaching computer programming,” Kubo says.

    Story Source:
    Materials provided by Nara Institute of Science and Technology. Note: Content may be edited for style and length.

  • 'Liquid' machine-learning system adapts to changing conditions

    MIT researchers have developed a type of neural network that learns on the job, not just during its training phase. These flexible algorithms, dubbed “liquid” networks, change their underlying equations to continuously adapt to new data inputs. The advance could aid decision making based on data streams that change over time, including those involved in medical diagnosis and autonomous driving.
    “This is a way forward for the future of robot control, natural language processing, video processing — any form of time series data processing,” says Ramin Hasani, the study’s lead author. “The potential is really significant.”
    The research will be presented at February’s AAAI Conference on Artificial Intelligence. In addition to Hasani, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT co-authors include Daniela Rus, CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, and PhD student Alexander Amini. Other co-authors include Mathias Lechner of the Institute of Science and Technology Austria and Radu Grosu of the Vienna University of Technology.
    Time series data are both ubiquitous and vital to our understanding of the world, according to Hasani. “The real world is all about sequences. Even our perception — you’re not perceiving images, you’re perceiving sequences of images,” he says. “So, time series data actually create our reality.”
    He points to video processing, financial data, and medical diagnostic applications as examples of time series that are central to society. The vicissitudes of these ever-changing data streams can be unpredictable. Yet analyzing these data in real time, and using them to anticipate future behavior, can boost the development of emerging technologies like self-driving cars. So Hasani built an algorithm fit for the task.
    Hasani designed a neural network that can adapt to the variability of real-world systems. Neural networks are algorithms that recognize patterns by analyzing a set of “training” examples. They’re often said to mimic the processing pathways of the brain — Hasani drew inspiration directly from the microscopic nematode, C. elegans. “It only has 302 neurons in its nervous system,” he says, “yet it can generate unexpectedly complex dynamics.”
    Hasani coded his neural network with careful attention to how C. elegans neurons activate and communicate with each other via electrical impulses. In the equations he used to structure his neural network, he allowed the parameters to change over time based on the results of a nested set of differential equations.
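    A toy sketch of that idea is shown below: a hidden state evolves under a differential equation whose effective time constant depends on the current input and state, so the dynamics keep adapting after training. This is a generic liquid-time-constant-style update written for illustration; it is not the authors’ released implementation, and the layer sizes, gains and input signal are arbitrary choices.

```python
# Toy "liquid" (liquid-time-constant-style) neuron layer. The hidden state x follows
# dx/dt = -x/tau + f(x, u) * (A - x), so the effective time constant 1/(1/tau + f)
# changes with the input u. Illustrative only; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LiquidLayer:
    def __init__(self, n_in, n_hidden, tau=1.0, A=1.0):
        self.W_in = rng.standard_normal((n_hidden, n_in)) * 0.5
        self.W_rec = rng.standard_normal((n_hidden, n_hidden)) * 0.1
        self.b = np.zeros(n_hidden)
        self.tau, self.A = tau, A
        self.x = np.zeros(n_hidden)

    def step(self, u, dt=0.05):
        # Input- and state-dependent gate; it scales both the drive toward A and the
        # extra decay of x, which is what makes the time constant "liquid".
        f = sigmoid(self.W_rec @ self.x + self.W_in @ u + self.b)
        dx = -self.x / self.tau + f * (self.A - self.x)
        self.x = self.x + dt * dx          # simple forward-Euler integration
        return self.x

# Drive the layer with a sine-wave input stream and read out the final hidden state.
layer = LiquidLayer(n_in=1, n_hidden=8)
for t in np.arange(0.0, 2.0, 0.05):
    state = layer.step(np.array([np.sin(2 * np.pi * t)]))
print("final hidden state:", np.round(state, 3))
```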

    This flexibility is key. Most neural networks’ behavior is fixed after the training phase, which means they’re bad at adjusting to changes in the incoming data stream. Hasani says the fluidity of his “liquid” network makes it more resilient to unexpected or noisy data, like if heavy rain obscures the view of a camera on a self-driving car. “So, it’s more robust,” he says.
    There’s another advantage of the network’s flexibility, he adds: “It’s more interpretable.”
    Hasani says his liquid network skirts the inscrutability common to other neural networks. “Just changing the representation of a neuron,” which Hasani did with the differential equations, “you can really explore some degrees of complexity you couldn’t explore otherwise.” Thanks to Hasani’s small number of highly expressive neurons, it’s easier to peer into the “black box” of the network’s decision making and diagnose why the network made a certain characterization.
    “The model itself is richer in terms of expressivity,” says Hasani. That could help engineers understand and improve the liquid network’s performance.
    Hasani’s network excelled in a battery of tests. It edged out other state-of-the-art time series algorithms by a few percentage points in accurately predicting future values in datasets ranging from atmospheric chemistry to traffic patterns. “In many applications, we see the performance is reliably high,” he says. Plus, the network’s small size meant it completed the tests without a steep computing cost. “Everyone talks about scaling up their network,” says Hasani. “We want to scale down, to have fewer but richer nodes.”
    Hasani plans to keep improving the system and ready it for industrial application. “We have a provably more expressive neural network that is inspired by nature. But this is just the beginning of the process,” he says. “The obvious question is how do you extend this? We think this kind of network could be a key element of future intelligence systems.”
    This research was funded, in part, by Boeing, the National Science Foundation, the Austrian Science Fund, and Electronic Components and Systems for European Leadership.

  • A metalens for virtual and augmented reality

    Despite all the advances in consumer technology over the past decades, one component has remained frustratingly stagnant: the optical lens. Unlike electronic devices, which have gotten smaller and more efficient over the years, the design and underlying physics of today’s optical lenses haven’t changed much in about 3,000 years.
    This challenge has caused a bottleneck in the development of next-generation optical systems such as wearable displays for virtual reality, which require compact, lightweight, and cost-effective components.
    At the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), a team of researchers led by Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, has been developing the next generation of lenses that promise to open that bottleneck by replacing bulky curved lenses with a simple, flat surface that uses nanostructures to focus light.
    In 2018, Capasso’s team developed achromatic, aberration-free metalenses that work across the entire visible spectrum of light. But these lenses were only tens of microns in diameter, too small for practical use in VR and augmented reality systems.
    Now, the researchers have developed a two-millimeter achromatic metalens that can focus RGB (red, green, blue) colors without aberrations, and have built a miniaturized display for virtual and augmented reality applications.
    The research is published in Science Advances.

    “This state-of-the-art lens opens a path to a new type of virtual reality platform and overcomes the bottleneck that has slowed the progress of new optical devices,” said Capasso, the senior author of the paper.
    “Using new physics and a new design principle, we have developed a flat lens to replace the bulky lenses of today’s optical devices,” said Zhaoyi Li, a postdoctoral fellow at SEAS and first author of the paper. “This is the largest RGB-achromatic metalens to date and is a proof of concept that these lenses can be scaled up to centimeter size, mass produced, and integrated in commercial platforms.”
    Like previous metalenses, this lens uses arrays of titanium dioxide nanofins to focus all wavelengths of light equally and eliminate chromatic aberration. By engineering the shape and pattern of these nanoarrays, the researchers could control the focal length for red, green and blue light. To incorporate the lens into a VR system, the team developed a near-eye display using a method called fiber scanning.
    The display, inspired by fiber-scanning-based endoscopic bioimaging techniques, uses an optical fiber threaded through a piezoelectric tube. When a voltage is applied to the tube, the fiber tip scans left and right and up and down to display patterns, forming a miniaturized display. The display has high resolution, high brightness, high dynamic range, and a wide color gamut.
    In a VR or AR platform, the metalens would sit directly in front of the eye, and the display would sit within the focal plane of the metalens. The patterns scanned by the display are focused onto the retina, where the virtual image forms, with the help of the metalens. To the human eye, the image appears as part of the landscape in the AR mode, some distance from our actual eyes.
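    To give a rough feel for what the nanofin pattern has to achieve, an ideal flat lens must imprint a hyperbolic phase profile that depends on wavelength, so an RGB-achromatic design has to deliver a different phase at every radius for red, green and blue simultaneously. The short sketch below only evaluates that textbook target profile; the focal length and design wavelengths are assumptions for illustration, not parameters from the paper.

```python
# Ideal (hyperbolic) phase profile phi(r, lambda) a flat lens must imprint so that light
# of wavelength lambda comes to a focus at focal length f. An RGB-achromatic metalens
# must realise all three profiles at once with a single nanofin pattern.
# Focal length and wavelengths below are illustrative assumptions, not from the paper.
import numpy as np

def metalens_phase(r, wavelength, focal_length):
    """Required phase delay (radians) at radius r for an aberration-free flat lens."""
    return (2 * np.pi / wavelength) * (focal_length - np.sqrt(r**2 + focal_length**2))

radii = np.linspace(0.0, 1e-3, 5)   # 0 to 1 mm radius (a 2 mm diameter lens)
f = 10e-3                           # assumed 10 mm focal length
for name, lam in [("red", 650e-9), ("green", 532e-9), ("blue", 470e-9)]:
    waves = metalens_phase(radii, lam, f) / (2 * np.pi)
    print(f"{name:5s} phase profile (in waves): {np.round(waves, 1)}")
```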

    “We have demonstrated how meta-optics platforms can help resolve the bottleneck of current VR technologies and potentially be used in our daily life,” said Li.
    Next, the team aims to scale up the lens even further, making it compatible with current large-scale fabrication techniques for mass production at a low cost.
    The Harvard Office of Technology Development has protected the intellectual property relating to this project and is exploring commercialization opportunities.
    The research was co-authored by Yao-Wei Huang, Joon-Suh Park, Wei Ting Chen, and Zhujun Shi from Harvard University, Peng Lin and Ji-Xin Cheng from Boston University, and Cheng-Wei Qiu from the National University of Singapore.
    The research was supported in part by the Defense Advanced Research Projects Agency under award no. HR00111810001, the National Science Foundation under award no. 1541959 and the SAMSUNG GRO research program under award no. A35924.