More stories

    Blueprints for a cheaper single-molecule microscope

    A team of scientists and students from the University of Sheffield has designed and built a specialist microscope, and shared the build instructions to help make this equipment available to many labs across the world.
    The microscope, called the smfBox, is capable of single-molecule measurements, allowing scientists to look at one molecule at a time rather than generating an average result from bulk samples, and it performs just as well as commercially available instruments.
    This single-molecule method is currently only available at a few specialist labs throughout the world due to the cost of commercially available microscopes.
    Today (6 November 2020), the team has published a paper in the journal Nature Communications which provides all the build instructions and software needed to run the microscope, to help make this single-molecule method accessible to labs across the world.
    The interdisciplinary team spanning the University of Sheffield’s Departments of Chemistry and Physics, and the Central Laser Facility at the Rutherford Appleton Laboratory, spent a relatively modest £40,000 to build a piece of kit that would normally cost around £400,000 to buy.
    The microscope was built with simplicity in mind so that researchers interested in biological problems can use it with little training. Its lasers are shielded in such a way that it can be used in normal lighting conditions and is no more dangerous than a CD player.
    Dr Tim Craggs, the lead academic on the project from the University of Sheffield, said: “We wanted to democratise single-molecule measurements to make this method available for many labs, not just a few labs throughout the world. This work takes what was a very expensive, specialist piece of kit, and gives every lab the blueprint and software to build it for themselves, at a fraction of the cost.
    “Many medical diagnostics are moving towards increased sensitivity, and there is nothing more sensitive than detecting single molecules. In fact, many new COVID tests currently under development work at this level. This instrument is a good starting point for further development towards new medical diagnostics.”
    The original smfBox was built by a team of academics and undergraduate students at the University of Sheffield.
    Ben Ambrose, the PhD lead on the project, said: “This project was an excellent opportunity to work with researchers at all levels, from undergraduates to scientists in national facilities. Between biophysicists and engineers, we have created a new and accessible platform to do some cutting edge science without breaking the bank. We are already starting to do some great work with this microscope ourselves, but I am excited to see what it will do in the hands of other labs who have already begun to build their own.”
    The Craggs Lab at the University of Sheffield has already used the smfBox in its research to investigate fundamental biological processes, such as DNA damage detection, where improved understanding in this field could lead to better therapies for diseases including cancer.

    Story Source:
    Materials provided by University of Sheffield. Note: Content may be edited for style and length.

    Know when to unfold 'em: Applying particle physics methods to quantum computing

    Borrowing a page from high-energy physics and astronomy textbooks, a team of physicists and computer scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has successfully adapted and applied a common error-reduction technique to the field of quantum computing.
    In the world of subatomic particles and giant particle detectors, and distant galaxies and giant telescopes, scientists have learned to live, and to work, with uncertainty. They are often trying to tease out ultra-rare particle interactions from a massive tangle of other particle interactions and background “noise” that can complicate their hunt, or trying to filter out the effects of atmospheric distortions and interstellar dust to improve the resolution of astronomical imaging.
    Also, inherent problems with detectors, such as with their ability to record all particle interactions or to exactly measure particles’ energies, can result in data getting misread by the electronics they are connected to, so scientists need to design complex filters, in the form of computer algorithms, to reduce the margin of error and return the most accurate results.
    The problems of noise and physical defects, and the need for error-correction and error-mitigation algorithms, which reduce the frequency and severity of errors, are also common in the fledgling field of quantum computing, and a study published in the journal npj Quantum Information found that there appear to be some common solutions, too.
    Ben Nachman, a Berkeley Lab physicist who is involved with particle physics experiments at CERN as a member of Berkeley Lab’s ATLAS group, saw the quantum-computing connection while working on a particle physics calculation with Christian Bauer, a Berkeley Lab theoretical physicist who is a co-author of the study. ATLAS is one of the four giant particle detectors at CERN’s Large Hadron Collider, the largest and most powerful particle collider in the world.
    “At ATLAS, we often have to ‘unfold,’ or correct for detector effects,” said Nachman, the study’s lead author. “People have been developing this technique for years.”
    In experiments at the LHC, particles called protons collide at a rate of about 1 billion times per second. To cope with this incredibly busy, “noisy” environment and intrinsic problems related to the energy resolution and other factors associated with detectors, physicists use error-correcting “unfolding” techniques and other filters to winnow down this particle jumble to the most useful, accurate data.

    “We realized that current quantum computers are very noisy, too,” Nachman said, so finding a way to reduce this noise and minimize errors — error mitigation — is a key to advancing quantum computing. “One kind of error is related to the actual operations you do, and one relates to reading out the state of the quantum computer,” he noted — that first kind is known as a gate error, and the latter is called a readout error.
    The latest study focuses on a technique to reduce readout errors, called “iterative Bayesian unfolding” (IBU), which is familiar to the high-energy physics community. The study compares the effectiveness of this approach to other error-correction and mitigation techniques. The IBU method is based on Bayes’ theorem, which provides a mathematical way to find the probability of an event occurring when there are other conditions related to this event that are already known.
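    The IBU idea is straightforward to sketch in code. Below is a minimal, illustrative implementation of iterative Bayesian unfolding for a single qubit's readout; the response matrix and measured frequencies are invented numbers for illustration, not values from the study:

```python
import numpy as np

# Hypothetical response matrix R for one qubit (numbers invented):
# R[i, j] = probability of *reading* outcome i given the *true* state j.
R = np.array([[0.95, 0.08],
              [0.05, 0.92]])

# Observed outcome frequencies after many shots (also invented)
measured = np.array([0.60, 0.40])

def ibu(R, measured, n_iter=10):
    """Iterative Bayesian unfolding: start from a uniform prior over the
    true states and repeatedly apply Bayes' theorem to refine it."""
    t = np.full(len(measured), 1.0 / len(measured))  # uniform prior
    for _ in range(n_iter):
        folded = R @ t                       # what we'd expect to measure
        t = t * (R.T @ (measured / folded))  # Bayes update toward the data
    return t

truth_estimate = ibu(R, measured)
print(truth_estimate)  # estimate of the true-state probabilities
```

    Each iteration folds the current estimate through the detector response and reweights it by how far the prediction misses the data; the estimate stays non-negative and normalized by construction, which is one reason the method behaves well in noisy settings.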
    Nachman noted that this technique can be applied to the quantum analog of classical computers, known as universal gate-based quantum computers.
    In quantum computing, which relies on quantum bits, or qubits, to carry information, the fragile state known as quantum superposition is difficult to maintain and can decay over time, causing a qubit to display a zero instead of a one — this is a common example of a readout error.
    Superposition provides that a quantum bit can represent a zero, a one, or both quantities at the same time. This enables unique computing capabilities not possible in conventional computing, which rely on bits representing either a one or a zero, but not both at once. Another source of readout error in quantum computers is simply a faulty measurement of a qubit’s state due to the architecture of the computer.
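    A readout error of this kind is easy to simulate: prepare a qubit in state 1 and misread it as 0 with some fixed probability. The 5% flip rate and shot count here are arbitrary illustrative values:

```python
import random

random.seed(0)  # deterministic for reproducibility

p_flip = 0.05   # illustrative probability of misreading 1 as 0
shots = 10_000  # number of repeated measurements

# The qubit is truly in state 1; each shot flips to 0 with probability p_flip
reads = [0 if random.random() < p_flip else 1 for _ in range(shots)]
observed_ones = sum(reads) / shots
print(f"true state: 1, observed fraction of 1s: {observed_ones:.3f}")
```

    Averaged over many shots, the observed fraction of 1s lands near 0.95 rather than 1.0; characterizing this per-qubit bias is exactly what a response (calibration) matrix records.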

    In the study, researchers simulated a quantum computer to compare the performance of three different error-correction (or error-mitigation or unfolding) techniques. They found that the IBU method is more robust in a very noisy, error-prone environment, and slightly outperformed the other two in the presence of more common noise patterns. Its performance was compared to an error-correction method called Ignis that is part of a collection of open-source quantum-computing software development tools developed for IBM’s quantum computers, and a very basic form of unfolding known as the matrix inversion method.
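    The matrix inversion baseline mentioned above can be sketched just as briefly, using the same kind of invented single-qubit calibration numbers:

```python
import numpy as np

# Invented response matrix: R[i, j] = Prob(read i | true state j)
R = np.array([[0.95, 0.08],
              [0.05, 0.92]])
measured = np.array([0.60, 0.40])

# Matrix inversion unfolding: solve R @ corrected = measured directly.
# Exact for clean data, but with noisy counts it can return unphysical
# negative "probabilities," which iterative approaches avoid.
corrected = np.linalg.solve(R, measured)
print(corrected)  # roughly [0.598, 0.402]
```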
    The researchers used the simulated quantum-computing environment to produce more than 1,000 pseudo-experiments, and they found that the results for the IBU method were the closest to predictions. The noise models used for this analysis were measured on a 20-qubit quantum computer called IBM Q Johannesburg.
    “We took a very common technique from high-energy physics, and applied it to quantum computing, and it worked really well — as it should,” Nachman said. There was a steep learning curve. “I had to learn all sorts of things about quantum computing to be sure I knew how to translate this and to implement it on a quantum computer.”
    He said he was also very fortunate to find collaborators for the study with expertise in quantum computing at Berkeley Lab, including Bert de Jong, who leads a DOE Office of Advanced Scientific Computing Research Quantum Algorithms Team and an Accelerated Research for Quantum Computing project in Berkeley Lab’s Computational Research Division.
    “It’s exciting to see how the plethora of knowledge the high-energy physics community has developed to get the most out of noisy experiments can be used to get more out of noisy quantum computers,” de Jong said.
    The simulated and real quantum computers used in the study varied from five qubits to 20 qubits, and the technique should be scalable to larger systems, Nachman said. But the error-correction and error-mitigation techniques that the researchers tested will require more computing resources as the size of quantum computers increases, so Nachman said the team is focused on how to make the methods more manageable for quantum computers with larger qubit arrays.
    Nachman, Bauer, and de Jong also participated in an earlier study that proposes a way to reduce gate errors, which is the other major source of quantum-computing errors. They believe that error correction and error mitigation in quantum computing may ultimately require a mix-and-match approach — using a combination of several techniques.
    “It’s an exciting time,” Nachman said, as the field of quantum computing is still young and there is plenty of room for innovation. “People have at least gotten the message about these types of approaches, and there is still room for progress.” He noted that quantum computing provided a “push to think about problems in a new way,” adding, “It has opened up new science potential.”

    Nervous systems of insects inspire efficient future AI systems

    Zoologists at the University of Cologne studied the nervous systems of insects to investigate principles of biological brain computation and possible implications for machine learning and artificial intelligence. Specifically, they analysed how insects learn to associate sensory information in their environment with a food reward, and how they can recall this information later in order to solve complex tasks such as the search for food. The results suggest that the transformation of sensory information into memories in the brain can inspire future machine learning and artificial intelligence applications to solving complex tasks. The study has been published in the journal PNAS.
    Living organisms show remarkable abilities in coping with problems posed by complex and dynamic environments. They are able to generalize their experiences in order to rapidly adapt their behaviour when the environment changes. The zoologists investigated how the nervous system of the fruit fly controls its behaviour when searching for food. Using a computer model, they simulated and analysed the computations in the fruit fly’s nervous system in response to scents emanating from the food source. ‘We initially trained our model of the fly brain in exactly the same way as insects are trained in experiments. We presented a specific scent in the simulation together with a reward and a second scent without a reward. The model rapidly learns a robust representation of the rewarded scent after just a few scent presentations and is then able to find the source of this scent in a spatially complex and temporally dynamic environment,’ said computer scientist Dr Hannes Rapp, who created the model as part of his doctoral thesis at the UoC’s Institute of Zoology.
    The model created is thus capable of generalizing from its memory and applying what it has learned previously in a completely new and complex odour molecule landscape, while learning required only a very small database of training samples. ‘For our model, we exploit the special properties of biological information processing in nervous systems,’ explained Professor Dr Martin Nawrot, senior author of the study. ‘These are in particular a fast and parallel processing of sensory stimuli by means of brief nerve impulses as well as the formation of a distributed memory through the simultaneous modification of many synaptic contacts during the learning process.’ The theoretical principles underlying this model can also be used for artificial intelligence and autonomous systems. They enable an artificial agent to learn much more efficiently and to apply what it has learned in a changing environment.
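    As a loose illustration of the training protocol described above (not the authors' spiking-network model), associative learning can be sketched as reward-gated strengthening of active synapses; every size, rate, and pattern below is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 50                           # sensory channels (invented size)
scent_a = rng.random(n) > 0.5    # activation pattern for the rewarded scent
scent_b = rng.random(n) > 0.5    # pattern for the unrewarded scent

w = np.zeros(n)                  # synaptic weights onto a readout neuron
lr = 0.1                         # learning rate

# A few paired presentations, as in the simulated experiments: the reward
# signal (1.0) gates the weight update; no reward (0.0) leaves it unchanged
for _ in range(5):
    w += lr * scent_a * 1.0      # scent A with reward: strengthen
    w += lr * scent_b * 0.0      # scent B without reward: no change

# After training, the rewarded scent drives a much stronger response
print(w @ scent_a, w @ scent_b)
```

    Because only channels active during rewarded presentations are strengthened, the readout responds preferentially to scent A even though the two patterns overlap, a much simplified analogue of the robust odour representation described above.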

    Story Source:
    Materials provided by University of Cologne. Note: Content may be edited for style and length.

    Two motivational artificial beings are better than one for enhancing learning

    Social rewards such as praise are known to enhance various stages of the learning process. Now, researchers from Japan have found that praise delivered by artificial beings such as robots and virtual graphics-based agents can have effects similar to praise delivered by humans, with important practical applications as social services such as education increasingly move to virtual and online platforms.
    In a study published this month in PLOS ONE, researchers from the University of Tsukuba have shown that motor task performance in participants was significantly enhanced by praise from either one or two robots or virtual agents.
    Although praise from robots and virtual agents has been found to enhance human motivation and performance during a task, whether these interactions have similar effects on offline skill consolidation, which is an essential component of the learning process, has not been investigated. Further, the various conditions associated with the delivery of praise by robots and virtual agents have not been thoroughly explored previously. The researchers at the University of Tsukuba aimed to address these questions in the present study.
    “Previous studies have shown that praise from others can positively affect offline improvements in human motor skills,” says first author Masahiro Shiomi. “However, whether praise from artificial beings can have similar effects on offline improvements has not been explored previously.”
    To examine these questions, the researchers asked participants to learn a finger-tapping task under several different conditions, which varied in terms of the timing and frequency of praise, the number of agents, and whether the agents were physically present or presented on a screen. The participants were then asked to repeat the task on the following day, and task performance was compared between the two days.
    “We found that praise led to a measurable increase in task performance, indicating increased offline consolidation of the task,” explains Professor Takamasa Iio. “Further, two agents led to significantly greater participant performance than one agent, even when the amount of praise was identical.”
    However, whether the praise was delivered by physical robots or by virtual agents did not influence the effects.
    “Our study showed that praise from artificial beings improved skill consolidation in a manner that resembled praise delivered by humans,” says first author Masahiro Shiomi. “Such findings may be useful for facilitating learning in children, for instance, or for exercise and rehabilitation applications.”
    Future work could consider the effects of praise delivered in different environments, for instance, in a VR environment, as well as the effects of greater numbers of agents. A greater understanding of the factors that influence the social effects of robot behavior is essential for improving the quality of human-robot interactions, which are increasingly an important element of education, services, and entertainment applications.

    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

    Next-generation computer chip with two heads

    It’s a major breakthrough in the field of electronics. Engineers at EPFL’s Laboratory of Nanoscale Electronics and Structures (LANES) have developed a next-generation circuit that allows for smaller, faster and more energy-efficient devices — which would have major benefits for artificial-intelligence systems. Their revolutionary technology is the first to use a 2D material for what’s called a logic-in-memory architecture, or a single architecture that combines logic operations with a memory function. The research team’s findings appear today in Nature.
    Until now, the energy efficiency of computer chips has been limited by the von Neumann architecture they currently use, where data processing and data storage take place in two separate units. That means data must constantly be transferred between the two units, using up a considerable amount of time and energy.
    By combining the two units into a single structure, engineers can reduce these losses. That’s the idea behind the new chip developed at EPFL, although it goes one step beyond existing logic-in-memory devices. The EPFL chip is made from MoS2, which is a 2D material consisting of a single layer that’s only three atoms thick. It’s also an excellent semiconductor. LANES engineers had already studied the specific properties of MoS2 a few years ago, finding that it is particularly well-suited to electronics applications. Now the team has taken that initial research further to create their next-generation technology.
    The EPFL chip is based on floating-gate field-effect transistors (FGFETs). The advantage of these transistors is that they can hold electric charges for long periods; they are typically used in flash memory systems for cameras, smartphones and computers. The unique electrical properties of MoS2 make it particularly sensitive to charges stored in FGFETs, which is what enabled the LANES engineers to develop circuits that work as both memory storage units and programmable transistors. By using MoS2, they were able to incorporate numerous processing functions into a single circuit and then change them as desired.
    In-depth expertise
    “This ability for circuits to perform two functions is similar to how the human brain works, where neurons are involved in both storing memories and conducting mental calculations,” says Andras Kis, the head of LANES. “Our circuit design has several advantages. It can reduce the energy loss associated with transferring data between memory units and processors, cut the amount of time needed for computing operations and shrink the amount of space required. That opens the door to devices that are smaller, more powerful and more energy efficient.”
    The LANES research team has also acquired in-depth expertise in fabricating circuits out of 2D materials. “We made our first chip ten years ago by hand,” says Kis. “But we have since developed an advanced fabrication process that lets us make 80 or more chips in a single run, with well-controlled properties.”

    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Sarah Perrin. Note: Content may be edited for style and length.

    Why big-box chains' embrace of in-store click-and-collect leaves money on the table

    Researchers from University of North Carolina-Chapel Hill and Tilburg University published a new paper in the Journal of Marketing that explores the rise of click-and-collect services and examines their most appropriate settings.
    The study, forthcoming in the Journal of Marketing, is titled “Navigating the Last Mile in Grocery Shopping: The Click and Collect Format” and is authored by Katrijn Gielens, Els Gijsbrechts, and Inge Geyskens.
    Big box stores have spent years developing technology capabilities to compete with Amazon and other digitally savvy competitors. While no one could have foreseen Covid-19, these chains’ investments in click-and-collect technology allowed them to cash in when the coronavirus pushed sales online.
    Order fulfilment is a costly and difficult challenge that must be mastered for online grocery success. Click-and-collect services help overcome this roadblock by having shoppers place their orders online and pick up the goods themselves.
    Click-and-collect services can be offered in different ways and through different formats, all of which come with vastly different levels of investment and cost structures. Not surprisingly, many retailers rushing into the click-and-collect fray are opting for the less cash- and capital-intensive options such as in-store and curbside pickup. However, not all click-and-collect formats offer the same convenience benefits to shoppers, so sales outcomes may differ widely. Retailers may thus want to contemplate how to organize these click-and-collect services in a sustainable and profitable way to safeguard the longer-run success and viability of the format. The study offers advice to retailers on whether and how to implement click-and-collect. To that end, the researchers gauged how shoppers’ online and total spending changes after they start using three different click-and-collect formats: (1) in-store, i.e., pickup at existing stores; (2) near-store, i.e., pickup at outlets adjoining stores, also known as ‘curbside’; and (3) stand-alone, i.e., pickup at free-standing locations.
    Do these click-and-collect types address the same needs? Gielens says the answer is no: “The different formats address fundamentally different shopper needs in terms of fulfillment convenience.” Fulfillment convenience touches upon three different benefits offered to shoppers:
    Access convenience: the reduction of time to, at, and from a click-and-collect location.
    Collection convenience: the reduction of physical effort to collect the order.

    Adjustment convenience: the ease with which shoppers can adjust their online orders by adding, returning, or replacing items.
    Depending on shoppers’ needs for these different convenience benefits, click-and-collect results in vastly different performance outcomes. This calls for judicious alignment of the right click-and-collect format with local-market needs.
    Overall, does click-and-collect increase shoppers’ online and total spending? The study shows that click-and-collect can be an effective means to boost online spending at the retailer. Hence, click-and-collect may indeed be the long-awaited road to online success for grocery retailers, overcoming the last-mile problems associated with home delivery. Moreover, by blending the convenience benefits of home delivery and brick-and-mortar, click-and-collect can also enhance households’ total spending at the retailer and thus constitute a profitable addition to the retailer’s channel mix.
    What is the best click-and-collect format for access-convenience-oriented markets? In markets with high access-convenience needs, such as rural markets with many weekend shoppers, both in-store and stand-alone click-and-collect do well. The time-efficient pickup of stand-alones stimulates these shoppers to spend more at the retailer online. In-store pickup, in turn, leads to positive spillovers to the retailer’s brick-and-mortar stores and, hence, an increase in total spending.
    What is the best click-and-collect format for collection-convenience-oriented markets? Stand-alone click-and-collects best serve these markets, which have a predominance of large-basket shoppers buying bulkier items. Here, the lower physical shopping effort, combined with the time savings of not having to drive to a regular store, makes stand-alones particularly appealing, resulting in the highest extra total spending at the retailer.
    What is the best click-and-collect format for adjustment-convenience-oriented markets? Stand-alone and near-store formats yield the highest total retailer sales in these markets, where larger households that shop more for perishables and buy more on impulse tend to live. While in-store pickup leads shoppers in these markets to spend more online, it also cannibalizes their brick-and-mortar purchases. Worse, it may even decrease total spending at the retailer and should therefore be avoided.
    What are the key takeaways for practitioners? Gijsbrechts explains: “We provide grocery retailers with insights on how to avoid costly mistakes when kick-starting click-and-collect. As retailers race to build click-and-collects, they are mostly opting for fulfillment within existing stores for the sake of quick, low-cost roll-out. Indeed, since in-store click-and-collect can rely on existing infrastructure and processes, it is the easiest to implement. However, the pursuit of speed without knowing which type is best in terms of demand may lead to the demise of the format.” Also, while most retailers tend to opt for one type of click-and-collect across all markets, a one-size-fits-all approach is not advisable. Instead, the impact depends on shoppers’ needs for fulfillment convenience. This study helps retailers find the right mix.

    Building a quantum network one node at a time

    Researchers at the University of Rochester and Cornell University have taken an important step toward developing a communications network that exchanges information across long distances by using photons, massless particles of light that are key elements of quantum computing and quantum communications systems.
    The research team has designed a nanoscale node made out of magnetic and semiconducting materials that could interact with other nodes, using laser light to emit and accept photons.
    The development of such a quantum network — designed to take advantage of the physical properties of light and matter characterized by quantum mechanics — promises faster, more efficient ways to communicate, compute, and detect objects and materials as compared to networks currently used for computing and communications.
    Described in the journal Nature Communications, the node consists of an array of pillars a mere 120 nanometers high. The pillars are part of a platform containing atomically thin layers of semiconductor and magnetic materials.
    The array is engineered so that each pillar serves as a location marker for a quantum state that can interact with photons and the associated photons can potentially interact with other locations across the device — and with similar arrays at other locations. This potential to connect quantum nodes across a remote network capitalizes on the concept of entanglement, a phenomenon of quantum mechanics that, at its very basic level, describes how the properties of particles are connected at the subatomic level.
    “This is the beginnings of having a kind of register, if you like, where different spatial locations can store information and interact with photons,” says Nick Vamivakas, professor of quantum optics and quantum physics at Rochester.

    Toward ‘miniaturizing a quantum computer’
    The project builds on work the Vamivakas Lab has conducted in recent years using tungsten diselenide (WSe2) in so-called Van der Waals heterostructures. That work uses layers of atomically thin materials on top of each other to create or capture single photons.
    The new device uses a novel alignment of WSe2 draped over the pillars with an underlying, highly reactive layer of chromium triiodide (CrI3). Where the atomically thin, 12-micron area layers touch, the CrI3 imparts an electric charge to the WSe2, creating a “hole” alongside each of the pillars.
    In quantum physics, a hole is characterized by the absence of an electron. Each positively charged hole also has a binary north/south magnetic property associated with it, so that each is also a nanomagnet.
    When the device is bathed in laser light, further reactions occur, turning the nanomagnets into individual optically active spin arrays that emit and interact with photons. Whereas classical information processing deals in bits that have values of either 0 or 1, spin states can encode both 0 and 1 at the same time, expanding the possibilities for information processing.

    “Being able to control hole spin orientation using an ultrathin, 12-micron-scale CrI3 layer replaces the need for external magnetic fields from gigantic magnetic coils akin to those used in MRI systems,” says lead author and graduate student Arunabh Mukherjee. “This will go a long way in miniaturizing a quantum computer based on single hole spins.”
    Still to come: Entanglement at a distance?
    Two major challenges confronted the researchers in creating the device.
    One was creating an inert environment in which to work with the highly reactive CrI3. This was where the collaboration with Cornell University came into play. “They have a lot of expertise with the chromium triiodide and since we were working with that for the first time, we coordinated with them on that aspect of it,” Vamivakas says. For example, fabrication of the CrI3 was done in nitrogen-filled glove boxes to avoid oxygen and moisture degradation.
    The other challenge was determining just the right configuration of pillars to ensure that the holes and spin valleys associated with each pillar could be properly registered to eventually link to other nodes.
    And therein lies the next major challenge: finding a way to send photons long distances through an optical fiber to other nodes, while preserving their properties of entanglement.
    “We haven’t yet engineered the device to promote that kind of behavior,” Vamivakas says. “That’s down the road.”

    Story Source:
    Materials provided by University of Rochester. Original written by Bob Marcotte. Note: Content may be edited for style and length.

    Research lays groundwork for ultra-thin, energy efficient photodetector on glass

    Though we may not always realize it, photodetectors contribute greatly to the convenience of modern life. Also known as photosensors, photodetectors convert light energy into electrical signals to complete tasks such as opening automatic sliding doors and automatically adjusting a cell phone’s screen brightness in different lighting conditions.
    A new paper, published by a team of Penn State researchers in ACS Nano, seeks to further advance photodetectors’ use by integrating the technology with durable Gorilla glass, the material used for smart phone screens that is manufactured by Corning Incorporated.
    The integration of photodetectors with Gorilla glass could lead to the commercial development of “smart glass,” or glass equipped with automatic sensing properties. Smart glass has a number of applications ranging from imaging to advanced robotics, according to the researchers.
    “There are two problems to address when attempting to manufacture and scale photodetectors on glass,” said principal investigator Saptarshi Das, assistant professor of engineering science and mechanics (ESM). “It must be done using relatively low temperatures, as the glass degrades at high temperatures, and we must ensure the photodetector can operate on glass using minimal energy.”
    To overcome the first challenge, Das, along with ESM doctoral student Joseph R. Nasr, determined that the chemical compound molybdenum disulfide was the best material to use as a coating on the glass.
    Then, Joshua Robinson, professor of materials science and engineering (MatSE) and MatSE doctoral student Nicholas Simonson used a chemical reactor at 600 degrees Celsius — a low enough temperature so as not to degrade the Gorilla glass — to fuse together the compound and glass. The next step was to turn the glass and coating into a photodetector by patterning it using a conventional electron beam lithography tool.
    “We then tested the glass using green LED lighting, which mimics a more natural lighting source unlike laser lighting, which is commonly used in similar optoelectronics research,” Nasr said.
    The ultra-thin body of the molybdenum disulfide photodetectors allows for better electrostatic control, and ensures it can operate with low power — a critical need for the smart glass technology of the future.
    “The photodetectors need to work in resource-constrained or inaccessible locations that by nature do not have access to sources of unrestricted electricity,” Das said. “Therefore, they need to rely on pre-storing their own energy in the form of wind or solar energy.”
    If developed commercially, smart glass could lead to technology advances in wide-ranging sectors of industry including in manufacturing, civil infrastructure, energy, health care, transportation and aerospace engineering, according to the researchers. The technology could be applied in biomedical imaging, security surveillance, environmental sensing, optical communication, night vision, motion detection and collision avoidance systems for autonomous vehicles and robots.
    “Smart glass on car windshields could adapt to oncoming high-beam headlights when driving at night by automatically shifting its opacity using the technology,” Robinson said. “And new Boeing 757 planes could utilize the glass on their windows for pilots and passengers to automatically dim sunlight.”

    Story Source:
    Materials provided by Penn State. Original written by Mariah Chuprinski. Note: Content may be edited for style and length.