More stories

  •

    AI technique ‘decodes’ microscope images, overcoming fundamental limit

    Atomic force microscopy, or AFM, is a widely used technique that can quantitatively map material surfaces in three dimensions, but its accuracy is limited by the size of the microscope’s probe. A new AI technique overcomes this limitation and allows microscopes to resolve material features smaller than the probe’s tip.
    The deep learning algorithm developed by researchers at the University of Illinois Urbana-Champaign is trained to remove the effects of the probe’s width from AFM images. As reported in the journal Nano Letters, the algorithm surpasses other methods in giving the first true three-dimensional surface profiles at resolutions below the width of the microscope probe tip.
    “Accurate surface height profiles are crucial to nanoelectronics development as well as scientific studies of material and biological systems, and AFM is a key technique that can measure profiles noninvasively,” said Yingjie Zhang, a U. of I. materials science & engineering professor and the project lead. “We’ve demonstrated how to be even more precise and see things that are even smaller, and we’ve shown how AI can be leveraged to overcome a seemingly insurmountable limitation.”
    Often, microscopy techniques can only provide two-dimensional images, essentially providing researchers with aerial photographs of material surfaces. AFM provides full topographical maps accurately showing the height profiles of the surface features. These three-dimensional images are obtained by moving a probe across the material’s surface and measuring its vertical deflection.
    If surface features approach the size of the probe’s tip — about 10 nanometers — then they cannot be resolved by the microscope because the probe becomes too large to “feel out” the features. Microscopists have been aware of this limitation for decades, but the U. of I. researchers are the first to give a deterministic solution.
    “We turned to AI and deep learning because we wanted to get the height profile — the exact roughness — without the inherent limitations of more conventional mathematical methods,” said Lalith Bonagiri, a graduate student in Zhang’s group and the study’s lead author.
    The researchers developed a deep learning algorithm with an encoder-decoder framework. It first “encodes” raw AFM images by decomposing them into abstract features. After the feature representation is manipulated to remove the undesired effects, it is then “decoded” back into a recognizable image.
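For intuition, the encode, manipulate, decode idea has a classical analogue in Fourier-domain deconvolution. The sketch below is illustrative only (the authors use a learned deep network, not this filter): it "encodes" an image into frequency space, suppresses a known blur there, and "decodes" back.

```python
import numpy as np

def wiener_deconvolve(image, kernel, noise_power=1e-3):
    """Encode into the frequency domain, damp the kernel's blurring
    effect there, and decode back to image space (Wiener filtering)."""
    K = np.fft.fft2(kernel, s=image.shape)
    I = np.fft.fft2(image)
    # Invert the kernel where its response is strong; damp it where
    # inversion would amplify noise.
    W = np.conj(K) / (np.abs(K) ** 2 + noise_power)
    return np.real(np.fft.ifft2(I * W))

# Demo: blur a single point with a small Gaussian kernel, then recover it.
true = np.zeros((32, 32))
true[16, 16] = 1.0
y, x = np.mgrid[-2:3, -2:3]
kernel = np.exp(-(x**2 + y**2) / 2.0)
kernel /= kernel.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(true)
                               * np.fft.fft2(kernel, s=true.shape)))
restored = wiener_deconvolve(blurred, kernel)
# The restored image peaks back at the original point location.
```

Classical filters like this need the blur to be known and linear; the appeal of the learned approach described above is that it can absorb the nonlinear, tip-dependent distortions of real AFM data.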

    To train the algorithm, the researchers generated artificial images of three-dimensional structures and simulated their AFM readouts. The algorithm was then tasked with transforming the simulated AFM images, removing the probe-size effects and extracting the underlying features.
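Simulating an AFM readout from a known surface is commonly modelled as a grey-scale dilation of the surface by the tip shape. The minimal forward model below is assumed for illustration (not the authors' exact simulation code); it shows how a narrow spike reads back artificially broadened when the probe is wider than the feature.

```python
import numpy as np

def afm_scan(surface, tip):
    """Simulate an AFM readout as a grey-scale dilation of the true
    surface by the tip shape: at each pixel the probe rests at the
    highest point of contact within its footprint."""
    h, w = surface.shape
    th, tw = tip.shape
    pad_y, pad_x = th // 2, tw // 2
    padded = np.pad(surface, ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    out = np.empty_like(surface)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + th, j:j + tw]
            out[i, j] = np.max(window + tip)
    return out

# True surface: a single narrow spike on a flat background.
surface = np.zeros((21, 21))
surface[10, 10] = 5.0

# Parabolic tip apex (height 0 at the centre, negative away from it).
y, x = np.mgrid[-2:3, -2:3]
tip = -0.5 * (x**2 + y**2).astype(float)

image = afm_scan(surface, tip)
# Pixels neighbouring the spike now read nonzero height, so the feature
# appears wider in the scan than it really is.
```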
    “We actually had to do something nonstandard to achieve this,” Bonagiri said. “The first step of typical AI image processing is to rescale the brightness and contrast of the images against some standard to simplify comparisons. In our case, though, the absolute brightness and contrast is the part that’s meaningful, so we had to forgo that first step. That made the problem much more challenging.”
    To test their algorithm, the researchers synthesized gold and palladium nanoparticles with known dimensions on a silicon host. The algorithm successfully removed the probe tip effects and correctly identified the three-dimensional features of the nanoparticles.
    “We’ve given a proof-of-concept and shown how to use AI to significantly improve AFM images, but this work is only the beginning,” Zhang said. “As with all AI algorithms, we can improve it by training it on more and better data, but the path forward is clear.”
    The experiments were carried out in the Carl R. Woese Institute for Genomic Biology and the Materials Research Laboratory at the U. of I.
    Support was provided by the National Science Foundation and the Arnold and Mabel Beckman Foundation.

  •

    New AI model could streamline operations in a robotic warehouse

    Hundreds of robots zip back and forth across the floor of a colossal robotic warehouse, grabbing items and delivering them to human workers for packing and shipping. Such warehouses are increasingly becoming part of the supply chain in many industries, from e-commerce to automotive production.
    However, getting 800 robots to and from their destinations efficiently while keeping them from crashing into each other is no easy task. It is such a complex problem that even the best path-finding algorithms struggle to keep up with the breakneck pace of e-commerce or manufacturing.
    In a sense, these robots are like cars trying to navigate a crowded city center. So, a group of MIT researchers who use AI to mitigate traffic congestion applied ideas from that domain to tackle this problem.
    They built a deep-learning model that encodes important information about the warehouse, including the robots, planned paths, tasks, and obstacles, and uses it to predict the best areas of the warehouse to decongest to improve overall efficiency.
    Their technique divides the warehouse robots into groups, so these smaller groups of robots can be decongested faster with traditional algorithms used to coordinate robots. In the end, their method decongests the robots nearly four times faster than a strong random search method.
    In addition to streamlining warehouse operations, this deep learning approach could be used in other complex planning tasks, like computer chip design or pipe routing in large buildings.
    “We devised a new neural network architecture that is actually suitable for real-time operations at the scale and complexity of these warehouses. It can encode hundreds of robots in terms of their trajectories, origins, destinations, and relationships with other robots, and it can do this in an efficient manner that reuses computation across groups of robots,” says Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE), and a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).

    Wu, senior author of a paper on this technique, is joined by lead author Zhongxia Yan, a graduate student in electrical engineering and computer science. The work will be presented at the International Conference on Learning Representations.
    Robotic Tetris
    From a bird’s eye view, the floor of a robotic e-commerce warehouse looks a bit like a fast-paced game of “Tetris.”
    When a customer order comes in, a robot travels to an area of the warehouse, grabs the shelf that holds the requested item, and delivers it to a human operator who picks and packs the item. Hundreds of robots do this simultaneously, and if two robots’ paths conflict as they cross the massive warehouse, they might crash.
    Traditional search-based algorithms avoid potential crashes by keeping one robot on its course and replanning a trajectory for the other. But with so many robots and potential collisions, the problem quickly grows exponentially.
    “Because the warehouse is operating online, the robots are replanned about every 100 milliseconds. That means that every second, a robot is replanned 10 times. So, these operations need to be very fast,” Wu says.

    Because time is so critical during replanning, the MIT researchers use machine learning to focus the replanning on the most actionable areas of congestion — where there exists the most potential to reduce the total travel time of robots.
    Wu and Yan built a neural network architecture that considers smaller groups of robots at the same time. For instance, in a warehouse with 800 robots, the network might cut the warehouse floor into smaller groups that contain 40 robots each.
    Then, it predicts which group has the most potential to improve the overall solution if a search-based solver were used to coordinate trajectories of robots in that group.
    An iterative process, the overall algorithm picks the most promising robot group with the neural network, decongests the group with the search-based solver, then picks the next most promising group with the neural network, and so on.
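The iterative loop described above can be sketched in a few lines, with a stand-in scoring function and solver in place of the authors' neural network and search-based planner:

```python
def decongest(state, groups, score_fn, solver, n_rounds=3):
    """Iteratively pick the group predicted to benefit most, then let
    the solver replan only that group's robots. score_fn and solver
    stand in for the paper's neural scorer and search-based solver."""
    for _ in range(n_rounds):
        best = max(groups, key=lambda g: score_fn(state, g))
        solver(state, best)  # replan trajectories for this group only
    return state

# Toy state, assumed for illustration: a congestion level per robot group.
congestion = {"A": 8.0, "B": 2.0, "C": 5.0}

def score_fn(state, group):
    # Stand-in for the neural network: predicted benefit = current congestion.
    return state[group]

def solver(state, group):
    # Stand-in for the search-based solver: replanning halves congestion.
    state[group] /= 2

decongest(congestion, list(congestion), score_fn, solver, n_rounds=3)
# The loop repeatedly services whichever group is currently most congested.
```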
    Considering relationships
    The neural network can reason about groups of robots efficiently because it captures complicated relationships that exist between individual robots. For example, even though one robot may be far away from another initially, their paths could still cross during their trips.
    The technique also streamlines computation by encoding constraints only once, rather than repeating the process for each subproblem. For instance, in a warehouse with 800 robots, decongesting a group of 40 robots requires holding the other 760 robots as constraints. Other approaches require reasoning about all 800 robots once per group in each iteration.
    Instead, the researchers’ approach only requires reasoning about the 800 robots once across all groups in each iteration.
    “The warehouse is one big setting, so a lot of these robot groups will have some shared aspects of the larger problem. We designed our architecture to make use of this common information,” she adds.
    They tested their technique in several simulated environments, including some set up like warehouses, some with random obstacles, and even maze-like settings that emulate building interiors.
    By identifying more effective groups to decongest, their learning-based approach decongests the warehouse up to four times faster than strong, non-learning-based approaches. Even when they factored in the additional computational overhead of running the neural network, their approach still solved the problem 3.5 times faster.
    In the future, the researchers want to derive simple, rule-based insights from their neural model, since the decisions of the neural network can be opaque and difficult to interpret. Simpler, rule-based methods could also be easier to implement and maintain in actual robotic warehouse settings.
    This work was supported by Amazon and the MIT Amazon Science Hub.

  •

    Researchers look at environmental impacts of AI tools

    As artificial intelligence (AI) is increasingly used in radiology, it is essential to consider the environmental impact of AI tools, researchers caution in a focus article published today in Radiology, a journal of the Radiological Society of North America (RSNA).
    Health care and medical imaging contribute significantly to the greenhouse gas (GHG) emissions fueling global climate change. AI tools can improve both the practice and the sustainability of radiology through optimized imaging protocols resulting in shorter scan times, improved scheduling efficiency to reduce patient travel, and the integration of decision-support tools to reduce low-value imaging. But there is a downside to AI utilization.
    “Medical imaging generates a lot of greenhouse gas emissions, but we often don’t think about the environmental impact of associated data storage and AI tools,” said Kate Hanneman, M.D., M.P.H., vice chair of research and associate professor at the University of Toronto and deputy lead of sustainability at the Joint Department of Medical Imaging, Toronto General Hospital. “The development and deployment of AI models consume large amounts of energy, and the data storage needs in medical imaging and AI are growing exponentially.”
    Dr. Hanneman and a team of researchers looked at the benefits and downsides of incorporating AI tools into radiology. AI offers the potential to improve workflows, accelerate image acquisition, reduce costs and improve the patient experience. However, the energy required to develop AI tools and store the associated data contributes significantly to GHG emissions.
    “We need to do a balancing act, bridging to the positive effects while minimizing the negative impacts,” Dr. Hanneman said. “Improving patient outcomes is our ultimate goal, but we want to do that while using less energy and generating less waste.”
    Developing AI models requires large amounts of training data that health care institutions must store along with the billions of medical images generated annually. Many health systems use cloud storage, meaning the data is stored off-site and accessed electronically when needed.
    “Even though we call it cloud storage, data are physically housed in centers that typically require large amounts of energy to power and cool,” Dr. Hanneman said. “Recent estimates suggest that the total global GHG emissions from all data centers is greater than the airline industry, which is absolutely staggering.”
    The location of a data center has a massive impact on its sustainability: centers in cooler climates or in areas with access to renewable energy sources have a much smaller footprint.

    To minimize the overall environmental impact of data storage, the researchers recommended sharing resources and, where possible, collaborating with other providers and partners to distribute the expended energy more broadly.
    To decrease GHG emissions from data storage and the AI model development process, the researchers also offered other suggestions. These included exploring computationally efficient AI algorithms, selecting hardware that requires less energy, using data compression techniques, removing redundant data, implementing tiered storage systems and partnering with providers that use renewable energy.
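As a toy illustration of one of these levers, lossless compression of a highly structured image already shrinks storage substantially. The synthetic gradient below compresses far better than real, noisier medical scans, so treat the ratio as an upper-bound illustration, not a clinical figure.

```python
import gzip

import numpy as np

# Synthetic stand-in for stored imaging data: a smooth 256x256 gradient.
image = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
raw = image.tobytes()
compressed = gzip.compress(raw)
ratio = len(raw) / len(compressed)
# Tiered storage, another suggestion from the article, pairs naturally
# with this: keep recent studies on fast storage and push compressed,
# rarely accessed ones to cheaper, lower-energy cold tiers.
```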
    “Departments that manage their cloud storage can take immediate action by choosing a sustainable partner,” she said.
    Dr. Hanneman said although challenges and knowledge gaps remain, including limited data on radiology specific GHG emissions, resource constraints and complex regulations, she hopes sustainability will become a quality metric in the decision-making process around AI and radiology.
    “Environmental costs should be considered along with financial costs in health care and medical imaging,” she said. “I believe AI can help us improve sustainability if we apply the tools judiciously. We just need to be mindful and aware of its energy usage and GHG emissions.”

  •

    Diamonds are a chip’s best friend

    Researchers led by Kyoto University have determined the magnitude of the spin-orbit interaction in acceptor-bound excitons in a semiconductor. By directly observing the fine structure of bound excitons in boron-doped blue diamond with optical absorption, they broke through the energy resolution limit of conventional luminescence measurements.
    Besides being “a girl’s best friend,” diamonds have broad industrial applications, such as in solid-state electronics. New technologies aim to produce high-purity synthetic crystals that become excellent semiconductors when doped with impurities as electron donors or acceptors of other elements.
    These extra electrons — or holes — do not participate in atomic bonding but sometimes bind to excitons — quasi-particles consisting of an electron and an electron hole — in semiconductors and other condensed matter. Doping may cause physical changes, but how the exciton complex — a bound state of two positively-charged holes and one negatively-charged electron — manifests in diamonds doped with boron has remained unconfirmed. Two conflicting interpretations exist of the exciton’s structure.
    An international team of researchers led by Kyoto University has now determined the magnitude of the spin-orbit interaction in acceptor-bound excitons in a semiconductor.
    “We broke through the energy resolution limit of conventional luminescence measurements by directly observing the fine structure of bound excitons in boron-doped blue diamond, using optical absorption,” says team leader Nobuko Naka of KyotoU’s Graduate School of Science.
    “We hypothesized that, in an exciton, two positively charged holes are more strongly bound than an electron-and-hole pair,” adds first author Shinya Takahashi. “This acceptor-bound exciton structure yielded two triplets separated by a spin-orbit splitting of 14.3 meV, supporting the hypothesis.”
    Luminescence resulting from thermal excitation can be used to observe high-energy states, but this measurement method broadens spectral lines and blurs ultra-fine splitting.
    Instead, Naka’s team cooled the diamond crystal to cryogenic temperatures, obtaining nine peaks on the deep-ultraviolet absorption spectrum, compared to the usual four using luminescence. In addition, the researchers developed an analytical model including the spin-orbit effect to predict the energy positions and absorption intensities.
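A quick back-of-the-envelope comparison shows why cryogenic cooling matters here: thermal energy kT at room temperature is comparable to the reported 14.3 meV splitting, while at cryogenic temperatures it is far smaller. The temperatures below are illustrative choices, not figures from the paper.

```python
K_B = 8.617e-5  # Boltzmann constant in eV/K
SPLITTING_MEV = 14.3  # spin-orbit splitting reported in the study

def kT_meV(temperature_K):
    """Thermal energy scale kT, in meV."""
    return K_B * temperature_K * 1000.0

room = kT_meV(300)  # ~25.9 meV: comparable to the splitting, so thermal
                    # broadening can wash out the fine structure
cryo = kT_meV(10)   # ~0.86 meV: far below 14.3 meV, so the two triplets
                    # separated by the spin-orbit splitting can resolve
```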
    “In future studies, we are considering the possibility of measuring absorption under external fields, leading to further line splitting and validation due to changes in symmetry,” says Université Paris-Saclay’s Julien Barjon.
    “Our results provide useful insights into spin-orbit interactions in systems beyond solid-state materials, such as atomic and nuclear physics. A deeper understanding of materials may improve the performance of diamond devices, such as light-emitting diodes, quantum emitters, and radiation detectors,” notes Naka.

  •

    Pythagoras was wrong: there are no universal musical harmonies, new study finds

    The tone and tuning of musical instruments have the power to manipulate our appreciation of harmony, new research shows. The findings challenge centuries of Western music theory and encourage greater experimentation with instruments from different cultures.
    According to the Ancient Greek philosopher Pythagoras, ‘consonance’ — a pleasant-sounding combination of notes — is produced by special relationships between simple numbers such as 3 and 4. More recently, scholars have tried to find psychological explanations, but these ‘integer ratios’ are still credited with making a chord sound beautiful, and deviation from them is thought to make music ‘dissonant’, or unpleasant-sounding.
    But researchers from Cambridge University, Princeton and the Max Planck Institute for Empirical Aesthetics have now discovered two key ways in which Pythagoras was wrong.
    Their study, published in Nature Communications, shows that in normal listening contexts, we do not actually prefer chords to be perfectly in these mathematical ratios.
    “We prefer slight amounts of deviation. We like a little imperfection because this gives life to the sounds, and that is attractive to us,” said co-author, Dr Peter Harrison, from Cambridge University’s Faculty of Music and Director of its Centre for Music and Science.
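Pythagoras's integer ratios can be put into numbers with the standard cents measure of interval size (1200 cents to the octave). Even the familiar equal-tempered piano fifth already deviates slightly from the 'pure' 3:2 ratio, the kind of small imperfection the study found listeners prefer.

```python
import math

def cents(ratio):
    """Interval size in cents: 1200 cents = one octave."""
    return 1200.0 * math.log2(ratio)

pure_fifth = cents(3 / 2)              # Pythagorean 3:2 fifth, ~701.96 cents
tempered_fifth = cents(2 ** (7 / 12))  # equal-tempered fifth, 700 cents
deviation = pure_fifth - tempered_fifth  # ~1.96 cents of "imperfection"
```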
    The researchers also found that the role played by these mathematical relationships disappears when you consider certain musical instruments that are less familiar to Western musicians, audiences and scholars. These instruments tend to be bells, gongs, types of xylophones and other kinds of pitched percussion instruments. In particular, they studied the ‘bonang’, an instrument from the Javanese gamelan built from a collection of small gongs.
    “When we use instruments like the bonang, Pythagoras’s special numbers go out the window and we encounter entirely new patterns of consonance and dissonance,” Dr Harrison said.

    “The shape of some percussion instruments means that when you hit them, and they resonate, their frequency components don’t respect those traditional mathematical relationships. That’s when we find interesting things happening.”
    “Western research has focused so much on familiar orchestral instruments, but other musical cultures use instruments that, because of their shape and physics, are what we would call ‘inharmonic’.”
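The contrast between harmonic and inharmonic spectra can be sketched with two idealised physical models: a string, whose partials are integer multiples of the fundamental, and a free bar (a rough stand-in for pitched percussion), whose partials approximately follow odd-integer-squared spacing. The bar model below is a textbook idealisation, not taken from the study.

```python
def string_partials(f0, n=4):
    """Partials of an ideal string: integer multiples of the fundamental."""
    return [f0 * k for k in range(1, n + 1)]

def free_bar_partials(f0, n=4):
    """Approximate partials of a free-free bar: transverse modes scale
    roughly with (2k + 1)^2, giving inharmonic ratios ~1 : 2.78 : 5.44 : 9."""
    weights = [(2 * k + 1) ** 2 for k in range(1, n + 1)]
    return [f0 * w / weights[0] for w in weights]

harmonic = string_partials(440.0)      # 440, 880, 1320, 1760 Hz
inharmonic = free_bar_partials(440.0)  # frequencies that don't line up
                                       # with integer ratios at all
```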
    The researchers created an online laboratory in which over 4,000 people from the US and South Korea participated in 23 behavioural experiments. Participants were played chords and invited to give each a numeric pleasantness rating or to use a slider to adjust particular notes in a chord to make it sound more pleasant. The experiments produced over 235,000 human judgments.
    The experiments explored musical chords from different perspectives. Some zoomed in on particular musical intervals and asked participants to judge whether they preferred them perfectly tuned, slightly sharp or slightly flat. The researchers were surprised to find a significant preference for slight imperfection, or ‘inharmonicity’. Other experiments explored harmony perception with Western and non-Western musical instruments, including the bonang.
    Instinctive appreciation of new kinds of harmony
    The researchers found that the bonang’s consonances mapped neatly onto the particular musical scale used in the Indonesian culture from which it comes. These consonances cannot be replicated on a Western piano, for instance, because they would fall between the cracks of the scale traditionally used.

    “Our findings challenge the traditional idea that harmony can only be one way, that chords have to reflect these mathematical relationships. We show that there are many more kinds of harmony out there, and that there are good reasons why other cultures developed them,” Dr Harrison said.
    Importantly, the study suggests that its participants — not trained musicians and unfamiliar with Javanese music — were able to appreciate the new consonances of the bonang’s tones instinctively.
    “Music creation is all about exploring the creative possibilities of a given set of qualities, for example, finding out what kinds of melodies you can play on a flute, or what kinds of sounds you can make with your mouth,” Harrison said.
    “Our findings suggest that if you use different instruments, you can unlock a whole new harmonic language that people intuitively appreciate, they don’t need to study it to appreciate it. A lot of experimental music in the last 100 years of Western classical music has been quite hard for listeners because it involves highly abstract structures that are hard to enjoy. In contrast, psychological findings like ours can help stimulate new music that listeners intuitively enjoy.”
    Exciting opportunities for musicians and producers
    Dr Harrison hopes that the research will encourage musicians to try out unfamiliar instruments and see if they offer new harmonies and open up new creative possibilities.
    “Quite a lot of pop music now tries to marry Western harmony with local melodies from the Middle East, India, and other parts of the world. That can be more or less successful, but one problem is that notes can sound dissonant if you play them with Western instruments.
    “Musicians and producers might be able to make that marriage work better if they took account of our findings and considered changing the ‘timbre’, the tone quality, by using specially chosen real or synthesised instruments. Then they really might get the best of both worlds: harmony and local scale systems.”
    Harrison and his collaborators are exploring different kinds of instruments and follow-up studies to test a broader range of cultures. In particular, they would like to gain insights from musicians who use ‘inharmonic’ instruments to understand whether they have internalised concepts of harmony different from those of the Western participants in this study.

  •

    Maths: Smart learning software helps children during lockdowns — and beyond

    Intelligent tutoring systems for maths problems helped pupils maintain or even improve their performance during the coronavirus pandemic. This is the conclusion of a new study led by the Martin Luther University Halle-Wittenberg (MLU) and Loughborough University in the UK. As part of their work, the researchers analysed data from five million exercises completed by around 2,700 pupils in Germany over a period of five years. The study found that lower-performing children in particular benefit if they use the software regularly. The paper was published in the journal Computers and Education Open.
    Intelligent tutoring systems are digital learning platforms that children can use to complete maths problems. “The advantage of those rapid learning aids is that pupils receive immediate feedback after they submit their solution. If a solution is incorrect, the system will provide further information about the pupil’s mistake. If certain errors are repeated, the system recognises a deficit and provides further problem sets that address the issue,” explains Assistant Professor Dr Markus Spitzer, a psychologist at MLU. Teachers could also use the software to discover possible knowledge gaps in their classes and adapt their lessons accordingly.
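The feedback loop Spitzer describes can be sketched in a few lines. The class and its fields below are hypothetical stand-ins for illustration, not Bettermarks' actual API.

```python
from collections import Counter

class TutorSketch:
    """Minimal sketch of an intelligent tutoring loop: check an answer,
    give immediate feedback, and queue remedial problems once the same
    error type repeats (all names here are hypothetical)."""

    def __init__(self, repeat_threshold=2):
        self.errors = Counter()
        self.threshold = repeat_threshold
        self.remedial_queue = []

    def submit(self, problem, answer):
        if answer == problem["solution"]:
            return "correct"
        error_type = problem["error_type"]  # e.g. "carrying", "fractions"
        self.errors[error_type] += 1
        if self.errors[error_type] >= self.threshold:
            # Repeated error: flag a deficit and schedule targeted practice.
            self.remedial_queue.append(error_type)
        return f"incorrect: review {error_type}"

tutor = TutorSketch()
problem = {"solution": 12, "error_type": "carrying"}
first = tutor.submit(problem, 11)   # immediate feedback on the mistake
second = tutor.submit(problem, 13)  # second miss triggers remedial work
```

A teacher-facing view of `tutor.errors` is the kind of class-level gap report the article mentions.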
    For the new study, Spitzer and his colleague Professor Korbinian Moeller from Loughborough University used data from “Bettermarks,” a large commercial provider of such tutoring systems in Germany. The team analysed the performance of pupils before, during and after the first two coronavirus lockdowns. Their analysis included data from around 2,700 children who solved more than five million problems. The data was collected between January 2017 and the end of May 2021. “This longer timeframe allowed us to observe the pupils’ performance trajectories over several years and analyse them in a wider context,” says Spitzer.
    The pupils’ performance was shown to remain constant throughout the period. “The fact that their performance didn’t drop during the lockdowns is a win in and of itself. But our analysis also shows that lower-performing children even managed to narrow the gap between themselves and higher-achieving pupils,” Spitzer concludes.
    According to the psychologist, intelligent tutoring systems are a useful addition to conventional maths lessons. “The use of tutoring systems varies greatly from state to state. However, our study suggests that their use should be expanded across the board,” explains Spitzer. The systems could also help during future school closures, for example in the event of extreme weather conditions, transport strikes or similar events.

  •

    Visual prosthesis simulator offers a glimpse into the future

    In collaboration with their colleagues at the Donders Institute, researchers at the Netherlands Institute for Neuroscience have developed a simulator that generates artificial visual percepts for research into visual prostheses. This open-source tool is available to researchers and offers anyone who is interested insight into the technology’s future applications.
    Blindness affects approximately forty million people worldwide and is expected to become increasingly common in the coming years. Patients with a damaged visual system can be broadly divided into two groups: those in whom the damage is located in front of or in the photoreceptors of the retina; and those in whom the damage is further along in the visual system. Various retinal prostheses have been developed for the first group of patients in recent years and clinical tests are underway. The problems for the second group are more difficult to tackle.
    A potential solution for these patients is to stimulate the cerebral cortex. By implanting electrodes in the brain’s visual cortex and stimulating the surrounding tissue with weak electrical currents, tiny points of light known as ‘phosphenes’ can be generated. This prosthesis converts camera input into electrical stimulation of the cerebral cortex. In doing so, it bypasses part of the affected visual system and thus allows some form of vision. You could compare it to a matrix sign along the highway, where individual lights form a combined image.
    How we can ensure that such an implant can actually be used to navigate the street or read texts remains an important question. Maureen van der Grinten and Antonia Lozano, from Pieter Roelfsema’s group, along with colleagues from the Donders Institute, are members of a large European consortium. This consortium is working on a prosthesis that targets the visual cerebral cortex. Maureen van der Grinten emphasizes: “At the moment there is a discrepancy between the number of electrodes we can implant in people and the functionalities we would like to test. The hardware simply isn’t advanced enough yet. To bridge this gap, the process is often imitated through a simulation.”
    Simulated Phosphene Vision
    “Instead of waiting until blind people have received implants, we’re trying to simulate the situation based on the knowledge we have. We can use that as a basis to see how many points of light people need to find a door, for example. We call this ‘simulated phosphene vision’. So far this has only been tested with simple shapes: 200 light points that are neatly oriented, rectangular pixels of equal size on a screen. People can test this with VR glasses, which is very useful, but does not correspond to the actual vision of blind people with a prosthesis.”
    “To make our simulation more realistic, we collected a whole load of literature, created and validated models and looked at the extent to which the results correspond to the effects that people reported. It turns out that the dots vary greatly in shape and size depending on the parameters used in the stimulation. You can imagine that if you increase the current, the stimulation in the brain will spread further, hit more neurons and therefore provide a larger bright spot. The location of the electrode also determines the size of the dots. By influencing the various parameters, we looked at how this actually changes what people see.”
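A toy version of simulated phosphene vision can be written by sampling an image on an electrode grid and rendering one Gaussian light dot per electrode, with dot size growing with stimulation current. The scaling used here is an assumption for illustration, not the validated model from the paper.

```python
import numpy as np

def render_phosphenes(image, n_grid=10, current=1.0, base_sigma=1.0):
    """Toy phosphene renderer: sample the image on an electrode grid and
    draw one Gaussian dot per active electrode. Dot size grows with
    current, echoing the spread of stimulation described above."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    ys = np.linspace(0, h - 1, n_grid).astype(int)
    xs = np.linspace(0, w - 1, n_grid).astype(int)
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = base_sigma * current  # assumed: stronger current, larger spot
    for cy in ys:
        for cx in xs:
            brightness = image[cy, cx]
            if brightness > 0:
                out += brightness * np.exp(
                    -((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return out

# A bright vertical bar (think: a door edge) seen through a 10x10 grid.
scene = np.zeros((40, 40))
scene[:, 18:22] = 1.0
percept = render_phosphenes(scene, n_grid=10, current=1.0)
# The percept is a sparse column of dots: enough to locate the edge,
# nowhere near enough detail for a facial expression.
```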
    Publicly Accessible
    “The simulator is currently being used for research in Nijmegen, where they are investigating the impact of eye movements. With this article we hope to offer other researchers the opportunity to use our simulation as well. We would like to emphasize that the simulator is publicly accessible to everyone, with the flexibility to make adjustments where necessary. It is even possible to optimize the simulation using AI, which can assist you in identifying the necessary stimulation for a specific image.”
    “We are now also using the simulator to give people an idea of where this research could go and what to expect when the first treatments are carried out in a few years. Using VR glasses we can simulate the current situation with 100 electrodes, which also highlights how limited vision through a prosthesis is: they may be able to find a door, but won’t have the ability to recognize facial expressions. Alternatively, we can show a situation with tens of thousands of electrodes and what that will bring us when this technology is developed far enough.”

  •

    Researchers use Hawk supercomputer and lean into imperfection to improve solar cell efficiency

    Since the turn of the century, Germany has made major strides in solar energy production. In 2000, the country generated less than one percent of its electricity with solar power, but by 2022, that figure had risen to roughly 11 percent. A combination of lucrative subsidies for homeowners and technological advances to bring down the costs of solar panels helped drive this growth.
    With global conflicts making oil and natural gas markets less reliable, solar power stands to play an even larger role in helping meet Germany’s energy needs in the years to come. While solar technology has come a long way in the last quarter century, the solar cells in contemporary solar panels still only operate at about 22 percent efficiency on average.
    In the interest of improving solar cell efficiency, a research team led by Prof. Wolf Gero Schmidt at the University of Paderborn has been using high-performance computing (HPC) resources at the High-Performance Computing Center Stuttgart (HLRS) to study how these cells convert light to electricity. Recently, the team has been using HLRS’s Hawk supercomputer to determine how designing certain strategic impurities in solar cells could improve performance.
    “Our motivation on this is two-fold: at our institute in Paderborn, we have been working for quite some time on a methodology to describe microscopically the dynamics of optically excited materials, and we have published a number of pioneering papers about that topic in recent years,” Schmidt said. “But recently, we got a question from collaborators at the Helmholtz Zentrum Berlin who were asking us to help them understand at a fundamental level how these cells work, so we decided to use our method and see what we could do.”
    Recently, the team used Hawk to simulate how excitons — a pairing of an optically excited electron and the electron “hole” it leaves behind — can be controlled and moved within solar cells so more energy is captured. In its research, the team made a surprising discovery: certain defects in the system, introduced strategically, would improve exciton transfer rather than impede it. The team published its results in Physical Review Letters.
    Designing solar cells for more efficient energy conversion
    Most solar cells, much like many modern electronics, are primarily made of silicon. After oxygen, it is the second most abundant chemical element on Earth by mass: around 15 percent of the planet as a whole consists of silicon, and it makes up 25.8 percent of the Earth’s crust. The basic material for climate-friendly energy production is therefore abundant and available almost everywhere.

    However, this material does have certain drawbacks for capturing solar radiation and converting it into electricity. In traditional, silicon-based solar cells, light particles, called photons, transfer their energy to available electrons in the solar cell. The cell then uses those excited electrons to create an electrical current.
    The problem? High-energy photons provide far more energy than silicon can transform into electricity. Violet light photons, for instance, carry about three electron volts (eV) of energy, but silicon can convert only about 1.1 eV of that into electricity. The rest is lost as heat, which both wastes energy that could otherwise be captured and degrades solar cell performance and durability.
    In recent years, scientists have started to look for ways to reroute or otherwise capture some of that excess energy. While several methods are being investigated, Schmidt’s team has focused on using a molecule-thin layer of tetracene, an organic semiconductor, as the top layer of a solar cell.
    Unlike silicon, when tetracene absorbs a high-energy photon, it splits the resulting exciton into two lower-energy excitations in a process known as singlet fission. By placing a carefully designed interface layer between tetracene and silicon, the resulting low-energy excitons can be transferred from tetracene into silicon, where most of their energy can be converted into electricity.
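    The energy accounting behind this idea can be sketched as back-of-the-envelope arithmetic. The numbers below are the approximate figures quoted above (a ~3 eV violet photon, ~1.1 eV usable per excitation in silicon), not results from the Paderborn simulations:

```python
# Back-of-the-envelope energy budget for a violet photon, with and
# without singlet fission. Values are the approximate figures quoted
# in the article, not computed results from the research team.

PHOTON_EV = 3.0        # energy of a violet-light photon (approx.)
SILICON_GAP_EV = 1.1   # usable energy per excitation in silicon (approx.)

# Conventional silicon cell: one photon -> one excitation.
usable_direct = SILICON_GAP_EV
lost_direct = PHOTON_EV - usable_direct   # dissipated as heat

# Singlet fission in tetracene: one high-energy exciton -> two
# lower-energy excitations, both transferred into silicon.
usable_fission = 2 * SILICON_GAP_EV
lost_fission = PHOTON_EV - usable_fission

print(f"Direct absorption:    {usable_direct:.1f} eV used, {lost_direct:.1f} eV lost as heat")
print(f"With singlet fission: {usable_fission:.1f} eV used, {lost_fission:.1f} eV lost as heat")
```

    Even in this idealized picture, fission roughly doubles the usable energy per high-energy photon, which is where the headroom for efficiency gains comes from.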
    Utility in imperfection
    Whether using tetracene or another material to augment traditional solar cells, researchers have focused on trying to design the perfect interface between constituent parts of a solar cell to provide the best-possible conditions for exciton transfer.

    Schmidt and his team use ab initio molecular dynamics (AIMD) simulations to study how particles interact and move within a solar cell. With access to Hawk, the team is able to do computationally expensive calculations to observe how several hundred atoms and their electrons interact with one another. The team uses AIMD simulations to advance time at femtosecond intervals to understand how electrons interact with electron holes and other atoms in the system. Much like other researchers, the team sought to use its computational method to identify imperfections in the system and look for ways to improve on it.
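    A real AIMD run is far beyond a short snippet — the forces come from a quantum-mechanical electronic-structure calculation at every step — but the femtosecond time stepping it rests on can be illustrated with a classical toy model. Everything below (the Hooke's-law force, the spring constant, the step count) is an illustrative assumption, not the team's actual setup:

```python
# Toy velocity-Verlet integration with a femtosecond timestep, the same
# stepping scheme molecular dynamics codes use. A harmonic spring stands
# in for the expensive quantum-mechanical forces of real AIMD.

FS = 1e-15  # one femtosecond, in seconds

def velocity_verlet(x, v, mass, force, dt, steps):
    """Advance position x and velocity v through `steps` intervals of dt."""
    a = force(x) / mass
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # update position
        a_new = force(x) / mass           # force at the new position
        v += 0.5 * (a + a_new) * dt       # update velocity with averaged acceleration
        a = a_new
    return x, v

# Illustrative numbers: a silicon-like atomic mass on a stiff spring.
mass = 4.66e-26             # kg, roughly one silicon atom
k = 50.0                    # N/m, assumed spring constant
spring = lambda x: -k * x   # Hooke's-law restoring force

# 1000 femtosecond steps = 1 picosecond of simulated motion.
x, v = velocity_verlet(x=1e-11, v=0.0, mass=mass, force=spring, dt=1 * FS, steps=1000)
print(f"position after 1 ps: {x:.3e} m")
```

    In an actual AIMD code, the `spring` stand-in is replaced by forces derived from the electronic structure, recomputed at every femtosecond step — which is why several hundred atoms already demand a machine like Hawk.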
    In search of the perfect interface, they found a surprise: an imperfect interface can be better for exciton transfer. In an atomic system, atoms that are not fully saturated, meaning they are not completely bonded to other atoms, have so-called “dangling bonds.” Researchers normally assume dangling bonds lead to inefficiencies in electronic interfaces, but in its AIMD simulations, the team found that silicon dangling bonds actually fostered additional exciton transfer across the interface.
    “Defect always implies that there is some unwanted thing in a system, but that is not really true in our case,” said Prof. Uwe Gerstmann, a University of Paderborn professor and collaborator on the project. “In semiconductor physics, we have already strategically used defects that we call donors or acceptors, which help us build diodes and transistors. So strategically, defects can certainly help us build up new kinds of technologies.”
    Dr. Marvin Krenz, a postdoctoral researcher at the University of Paderborn and lead author on the team’s paper, pointed out how the findings cut against the prevailing direction of solar cell research. “It is an interesting point for us that the current direction of the research was going toward designing ever-more perfect interfaces and to remove defects at all costs. Our paper might be interesting for the larger research community because it points out a different way to go when it comes to designing these systems,” he said.
    Armed with this new insight, the team now plans to use future computing allocations to design interfaces that are perfectly imperfect, so to speak. Knowing that silicon dangling bonds can help foster exciton transfer, the team wants to use AIMD to reliably design an interface that exploits them. For the team, the goal is not to design the perfect solar cell overnight, but to make each subsequent generation of solar technology better.
    “I feel confident that we will continue to gradually improve solar cell efficiency over time,” Schmidt said. “Over the last few decades, we have seen an average annual increase in efficiency of around 1% across the various solar cell architectures. Work such as the one we have carried out here suggests that further increases can be expected in the future. In principle, an increase in efficiency by a factor of 1.4 is possible through the consistent utilization of singlet fission.”
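    Taking the figures in the article at face value, that factor of 1.4 can be put in concrete terms. This is simple arithmetic on the quoted numbers, not a prediction:

```python
# What a 1.4x improvement would mean for today's cells, using only the
# figures quoted in the article: ~22% average efficiency and a theoretical
# factor-of-1.4 gain from consistent use of singlet fission.

current_efficiency = 0.22   # average efficiency of contemporary cells
fission_factor = 1.4        # theoretical headroom from singlet fission

potential = current_efficiency * fission_factor
print(f"Potential efficiency: {potential:.0%}")  # about 31%
```

    In other words, the quoted headroom would lift an average cell from roughly 22 percent to roughly 31 percent efficiency.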