More stories

  • Strengthening electron-triggered light emission

    The way electrons interact with photons of light is a key part of many modern technologies, from lasers to solar panels to LEDs. But the interaction is inherently a weak one because of a major mismatch in scale: A wavelength of visible light is about 1,000 times larger than an electron, so the way the two things affect each other is limited by that disparity.
    Now, researchers at MIT and elsewhere have come up with an innovative way to make much stronger interactions between photons and electrons possible, in the process producing a hundredfold increase in the emission of light from a phenomenon called Smith-Purcell radiation. The finding has potential implications for both commercial applications and fundamental scientific research, although it will require more years of research to make it practical.
    The findings are reported today in the journal Nature, in a paper by MIT postdocs Yi Yang (now an assistant professor at the University of Hong Kong) and Charles Roques-Carmes, MIT professors Marin Soljačić and John Joannopoulos, and five others at MIT, Harvard University, and Technion-Israel Institute of Technology.
    In a combination of computer simulations and laboratory experiments, the team found that pairing a beam of electrons with a specially designed photonic crystal — a slab of silicon on an insulator, etched with an array of nanometer-scale holes — should in theory produce emission many orders of magnitude stronger than is ordinarily possible with conventional Smith-Purcell radiation. In their proof-of-concept measurements, they experimentally recorded a hundredfold increase in radiation.
    Unlike other approaches to producing sources of light or other electromagnetic radiation, the free-electron-based method is fully tunable — it can produce emissions of any desired wavelength, simply by adjusting the size of the photonic structure and the speed of the electrons. This may make it especially valuable for making sources of emission at wavelengths that are difficult to produce efficiently, including terahertz waves, ultraviolet light, and X-rays.
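    The tunability can be read off the standard Smith-Purcell relation, which ties the emitted wavelength to the period of the structure, the electron speed, and the observation angle. The sketch below applies that textbook relation with made-up numbers; none of the values are taken from the Nature paper.

```python
# Standard Smith-Purcell relation: lambda = (d / n) * (1 / beta - cos(theta)),
# where d is the period of the structure, n the diffraction order,
# beta = v / c the electron speed, and theta the observation angle.
# All numbers below are illustrative, not values from the Nature paper.
import math

def smith_purcell_wavelength(period_nm, beta, theta_deg, order=1):
    """Emitted wavelength in nm for a given grating period, electron speed, and angle."""
    theta = math.radians(theta_deg)
    return (period_nm / order) * (1.0 / beta - math.cos(theta))

# A hypothetical 140 nm period structure, viewed at 90 degrees:
print(smith_purcell_wavelength(140, beta=0.6, theta_deg=90))  # ~233 nm (ultraviolet)
print(smith_purcell_wavelength(140, beta=0.3, theta_deg=90))  # ~467 nm (visible), slower electrons
```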
    The team has so far demonstrated the hundredfold enhancement in emission using a repurposed electron microscope to function as an electron beam source. But they say that the basic principle involved could potentially enable far greater enhancements using devices specifically adapted for this function.

    The approach is based on a concept called flatbands, which have been widely explored in recent years for condensed matter physics and photonics but have never been applied to affecting the basic interaction of photons and free electrons. The underlying principle involves the transfer of momentum from the electron to a group of photons, or vice versa. Whereas conventional light-electron interactions rely on producing light at a single angle, the photonic crystal is tuned in such a way that it enables the production of a whole range of angles.
    The same process could also be used in the opposite direction, using resonant light waves to propel electrons, increasing their velocity in a way that could potentially be harnessed to build miniaturized particle accelerators on a chip. These might ultimately be able to perform some functions that currently require giant underground tunnels, such as the 27-kilometer-circumference Large Hadron Collider in Switzerland.
    “If you could actually build electron accelerators on a chip,” Soljačić says, “you could make much more compact accelerators for some of the applications of interest, which would still produce very energetic electrons. That obviously would be huge. For many applications, you wouldn’t have to build these huge facilities.”
    The new system could also potentially provide a highly controllable X-ray beam for radiotherapy purposes, Roques-Carmes says.
    And the system could be used to generate multiple entangled photons, a quantum effect that could be useful in the creation of quantum-based computational and communications systems, the researchers say. “You can use electrons to couple many photons together, which is a considerably hard problem if using a purely optical approach,” says Yang. “That is one of the most exciting future directions of our work.”
    Much work remains to translate these new findings into practical devices, Soljačić cautions. It may take some years to develop the necessary interfaces between the optical and electronic components, to work out how to connect them on a single chip, and to develop an on-chip electron source that produces a continuous wavefront, among other challenges.
    “The reason this is exciting,” Roques-Carmes adds, “is because this is quite a different type of source.” Most technologies for generating light are restricted to very specific ranges of color or wavelength, and “it’s usually difficult to move that emission frequency. Here it’s completely tunable. Simply by changing the velocity of the electrons, you can change the emission frequency. … That excites us about the potential of these sources. Because they’re different, they offer new types of opportunities.”
    But, Soljačić concludes, “in order for them to become truly competitive with other types of sources, I think it will require some more years of research. I would say that with some serious effort, in two to five years they might start competing in at least some areas of radiation.”
    The research team also included Steven Kooi at MIT’s Institute for Soldier Nanotechnologies, Haoning Tang and Eric Mazur at Harvard University, Justin Beroz at MIT, and Ido Kaminer at Technion-Israel Institute of Technology. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the U.S. Air Force Office of Scientific Research, and the U.S. Office of Naval Research.

  • The interior design of our cells: Database of 200,000 cell images yields new mathematical framework to understand our cellular building blocks

    Working with hundreds of thousands of high-resolution images, the team at the Allen Institute for Cell Science, a division of the Allen Institute, put numbers on the internal organization of human cells — a biological concept that has to date proven exceptionally difficult to quantify.
    Through that work, the scientists also captured details about the rich variation in cell shape even among genetically identical cells grown under identical conditions. The team described their work in a paper published in the journal Nature today.
    “The way cells are organized tells us something about their behavior and identity,” said Susanne Rafelski, Ph.D., Deputy Director of the Allen Institute for Cell Science, who led the study along with Senior Scientist Matheus Viana, Ph.D. “What’s been missing from the field, as we all try to understand how cells change in health and disease, is a rigorous way to deal with this kind of organization. We haven’t yet tapped into that information.”
    This study provides a roadmap for biologists to understand organization of different kinds of cells in a measurable, quantitative way, Rafelski said. It also reveals some key organizational principles of the cells the Allen Institute team studies, which are known as human induced pluripotent stem cells.
    Understanding how cells organize themselves under healthy conditions — and the full range of variability contained within “normal” — can help scientists better understand what goes wrong in disease. The image dataset, genetically engineered stem cells, and code that went into this study are all publicly available for other scientists in the community to use.
    “Part of what makes cell biology seem intractable is the fact that every cell looks different, even when they are the same type of cell. This study from the Allen Institute shows that this same variability that has long plagued the field is, in fact, an opportunity to study the rules by which a cell is put together,” said Wallace Marshall, Ph.D., Professor of Biochemistry and Biophysics at the University of California, San Francisco, and a member of the Allen Institute for Cell Science’s Scientific Advisory Board. “This approach is generalizable to virtually any cell, and I expect that many others will adopt the same methodology.”
    Computing the pear-ness of our cells

    In a body of work launched more than seven years ago, the Allen Institute team first built a collection of stem cells genetically engineered to light up different internal structures under a fluorescent microscope. With cell lines in hand that label 25 individual structures, the scientists then captured high-resolution, 3D images of more than 200,000 different cells.
    All this to ask one seemingly straightforward question: How do our cells organize their interiors?
    Getting to the answer, it turned out, is really complex. Imagine setting up your office with hundreds of different pieces of furniture, all of which need to be readily accessed, and many of which need to move freely or interact depending on their task. Now imagine your office is a sac of liquid surrounded by a thin membrane, and many of those hundreds of pieces of furniture are even smaller bags of liquid. Talk about an interior design nightmare.
    The scientists wanted to know: How do all those tiny cellular structures arrange themselves compared to each other? Is “structure A” always in the same place, or is it random?
    The team ran into a challenge comparing the same structure between two different cells. Even though the cells under study were genetically identical and reared in the same laboratory environment, their shapes varied substantially. The scientists realized that it would be impossible to compare the position of structure A in two different cells if one cell was short and blobby and the other was long and pear-shaped. So they put numbers on those stubby blobs and elongated pears.

    Using computational analyses, the team developed what they call a “shape space” that objectively describes each stem cell’s external shape. That shape space includes eight different dimensions of shape variation, things like height, volume, elongation, and the aptly described “pear-ness” and “bean-ness.” The scientists could then compare apples to apples (or beans to beans), looking at organization of cellular structures inside all similarly shaped cells.
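    As a rough illustration of what a “shape space” computation can look like, the sketch below reduces a matrix of per-cell shape descriptors to eight dimensions with principal component analysis and then pulls out cells that sit near each other along one shape dimension. It uses stand-in random data and is not the Allen Institute’s actual pipeline, which parameterized cell and nuclear shapes before reducing them to the eight dimensions described above.

```python
# Generic sketch of a "shape space": reduce per-cell shape descriptors to a
# few dimensions, then group cells that land near each other so their internal
# organization can be compared. Stand-in data and sizes are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
shape_descriptors = rng.normal(size=(10_000, 300))  # hypothetical: cells x descriptors

pca = PCA(n_components=8)                   # eight shape dimensions, as in the study
shape_space = pca.fit_transform(shape_descriptors)

# Bin cells by their coordinate along the first shape dimension; cells in the
# same bin are "similarly shaped" and their tagged structures can be compared.
first_dim = shape_space[:, 0]
same_bin_as_cell_0 = np.abs(first_dim - first_dim[0]) < 0.5
print(same_bin_as_cell_0.sum(), "cells fall in the same shape bin as cell 0")
```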
    “We know that in biology, shape and function are interrelated, and understanding cell shape is important to understand how the cells function,” Viana said. “We’ve come up with a framework that allows us to measure a cell’s shape, and the moment you do that you can find cells that are similar shapes, and for those cells you can then look inside and see how everything is arranged.”
    Strict organization
    When they looked at the position of the 25 highlighted structures, comparing those structures in groups of cells with similar shapes, they found that all the cells set up shop in remarkably similar ways. Despite the massive variations in cell shape, their internal organization was strikingly consistent.
    If you’re looking at how thousands of white-collar workers arrange their furniture in a high-rise office building, it’s as if every worker put their desk smack in the middle of their office and their filing cabinet precisely in the far-left corner, no matter the size or shape of the office. Now say you found one office with a filing cabinet thrown on the floor and papers strewn everywhere — that might tell you something about the state of that particular office and its occupant.
    The same goes for cells. Finding deviations from the normal state of affairs could give scientists important information about how cells change when they transition from stationary to mobile, are getting ready to divide, or about what goes wrong at the microscopic level in disease. The researchers looked at two variations in their dataset — cells at the edges of colonies of cells, and cells that were undergoing division to create new daughter cells, a process known as mitosis. In these two states, the scientists were able to find changes in internal organization correlating to the cells’ different environments or activities.
    “This study brings together everything we’ve been doing at the Allen Institute for Cell Science since the institute was launched,” said Ru Gunawardane, Ph.D., Executive Director of the Allen Institute for Cell Science. “We built all of this from scratch, including the metrics to measure and compare different aspects of how cells are organized. What I’m truly excited about is how we and others in the community can now build on this and ask questions about cell biology that we could never ask before.”

  • A step towards solar fuels out of thin air

    A device that can harvest water from the air and provide hydrogen fuel — entirely powered by solar energy — has been a dream for researchers for decades. Now, EPFL chemical engineer Kevin Sivula and his team have made a significant step towards bringing this vision closer to reality. They have developed an ingenious yet simple system that combines semiconductor-based technology with novel electrodes that have two key characteristics: they are porous, to maximize contact with water in the air; and transparent, to maximize sunlight exposure of the semiconductor coating. When the device is simply exposed to sunlight, it takes water from the air and produces hydrogen gas. The results are published on 4 January 2023 in Advanced Materials.
    The novelty lies in their gas diffusion electrodes, which are transparent, porous, and conductive, enabling this solar-powered technology to turn water — in its gas state, from the air — into hydrogen fuel.
    “To realize a sustainable society, we need ways to store renewable energy as chemicals that can be used as fuels and feedstocks in industry. Solar energy is the most abundant form of renewable energy, and we are striving to develop economically-competitive ways to produce solar fuels,” says Sivula of EPFL’s Laboratory for Molecular Engineering of Optoelectronic Nanomaterials and principal investigator of the study.
    Inspiration from a plant’s leaf
    In their search for renewable, fossil-free fuels, the EPFL engineers, in collaboration with Toyota Motor Europe, took inspiration from the way plants are able to convert sunlight into chemical energy using carbon dioxide from the air. A plant essentially harvests carbon dioxide and water from its environment, and with the extra boost of energy from sunlight, transforms these molecules into sugars and starches, a process known as photosynthesis. The sunlight’s energy is stored in the form of chemical bonds inside the sugars and starches.
    The transparent gas diffusion electrodes developed by Sivula and his team, when coated with a light-harvesting semiconductor material, indeed act like an artificial leaf, harvesting water from the air and sunlight to produce hydrogen gas. Here, the sunlight’s energy is stored in the chemical bonds of the hydrogen.

    Instead of building electrodes from traditional layers that are opaque to sunlight, the researchers made their substrate a three-dimensional mesh of felted glass fibers.
    Marina Caretti, lead author of the work, says, “Developing our prototype device was challenging since transparent gas-diffusion electrodes have not been previously demonstrated, and we had to develop new procedures for each step. However, since each step is relatively simple and scalable, I think that our approach will open new horizons for a wide range of applications starting from gas diffusion substrates for solar-driven hydrogen production.”
    From liquid water to humidity in the air
    Sivula and other research groups have previously shown that it is possible to perform artificial photosynthesis by generating hydrogen fuel from liquid water and sunlight using a device called a photoelectrochemical (PEC) cell. A PEC cell is generally known as a device that uses incident light to stimulate a photosensitive material, like a semiconductor, immersed in liquid solution to cause a chemical reaction. But for practical purposes, this process has its disadvantages; for example, it is complicated to make large-area PEC devices that use liquid.
    Sivula wanted to show that the PEC technology can be adapted for harvesting humidity from the air instead, leading to the development of their new gas diffusion electrode. Electrochemical cells (e.g. fuel cells) have already been shown to work with gases instead of liquids, but the gas diffusion electrodes used previously are opaque and incompatible with the solar-powered PEC technology.

    Now, the researchers are focusing their efforts on optimizing the system. What is the ideal fiber size? The ideal pore size? The ideal semiconductors and membrane materials? These questions are being pursued in the EU project “Sun-to-X,” which is dedicated to advancing this technology and developing new ways to convert hydrogen into liquid fuels.
    Making transparent, gas-diffusion electrodes
    In order to make transparent gas diffusion electrodes, the researchers start with a type of glass wool, which is essentially quartz (silicon dioxide) fibers, and process it into felt wafers by fusing the fibers together at high temperature. Next, the wafer is coated with a transparent thin film of fluorine-doped tin oxide, known for its excellent conductivity, robustness, and ease of scale-up. These first steps result in a transparent, porous, and conducting wafer, essential for maximizing contact with the water molecules in the air and letting photons through. The wafer is then coated again, this time with a thin film of sunlight-absorbing semiconductor materials. This second thin coating still lets light through, but appears opaque due to the large surface area of the porous substrate. As is, this coated wafer can already produce hydrogen fuel once exposed to sunlight.
    The scientists went on to build a small chamber containing the coated wafer, as well as a membrane for separating the produced hydrogen gas for measurement. When the chamber is exposed to sunlight under humid conditions, it produces hydrogen gas, demonstrating that a transparent gas-diffusion electrode can indeed drive solar-powered hydrogen production.
    While the scientists did not formally study the solar-to-hydrogen conversion efficiency in their demonstration, they acknowledge that it is modest for this prototype, and currently lower than what liquid-based PEC cells can achieve. Based on the materials used, the maximum theoretical solar-to-hydrogen conversion efficiency of the coated wafer is 12%, whereas liquid-based cells have been demonstrated at up to 19% efficiency.
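    For context, solar-to-hydrogen efficiency is conventionally computed from the operating photocurrent density, the 1.23 V thermodynamic water-splitting potential, and the incident solar power. The short sketch below applies that textbook definition with an illustrative photocurrent, not a value reported in the paper.

```python
# Textbook solar-to-hydrogen (STH) efficiency:
#   eta_STH = j_op * 1.23 V * eta_F / P_in
# with j_op the operating photocurrent density (mA/cm^2), 1.23 V the
# thermodynamic water-splitting potential, eta_F the Faradaic efficiency,
# and P_in the incident solar power (100 mW/cm^2 for standard AM1.5G light).
def sth_efficiency(j_ma_per_cm2, faradaic_eff=1.0, p_in_mw_per_cm2=100.0):
    return j_ma_per_cm2 * 1.23 * faradaic_eff / p_in_mw_per_cm2

# An illustrative photocurrent of ~9.8 mA/cm^2 at unity Faradaic efficiency
# corresponds to roughly the 12% theoretical ceiling quoted for the coated wafer.
print(f"{sth_efficiency(9.8):.1%}")  # ~12.1%
```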

  • Scientists develop a cool new method of refrigeration

    Adding salt to a road before a winter storm changes when ice will form. Researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have applied this basic concept to develop a new method of heating and cooling. The technique, which they have named “ionocaloric cooling,” is described in a paper published Dec. 23 in the journal Science.
    Ionocaloric cooling takes advantage of how energy, or heat, is stored or released when a material changes phase — such as changing from solid ice to liquid water. Melting a material absorbs heat from the surroundings, while solidifying it releases heat. The ionocaloric cycle causes this phase and temperature change through the flow of ions (electrically charged atoms or molecules) which come from a salt.
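    The underlying idea, that dissolved ions shift a melting point, can be illustrated with the textbook freezing-point-depression relation sketched below. This dilute-solution formula is only a classroom analogy; the ionocaloric cycle described in the Science paper operates at much higher ion concentrations, where this simple expression no longer holds.

```python
# Dilute-solution freezing-point depression: delta_T = i * K_f * m,
# with i the van 't Hoff factor (ions per formula unit), K_f the solvent's
# cryoscopic constant, and m the molality of the dissolved salt.
# A classroom analogy only; not the model used in the Science paper.
def melting_point_depression(van_t_hoff_i, kf_kelvin_kg_per_mol, molality_mol_per_kg):
    return van_t_hoff_i * kf_kelvin_kg_per_mol * molality_mol_per_kg

# Example: table salt (i ~ 2) in water (K_f = 1.86 K*kg/mol) at 1 mol/kg:
print(melting_point_depression(2, 1.86, 1.0), "K of melting-point depression")  # 3.72 K
```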
    Researchers hope that the method could one day provide efficient heating and cooling, which accounts for more than half of the energy used in homes, and help phase out current “vapor compression” systems, which use gases with high global warming potential as refrigerants. Ionocaloric refrigeration would eliminate the risk of such gases escaping into the atmosphere by replacing them with solid and liquid components.
    “The landscape of refrigerants is an unsolved problem: No one has successfully developed an alternative solution that makes stuff cold, works efficiently, is safe, and doesn’t hurt the environment,” said Drew Lilley, a graduate research assistant at Berkeley Lab and PhD candidate at UC Berkeley who led the study. “We think the ionocaloric cycle has the potential to meet all those goals if realized appropriately.”
    Finding a solution that replaces current refrigerants is essential for countries to meet climate change goals, such as those in the Kigali Amendment (accepted by 145 parties, including the United States in October 2022). The agreement commits signatories to reduce production and consumption of hydrofluorocarbons (HFCs) by at least 80% over the next 25 years. HFCs are powerful greenhouse gases commonly found in refrigerators and air conditioning systems, and can trap heat thousands of times as effectively as carbon dioxide.
    The new ionocaloric cycle joins several other kinds of “caloric” cooling in development. Those techniques use different methods — including magnetism, pressure, stretching, and electric fields — to manipulate solid materials so that they absorb or release heat. Ionocaloric cooling differs by using ions to drive solid-to-liquid phase changes. Using a liquid has the added benefit of making the material pumpable, making it easier to get heat in or out of the system — something solid-state cooling has struggled with.

    Lilley and corresponding author Ravi Prasher, a research affiliate in Berkeley Lab’s Energy Technologies Area and adjunct professor in mechanical engineering at UC Berkeley, laid out the theory underlying the ionocaloric cycle. They calculated that it has the potential to compete with or even exceed the efficiency of gaseous refrigerants found in the majority of systems today.
    They also demonstrated the technique experimentally. Lilley used a salt made with iodine and sodium, alongside ethylene carbonate, a common organic solvent used in lithium-ion batteries.
    “There’s potential to have refrigerants that are not just GWP [global warming potential]-zero, but GWP-negative,” Lilley said. “Using a material like ethylene carbonate could actually be carbon-negative, because you produce it by using carbon dioxide as an input. This could give us a place to use CO2 from carbon capture.”
    Running current through the system moves the ions, changing the material’s melting point. When it melts, the material absorbs heat from the surroundings, and when the ions are removed and the material solidifies, it gives heat back. The first experiment showed a temperature change of 25 degrees Celsius using less than one volt, a greater temperature lift than demonstrated by other caloric technologies.
    “There are three things we’re trying to balance: the GWP of the refrigerant, energy efficiency, and the cost of the equipment itself,” Prasher said. “From the first try, our data looks very promising on all three of these aspects.”
    While caloric methods are often discussed in terms of their cooling power, the cycles can also be harnessed for applications such as water heating or industrial heating. The ionocaloric team is continuing work on prototypes to determine how the technique might scale to support large amounts of cooling, to increase the temperature change the system can support, and to improve efficiency.
    “We have this brand-new thermodynamic cycle and framework that brings together elements from different fields, and we’ve shown that it can work,” Prasher said. “Now, it’s time for experimentation to test different combinations of materials and techniques to meet the engineering challenges.”
    Lilley and Prasher have received a provisional patent for the ionocaloric refrigeration cycle, and the technology is now available for licensing.
    This work was supported by the DOE’s Energy Efficiency and Renewable Energy Building Technologies Program.

  • Researchers demonstrate new sensors by creating novel health monitoring, machine interface devices

    Researchers at North Carolina State University have developed a stretchable strain sensor that has an unprecedented combination of sensitivity and range, allowing it to detect even minor changes in strain with greater range of motion than previous technologies. The researchers demonstrated the sensor’s utility by creating new health monitoring and human-machine interface devices.
    Strain is a measurement of how much a material deforms from its original length. For example, if you stretched a rubber band to twice its original length, its strain would be 100%.
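    In code, that definition of engineering strain and the rubber-band example look like this (a trivial sketch, included only to make the percentage concrete):

```python
# Engineering strain: the change in length relative to the original length.
def strain(original_length, stretched_length):
    return (stretched_length - original_length) / original_length

# A rubber band stretched to twice its original length:
print(f"{strain(10.0, 20.0):.0%}")  # 100%
```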
    “And measuring strain is useful in many applications, such as devices that measure blood pressure and technologies that track physical movement,” says Yong Zhu, corresponding author of a paper on the work and the Andrew A. Adams Distinguished Professor of Mechanical and Aerospace Engineering at NC State.
    “But to date there’s been a trade-off. Strain sensors that are sensitive — capable of detecting small deformations — cannot be stretched very far. On the other hand, sensors that can be stretched to greater lengths are typically not very sensitive. The new sensor we’ve developed is both sensitive and capable of withstanding significant deformation,” says Zhu. “An additional feature is that the sensor is highly robust even when over-strained, meaning it is unlikely to break when the applied strain accidentally exceeds the sensing range.”
    The new sensor consists of a silver nanowire network embedded in an elastic polymer. The polymer features a pattern of parallel cuts of a uniform depth, alternating from either side of the material: one cut from the left, followed by one from the right, followed by one from the left, and so on.
    “This feature — the patterned cuts — is what enables a greater range of deformation without sacrificing sensitivity,” says Shuang Wu, who is first author of the paper and a recent Ph.D. graduate at NC State.
    The sensor measures strain by measuring changes in electrical resistance: as the material stretches, resistance increases. The cuts in the surface of the sensor are perpendicular to the direction in which it is stretched. This does two things. First, the cuts allow the sensor to deform significantly; because they pull open into a zigzag pattern, the material can withstand substantial deformation without reaching its breaking point. Second, when the cuts pull open, the electrical signal is forced to take a longer path, traveling up and down the zigzag.
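    Converting such a resistance reading back into strain typically uses the generic strain-gauge relation sketched below; the gauge factor in the example is a placeholder, since this summary does not quote the sensitivity figures from the NC State paper.

```python
# Generic strain-gauge readout: relative resistance change is proportional to
# strain through a gauge factor, GF:  delta_R / R0 = GF * strain.
# The gauge factor here is a placeholder, not a figure from the NC State paper.
def strain_from_resistance(r0_ohms, r_ohms, gauge_factor):
    return (r_ohms - r0_ohms) / (r0_ohms * gauge_factor)

# Example: resistance rising from 100 ohm to 150 ohm with a hypothetical GF of 5:
print(f"{strain_from_resistance(100.0, 150.0, 5.0):.0%} strain")  # 10% strain
```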
    “To demonstrate the sensitivity of the new sensors, we used them to create new wearable blood pressure devices,” Zhu says. “And to demonstrate how far the sensors can be deformed, we created a wearable device for monitoring motion in a person’s back, which has utility for physical therapy.”
    “We have also demonstrated a human-machine interface,” Wu says. “Specifically, we used the sensor to create a three-dimensional touch controller that can be used to control a video game.”
    “The sensor can be easily incorporated into existing wearable materials such as fabrics and athletic tapes, convenient for practical applications,” Zhu says. “And all of this is just scratching the surface. We think there will be a range of additional applications as we continue working with this technology.”
    The work was done with support from the National Science Foundation, under grant number 2122841; the National Institutes of Health, under grant number R01HD108473; and the U.S. Department of Defense, under grant number W81XWH-21-1-0185.
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman.

  • Self-powered, printable smart sensors created from emerging semiconductors could mean cheaper, greener Internet of Things

    Creating smart sensors to embed in our everyday objects and environments for the Internet of Things (IoT) would vastly improve daily life — but requires trillions of such small devices. Simon Fraser University professor Vincenzo Pecunia believes that emerging alternative semiconductors that are printable, low-cost and eco-friendly could lead the way to a cheaper and more sustainable IoT.
    Leading a multinational team of top experts in various areas of printable electronics, Pecunia has identified key priorities and promising avenues for printable electronics to enable self-powered, eco-friendly smart sensors. His forward-looking insights are outlined in his paper published on Dec. 28 in Nature Electronics.
    “Equipping everyday objects and environments with intelligence via smart sensors would allow us to make more informed decisions as we go about our daily lives,” says Pecunia. “Conventional semiconductor technologies require complex, energy-intensive, and expensive processing, but printable semiconductors can deliver electronics with a much lower carbon footprint and cost, since they can be processed by printing or coating, which require much lower energy and materials consumption.”
    Pecunia says making printable electronics that can work using energy harvested from the environment — from ambient light or ubiquitous radiofrequency signals, for example — could be the answer.
    “Our analysis reveals that a key priority is to realize printable electronics with as small a material set as possible to streamline their fabrication process, thus ensuring the straightforward scale-up and low cost of the technology,” says Pecunia. The article outlines a vision of printed electronics that could also be powered by ubiquitous mobile signals through innovative low-power approaches — essentially allowing smart sensors to charge out of thin air.
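    A back-of-envelope power budget shows why ambient harvesting can plausibly sustain a duty-cycled sensor. Every number in the sketch below is an assumption chosen for illustration, not a figure from the Nature Electronics article.

```python
# Back-of-envelope power budget for a self-powered, duty-cycled smart sensor.
# Every figure is an assumption chosen for illustration only.
indoor_light_uw_per_cm2 = 100.0   # assumed indoor light power density
cell_area_cm2 = 4.0               # assumed printed photovoltaic cell area
harvesting_efficiency = 0.10      # assumed indoor conversion efficiency

harvested_uw = indoor_light_uw_per_cm2 * cell_area_cm2 * harvesting_efficiency  # 40 uW

active_power_uw = 1_000.0   # assumed power while sensing and transmitting
sleep_power_uw = 1.0        # assumed standby power
duty_cycle = 0.01           # assumed: active 1% of the time

average_load_uw = duty_cycle * active_power_uw + (1 - duty_cycle) * sleep_power_uw
print(f"harvested {harvested_uw:.0f} uW vs. average load {average_load_uw:.1f} uW")
```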
    “Based on recent breakthroughs, we anticipate that printable semiconductors could play a key role in realizing the full sustainability potential of the Internet of Things by delivering self-powered sensors for smart homes, smart buildings and smart cities, as well as for manufacturing and industry.”
    Pecunia has already achieved numerous breakthroughs towards self-powered printable smart sensors, demonstrating printed electronics with record-low power dissipation and the first-ever printable devices powered by ambient light via tiny printable solar cells.
    His research group at SFU’s School of Sustainable Energy Engineering focuses on the development of innovative approaches to eco-friendly, printable solar cells and electronics for use in next-generation smart devices.
    Pecunia notes that the semiconductor technologies being developed by his group could potentially allow the seamless integration of electronics, sensors, and energy harvesters at the touch of a ‘print’ button at single production sites — thereby reducing the carbon footprint, supply chain issues and energetic costs associated with long-distance transport in conventional electronics manufacturing.
    “Due to their unique manufacturability, printable semiconductors also represent a unique opportunity for Canada,” he says. “Not only to become a global player in next-generation, eco-friendly electronics, but also to overcome its reliance on electronics from faraway countries and the associated supply chain and geo-political issues.
    “Our hope is that these semiconductors will deliver eco-friendly technologies for a future of clean energy generation and sustainable living, which are key to achieving Canada’s net-zero goal.”
    Story Source:
    Materials provided by Simon Fraser University. Original written by Marianne Meadahl.

  • Researchers discover new process to create freestanding membranes of 'smart' materials

    A University of Minnesota Twin Cities-led team of scientists and engineers has developed a new method for making thin films of perovskite oxide semiconductors, a class of “smart” materials with unique properties that can change in response to stimuli like light, magnetic fields, or electric fields.
    The discovery will allow researchers to harness these properties and even combine them with other emerging nano-scale materials to make better devices such as sensors, smart textiles, and flexible electronics.
    The paper is published in Science Advances.
    Producing materials in thin-film form makes them easier to integrate into smaller components for electronic devices. Many thin films are made using a technique called epitaxy, which consists of placing atoms of a material on a substrate, or a template of sorts, to create a thin sheet of material, one atomic layer at a time. However, most thin films created via epitaxy are “stuck” on their host substrate, limiting their uses. If the thin film is detached from the substrate to become a freestanding membrane, it becomes much more functional.
    The University of Minnesota-led team has found a new way to successfully create a membrane of a particular metal oxide — strontium titanate — and their method circumvents several issues that have plagued the synthesis of freestanding metal oxide films in the past.
    “We have created a process where we can make a freestanding membrane of virtually any oxide material, exfoliate it, and then transfer it onto any substrate of interest we want,” said Bharat Jalan, a senior author on the paper and a professor and Shell Chair in the University of Minnesota Department of Chemical Engineering and Materials Science. “Now, we can benefit from the functionality of these materials by combining them with other nano-scale materials, which would enable a wide range of highly functional, highly efficient devices.”
    Making freestanding membranes of “smart” oxide materials is challenging because the atoms are bonded in all three dimensions, unlike in a two-dimensional material, such as graphene. One method of making membranes in oxide materials is using a technique called remote epitaxy, which uses a layer of graphene as an intermediary between the substrate and the thin-film material.

    This approach allows the thin-film oxide material to form a thin film and be peeled off, like a piece of tape, from the substrate, creating a freestanding membrane. However, the biggest barrier to using this method with metal oxides is that the oxygen in the material oxidizes the graphene on contact, ruining the sample.
    Using hybrid molecular beam epitaxy, a technique pioneered by Jalan’s lab at the University of Minnesota, the researchers were able to get around this issue by using titanium that was already bonded to oxygen. Plus, their method allows for automatic stoichiometric control, meaning they can automatically control the composition.
    “We showed for the first time, and conclusively by doing several experiments, that we have a new method which allows us to make complex oxide while ensuring that graphene is not oxidized. That’s a major milestone in synthesis science,” Jalan said. “And, we now have a way to make these complex oxide membranes with an automatic stoichiometric control. No one has been able to do that.”
    The materials scientists on Jalan’s team worked closely with engineering researchers in University of Minnesota Department of Electrical and Computer Engineering Professor Steven Koester’s lab, which focuses on making 2D materials.
    “These complex oxides are a broad class of materials that have a lot of really important innate functions to them,” said Koester, also a senior author of the study and the director of the Minnesota Nano Center at the University of Minnesota Twin Cities. “Now, we can think about using them to make extremely small transistors for electronic devices, and in a wide array of other applications including flexible sensors, smart textiles, and non-volatile memories.”
    The research was funded by the U.S. Department of Energy, the Air Force Office of Scientific Research, and the National Science Foundation.
    In addition to Jalan and Koester, the research team included University of Minnesota Department of Chemical Engineering and Materials Science researchers Hyojin Yoon, Tristan Truttmann, Fengdeng Liu, and Sooho Choo; University of Minnesota Department of Electrical and Computer Engineering researcher Qun Su; Pacific Northwest National Laboratory researchers Bethany Matthews, Mark Bowden, Steven Spurgeon, and Scott Chambers; and University of Wisconsin-Madison researchers Vivek Saraswat, Sebastian Manzo, Michael Arnold, and Jason Kawasaki.

  • Human brain organoids implanted into mouse cortex respond to visual stimuli for first time

    A team of engineers and neuroscientists has demonstrated for the first time that human brain organoids implanted in mice have established functional connectivity to the animals’ cortex and responded to external sensory stimuli. The implanted organoids reacted to visual stimuli in the same way as surrounding tissues, an observation that researchers were able to make in real time over several months thanks to an innovative experimental setup that combines transparent graphene microelectrode arrays and two-photon imaging.
    The team, led by Duygu Kuzum, a faculty member in the University of California San Diego Department of Electrical and Computer Engineering, details their findings in the Dec. 26 issue of the journal Nature Communications. Kuzum’s team collaborated with researchers from Anna Devor’s lab at Boston University; Alysson R. Muotri’s lab at UC San Diego; and Fred H. Gage’s lab at the Salk Institute.
    Human cortical organoids are derived from human induced pluripotent stem cells, which are themselves usually derived from skin cells. These brain organoids have recently emerged as promising models to study the development of the human brain, as well as a range of neurological conditions.
    But until now, no research team had been able to demonstrate that human brain organoids implanted in the mouse cortex were able to share the same functional properties and react to stimuli in the same way. This is because the technologies used to record brain function are limited, and are generally unable to record activity that lasts just a few milliseconds.
    The UC San Diego-led team was able to solve this problem by developing experiments that combine microelectrode arrays made from transparent graphene, and two-photon imaging, a microscopy technique that can image living tissue up to one millimeter in thickness.
    “No other study has been able to record optically and electrically at the same time,” said Madison Wilson, the paper’s first author and a Ph.D. student in Kuzum’s research group at UC San Diego. “Our experiments reveal that visual stimuli evoke electrophysiological responses in the organoids, matching the responses from the surrounding cortex.”
    The researchers hope that this combination of innovative neural recording technologies to study organoids will serve as a unique platform to comprehensively evaluate organoids as models for brain development and disease, and investigate their use as neural prosthetics to restore function to lost, degenerated or damaged brain regions.