More stories

  • A soft, stimulating scaffold supports brain cell development ex vivo

    Brain-computer interfaces (BCIs) are a hot topic these days, with companies like Neuralink racing to create devices that connect humans’ brains to machines via tiny implanted electrodes. The potential benefits of BCIs range from improved monitoring of brain activity in patients with neurological problems, to restoring vision in blind people, to allowing humans to control machines using only their minds. But a major hurdle for the development of these devices is the electrodes themselves — they must conduct electricity, so nearly all of them are made of metal. Metals are not the most brain-friendly materials, as they are hard, rigid, and don’t replicate the physical environment in which brain cells typically grow.
    That problem now has a solution in a new type of electrically conductive hydrogel scaffold developed at the Wyss Institute at Harvard University, Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS), and MIT. Not only does the scaffold mimic the soft, porous conditions of brain tissue, it also supported the growth and differentiation of human neural progenitor cells (NPCs) into multiple brain cell types for up to 12 weeks. The achievement is reported in Advanced Healthcare Materials.
    “This conductive, hydrogel-based scaffold has great potential. Not only can it be used to study the formation of human neural networks in vitro, it could also enable the creation of implantable biohybrid BCIs that more seamlessly integrate with a patient’s brain tissue, improving their performance and decreasing risk of injury,” said first author Christina Tringides, Ph.D., a former graduate student at the Wyss and SEAS who is now a Postdoctoral Fellow at ETH Zürich.
    Out of one, many
    Tringides and her team created their first hydrogel-based electrode in 2021, driven by the desire to make softer electrodes that could “flow” to hug the brain’s natural curves, nooks, and crannies. While the team demonstrated that their electrode was highly compatible with brain tissue, they knew that the most compatible substance for living cells is other cells. They decided to try to integrate living brain cells into the electrode itself, which could potentially allow an implanted electrode to transmit electrical impulses to a patient’s brain via more natural cell-cell contact.
    To make their conductive hydrogel a more comfortable place for cells to live, they added a freeze-drying step to the manufacturing process. The ice crystals that formed during freeze-drying forced the hydrogel material to concentrate in the spaces around the crystals. When the ice crystals sublimated away, they left behind pores surrounded by the conductive hydrogel, forming a porous scaffold. This structure ensured that cells would have ample surface area on which to grow, and that the electrically conductive components would form a continuous pathway through the hydrogel, delivering impulses to all the cells.

    The researchers varied the recipes of their hydrogels to create scaffolds that were either viscoelastic (like Jell-O) or elastic (like a rubber band) and soft or stiff. They then cultured NPCs on these scaffolds to see which combination of physical properties best supported neural cell growth and development.
    Cells grown on gels that were viscoelastic and softer formed networks of lattice-like structures on the scaffold and differentiated into multiple other cell types after five weeks. Cells cultured on elastic gels, in contrast, formed clumps that were largely composed of undifferentiated NPCs. The team also varied the amount of conductive material within the hydrogel to see how that affected neural growth and development. The more conductive a scaffold was, the more the cells formed branching networks (as they do in vivo) rather than clumps.
    The researchers then analyzed the different cell types that had developed within their hydrogel scaffolds. They found that astrocytes, which support neurons both physically and metabolically, formed their characteristic long projections when grown on viscoelastic gels rather than on elastic gels, and there were significantly more of them present when the viscoelastic gels contained more conductive material. Oligodendrocytes, which create the myelin sheath that insulates the axons of neurons, were also present in the scaffolds. There was more total myelin and longer myelinated segments on viscoelastic gels than on elastic gels, and the thickness of the myelin increased when there was more conductive material present in the gels.
    The pièce de (electrical) résistance
    Finally, the team applied electrical stimulation to the living human cells via the conductive materials within their hydrogel scaffold to see how that impacted cell growth. The cells were pulsed with electricity for 15 minutes at a time, either daily or every other day. After eight days, the scaffolds that had been pulsed daily had very few living cells, while those that had been pulsed every other day were full of living cells throughout the scaffold.

    Following this stimulation period, the cells were left in the scaffolds for a total of 51 days. The few cells left in the scaffolds that had been stimulated daily did not differentiate into other cell types, while the every-other-day scaffolds had highly differentiated neurons and astrocytes with long protrusions. The variation in electrical impulses tested did not seem to have an effect on the amount of myelin present in the gels.
    “The successful differentiation of human NPCs into multiple types of brain cells within our scaffolds is confirmation that the conductive hydrogel provides them the right kind of environment in which to grow in vitro,” said senior author Dave Mooney, Ph.D., a Core Faculty member at the Wyss Institute. “It was especially exciting to see myelination on the neurons’ axons, as that has been an ongoing challenge to replicate in living models of the brain.” Mooney is also the Robert P. Pinkas Family Professor of Bioengineering at SEAS.
    Tringides is continuing work on the conductive hydrogel scaffolds, with plans to further investigate how various types of electrical stimulation could affect different cell types, and to develop a more comprehensive in vitro model. She hopes that this technology will one day enable the creation of devices that help restore function in human patients who are suffering from neurological and physiological problems.
    “This work represents a major advance by creating an in vitro microenvironment with the right physical, chemical, and electrical properties to support the growth and specialization of human brain cells. This model may be used to speed up the process of finding effective treatments for neurological diseases, in addition to opening up an entirely new approach to create more effective electrodes and brain-machine interfaces that seamlessly integrate with neuronal tissues. We’re excited to see where this innovative melding of materials science, biomechanics, and tissue engineering leads in the future,” said the Wyss Institute’s Founding Director Don Ingber, M.D., Ph.D. Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and the Hansjörg Wyss Professor of Bioinspired Engineering at SEAS.
    Additional authors include Marjolaine Boulingre from SEAS, Andrew Khalil from the Wyss Institute and the Whitehead Institute at MIT, Tenzin Lungjangwa from the Whitehead Institute, and Rudolf Jaenisch from the Whitehead Institute and MIT.
    This work was supported by the National Science Foundation under award nos. 1541959 and DMR-1420570 and a Graduate Research Fellowship Program grant; NIH grants R01DE013033 and 5R01DE013349; NSF-MRSEC award DMR-2011754; the Wyss Institute for Biologically Inspired Engineering at Harvard University; the EPFL WISH Foundation; and the Wellcome Leap HOPE project.

  • Why technology alone can't solve the digital divide

    For some communities, the digital divide remains even after they have access to computers and fast internet, new research shows.
    A study of the Bhutanese refugee community in Columbus found that even though more than 95% of the population had access to the internet, very few were using it to connect with local resources and online news.
    And the study, which was done during the height of the COVID-19 pandemic stay-at-home orders in Ohio, found that nearly three-quarters of respondents never used the internet for telehealth services.
    The results showed that the digital divide must be seen as more than just a technological problem, said Jeffrey Cohen, lead author of the study and professor of anthropology at The Ohio State University.
    “We can’t just give people access to the internet and say the problem is solved,” Cohen said.
    “We found that there are social, cultural and environmental reasons that may prevent some communities from getting all the value they could out of internet access.”
    The study was published recently in the International Journal of Environmental Research and Public Health.

    For the study, researchers worked closely with members of the Bhutanese Community of Central Ohio, a nonprofit organization helping resettled Bhutanese refugees in the Columbus area.
    The study included a community survey of 493 respondents, some of whom were surveyed online and many more of whom were interviewed in person.
    While many of the respondents lived in poverty — more than half had annual incomes below $35,000 — 95.4% said they had access to the internet.
    More than 9 out of 10 of those surveyed said access to digital technology was important, very important or extremely important to them.
    But most had a very limited view of how they could use the internet.

    “For just about everyone we interviewed, the internet was how you connected to your family, through apps like Facebook or WhatsApp,” Cohen said. “For many, that was nearly the only thing they used the internet for.”
    Findings revealed 82% connected to friends and family, and 68% used social media. All other uses were under 31%.
    Not surprisingly, older people, the less educated and those with poor English skills were less likely than others to use the internet.
    A common issue was that many refugees — especially the older and less educated — were just not comfortable online, the study found.
    “Of course, that is not just an issue with the Bhutanese. Many people in our country see the internet as just a place where their children or grandchildren play games, or attend classes,” he said.
    “They don’t see it as a place where they can access their health care or find resources to help them in their daily lives.”
    Language was another issue. While there was a local program to translate some important resources from English to Nepali, the most common language spoken by Bhutanese refugees, many respondents remarked that the translations were “mostly gibberish” and nearly impossible to understand, Cohen said.
    Even for those who spoke English, fewer than 25% described themselves as excellent speakers.
    “People had access to the internet, and this information was available to them, but they couldn’t use it. That is not a technological issue, but it is part of the digital divide,” he said.
    Because the study was done during the COVID-19 pandemic, one of the main areas of focus in the study was access to health care and information on COVID-19.
    Even though telehealth services were one of the main ways to access health care during the pandemic, about 73% said they never used the internet for that purpose.
    And COVID-19 was not the only health issue facing many of those surveyed.
    “The Bhutanese community is at high risk for cardiometabolic diseases, such as cardiovascular disease and diabetes, and about 72% of those surveyed had one or more indications of these conditions,” Cohen said.
    “If they aren’t taking advantage of telehealth to consult with doctors, this could be putting them at greater risk.”
    Cohen said one key lesson from the study is that researchers must engage and partner with communities to ensure that proposed solutions to problems, including the digital divide, respond to local needs.
    The study was funded in part by the National Institutes of Health and the Ohio State Social Justice Program.
    Co-authors were Arati Maleku and Shambika Raut of the College of Social Work at Ohio State; Sudarshan Pyakurel of the Bhutanese Community of Central Ohio; Taku Suzuki of Denison University; and Francisco Alejandro Montiel Ishino of the National Institute of Environmental Health Sciences at NIH.

  • New approach to epidemic modeling could speed up pandemic simulations

    Simulations that help determine how a large-scale pandemic will spread can take weeks or even months to run. A recent study in PLOS Computational Biology offers a new approach to epidemic modeling that could drastically speed up the process.
    The study uses sparsification, a method from graph theory and computer science, to identify which links in a network are the most important for the spread of disease.
    By focusing on critical links, the authors found they could reduce the computation time for simulating the spread of diseases through highly complex social networks by 90% or more.
    “Epidemic simulations require substantial computational resources and time to run, which means your results might be outdated by the time you are ready to publish,” says lead author Alexander Mercier, a former Undergraduate Research Fellow at the Santa Fe Institute and now a Ph.D. student at the Harvard T.H. Chan School of Public Health. “Our research could ultimately enable us to use more complex models and larger data sets while still acting on a reasonable timescale when simulating the spread of pandemics such as COVID-19.”
    For the study, Mercier, with SFI researchers Samuel Scarpino and Cristopher Moore, used data from the U.S. Census Bureau to develop a mobility network describing how people across the country commute.
    Then, they applied several different sparsification methods to see if they could reduce the network’s density while retaining the overall dynamics of a disease spreading across the network.
    The most successful sparsification technique they found was effective resistance. This technique comes from computer science and is based on the total resistance between two endpoints in an electrical circuit. In the new study, effective resistance works by prioritizing the edges, or links, between nodes in the mobility network that are the most likely avenues of disease transmission while ignoring links that can be easily bypassed by alternate paths.
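    The general recipe can be sketched in a few lines of code. The toy example below (in Python, using networkx and numpy, which are not necessarily the authors' tools) scores each edge by its weight times its effective resistance, computed from the pseudoinverse of the graph Laplacian, and keeps only the highest-scoring fraction of edges. Published effective-resistance sparsifiers typically sample edges with probability proportional to these scores rather than taking a hard cutoff, and the random network here is invented, so treat this purely as an illustration of the idea.

```python
import networkx as nx
import numpy as np

def effective_resistance_scores(G, weight="weight"):
    """Score each edge by w_e * R_e, where R_e is its effective resistance."""
    nodes = list(G.nodes())
    index = {v: i for i, v in enumerate(nodes)}
    # Effective resistance from the Moore-Penrose pseudoinverse of the Laplacian:
    # R_uv = L+_uu + L+_vv - 2 * L+_uv
    L = nx.laplacian_matrix(G, nodelist=nodes, weight=weight).toarray().astype(float)
    L_pinv = np.linalg.pinv(L)
    scores = {}
    for u, v, data in G.edges(data=True):
        i, j = index[u], index[v]
        R = L_pinv[i, i] + L_pinv[j, j] - 2.0 * L_pinv[i, j]
        scores[(u, v)] = data.get(weight, 1.0) * R
    return scores

def sparsify(G, keep_fraction=0.07, weight="weight"):
    """Keep only the highest-scoring fraction of edges (a simplification of sampling)."""
    scores = effective_resistance_scores(G, weight)
    ranked = sorted(scores, key=scores.get, reverse=True)
    kept = ranked[: max(1, int(keep_fraction * G.number_of_edges()))]
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from((u, v, G[u][v]) for u, v in kept)
    return H

# Invented toy "mobility network" with log-normal commuting weights.
G = nx.gnm_random_graph(200, 2000, seed=1)
rng = np.random.default_rng(1)
for u, v in G.edges():
    G[u][v]["weight"] = rng.lognormal(mean=0.0, sigma=1.0)

H = sparsify(G, keep_fraction=0.07)
print(G.number_of_edges(), "edges reduced to", H.number_of_edges())
```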
    “It’s common in the life sciences to naively ignore low-weight links in a network, assuming that they have a small probability of spreading a disease,” says Scarpino. “But as in the catchphrase ‘the strength of weak ties,’ even a low-weight link can be structurally important in an epidemic — for instance, if it connects two distant regions or distinct communities.”
    Using their effective resistance sparsification approach, the researchers created a network containing 25 million fewer edges — or about 7% of the original U.S. commuting network — while preserving overall epidemic dynamics.
    “Computer scientists Daniel Spielman and Nikhil Srivastava had shown that sparsification can simplify linear problems, but discovering that it works even for nonlinear, stochastic problems like an epidemic was a real surprise,” says Moore.
    While still in an early stage of development, the research not only helps reduce the computational cost of simulating large-scale pandemics but also preserves important details about disease spread, such as the probability of a specific census tract getting infected and when the epidemic is likely to arrive there.
    Story Source:
    Materials provided by Santa Fe Institute.

  • Next-generation wireless technology may leverage the human body for energy

    While you may be just starting to reap the advantages of 5G wireless technology, researchers throughout the world are already working hard on the future: 6G. One of the most promising breakthroughs in 6G telecommunications is the possibility of Visible Light Communication (VLC), which is like a wireless version of fiberoptics, using flashes of light to transmit information. Now, a team of researchers at the University of Massachusetts Amherst has announced that they have invented a low-cost, innovative way to harvest the waste energy from VLC by using the human body as an antenna. This waste energy can be recycled to power an array of wearable devices, or even, perhaps, larger electronics.
    “VLC is quite simple and interesting,” says Jie Xiong, professor of information and computer sciences at UMass Amherst and the paper’s senior author. “Instead of using radio signals to send information wirelessly, it uses the light from LEDs that can turn on and off, up to one million times per second.” Part of the appeal of VLC is that the infrastructure is already everywhere — our homes, vehicles, streetlights and offices are all lit by LED bulbs, which could also be transmitting data. “Anything with a camera, like our smartphones, tablets or laptops, could be the receiver,” says Xiong.
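    As a toy illustration of that idea, the sketch below (Python, with invented parameters) encodes bits as LED on/off intensities and recovers them by averaging and thresholding at the receiver. Real VLC systems use more sophisticated modulation, so this is only a conceptual sketch of on-off keying, not any deployed VLC or 6G scheme.

```python
import numpy as np

def transmit(bits, samples_per_symbol=4):
    """Map each bit to an LED intensity waveform: 1 = LED on, 0 = LED off."""
    return np.repeat(np.array(bits, dtype=float), samples_per_symbol)

def receive(waveform, samples_per_symbol=4, threshold=0.5):
    """Average each symbol's samples (as a camera or photodiode would) and threshold."""
    symbols = waveform.reshape(-1, samples_per_symbol).mean(axis=1)
    return (symbols > threshold).astype(int).tolist()

bits = [1, 0, 1, 1, 0, 0, 1, 0]
light = transmit(bits)                                   # the LED flicker pattern
noisy = light + np.random.normal(0.0, 0.1, light.shape)  # ambient light noise
assert receive(noisy) == bits                            # bits recovered despite the noise
```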
    Previously, Xiong and first author Minhao Cui, a graduate student in information and computer sciences at UMass Amherst, showed that there’s significant “leakage” of energy in VLC systems, because the LEDs also emit “side-channel RF signals,” or radio waves. If this leaked RF energy could be harvested, then it could be put to use.
    The team’s first task was to design an antenna out of coiled copper wire to collect the leaked RF, which they did. But how to maximize the collection of energy?
    The team experimented with all sorts of design details, from the thickness of the wire to the number of times it was coiled, but they also noticed that the efficiency of the antenna varied according to what the antenna touched. They tried resting the coil on plastic, cardboard, wood and steel, as well as touching it to walls of different thicknesses, phones powered on and off and laptops. And then Cui got the idea to see what happened when the coil was in contact with a human body.
    Immediately, it became apparent that a human body is the best medium for amplifying the coil’s ability to collect leaked RF energy, up to ten times more than the bare coil alone.
    After much experimentation, the team came up with “Bracelet+,” a simple coil of copper wire worn as a bracelet on the upper forearm. While the design can be adapted for wearing as a ring, belt, anklet or necklace, the bracelet seemed to offer the right balance of power harvesting and wearability.
    “The design is cheap — less than fifty cents,” note the authors, whose paper won the Best Paper Award from the Association for Computing Machinery’s Conference on Embedded Networked Sensor Systems. “But Bracelet+ can reach up to micro-watts, enough to support many sensors such as on-body health monitoring sensors that require little power to work owing to their low sampling frequency and long sleep-mode duration.”
    “Ultimately,” says Xiong, “we want to be able to harvest waste energy from all sorts of sources in order to power future technology.”
    Story Source:
    Materials provided by University of Massachusetts Amherst.

  • Using machine learning to forecast amine emissions

    Global warming is partly due to the vast amount of carbon dioxide that we release, mostly from power generation and industrial processes, such as making steel and cement. For a while now, chemical engineers have been exploring carbon capture, a process that can separate carbon dioxide and store it in ways that keep it out of the atmosphere.
    This is done in dedicated carbon-capture plants, whose chemical process involves amines, compounds that are already used to capture carbon dioxide from natural gas processing and refining plants. Amines are also used in certain pharmaceuticals, epoxy resins, and dyes.
    The problem is that amines can be harmful to the environment as well as a health hazard, making it essential to mitigate their impact. This requires accurately monitoring and predicting a plant’s amine emissions, which has proven to be no easy feat, since carbon-capture plants are complex and differ from one another.
    A group of scientists has come up with a machine learning solution for forecasting amine emissions from carbon-capture plants using experimental data from a stress test at an actual plant in Germany. The work was led by the groups of Professor Berend Smit at EPFL’s School of Basic Sciences and Professor Susana Garcia at The Research Centre for Carbon Solutions of Heriot-Watt University in Scotland.
    “The experiments were done in Niederaußem, at one of the largest coal-fired power plants in Germany,” says Berend Smit. “And from this power plant, a slipstream is sent into a carbon capture pilot plant, where the next generation of amine solution has been tested for over a year. But one of the outstanding issues is that amines can be emitted with flue gas, and these amine emissions need to be controlled.”
    Professor Susana Garcia, together with the plant’s owner, RWE, and TNO in the Netherlands, developed a stress test to study amine emissions under different process conditions. Professor Garcia describes how the test went: “We developed an experimental campaign to understand how and when amine emissions would be generated. But some of our experiments also caused interventions of the plant’s operators to ensure the plant was operating safely.”
    These interventions led to the question of how to interpret the data. Were the amine emissions the result of the stress test itself, or did the operators’ interventions indirectly affect the emissions? This was further complicated by a general lack of understanding of the mechanisms behind amine emissions. “In short, we had an expensive and successful campaign that showed that amine emissions can be a problem, but no tools to further analyze the data,” says Smit.
    He continues: “When Susana Garcia mentioned this to me, it sounded indeed like an impossible problem to solve. But she also mentioned that they measured everything every five minutes, collecting many data. And, if there is anybody in my group that can solve impossible problems with data, it is Kevin.” Kevin Maik Jablonka, a PhD student, developed a machine learning approach that turned the amine emissions puzzle into a pattern-recognition problem.
    “We wanted to know what the emissions would be if we did not have the stress test but only the operators’ interventions,” explains Smit. “This is similar to an issue we can have in finance; for example, if you want to evaluate the effect of changes in the tax code, you would like to disentangle the effect of the tax code from, say, interventions caused by the crisis in Ukraine.”
    In the next step, Jablonka used powerful machine learning to predict future amine emissions from the plant’s data. He says: “With this model, we could predict the emissions caused by the interventions of the operators and then disentangle them from those induced by the stress test. In addition, we could use the model to run all kinds of scenarios on reducing these emissions.”
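    The disentangling step can be pictured as counterfactual prediction: fit a model that maps routine plant variables plus an intervention flag to emissions, then compare its predictions with the flag switched on and off. The sketch below illustrates that idea in Python with invented column names and synthetic data; it is not the study's actual model, features, or measurements.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000  # e.g., one reading every five minutes

# Hypothetical process variables plus a flag marking operator interventions.
df = pd.DataFrame({
    "flue_gas_flow": rng.normal(100, 10, n),
    "solvent_temperature": rng.normal(40, 3, n),
    "water_wash_temperature": rng.normal(30, 2, n),
    "operator_intervention": rng.integers(0, 2, n),
})
# Synthetic emissions signal so the example runs end to end.
df["amine_emissions"] = (
    0.05 * df["flue_gas_flow"]
    + 0.30 * df["solvent_temperature"]
    - 0.20 * df["water_wash_temperature"]
    + 2.00 * df["operator_intervention"]
    + rng.normal(0, 0.5, n)
)

features = ["flue_gas_flow", "solvent_temperature",
            "water_wash_temperature", "operator_intervention"]
model = GradientBoostingRegressor().fit(df[features], df["amine_emissions"])

# Counterfactual: what would emissions have been with no interventions?
no_intervention = df[features].copy()
no_intervention["operator_intervention"] = 0
attributable = model.predict(df[features]) - model.predict(no_intervention)
print("Mean emissions attributable to interventions:", attributable.mean())
```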
    The conclusion was described as “surprising.” As it turned out, the pilot plant had been designed for a pure amine, but the experiments were carried out on a mixture of two amines: 2-amino-2-methyl-1-propanol and piperazine (CESAR1). The scientists found that the two amines respond in opposite ways: reducing the emissions of one actually increases the emissions of the other.
    “I am very enthusiastic about the potential impact of this work; it is a completely new way of looking at a complex chemical process,” says Smit. “This type of forecasting is not something one can do with any of the conventional approaches, so it may change the way we operate chemical plants.”

  • Strengthening electron-triggered light emission

    The way electrons interact with photons of light is a key part of many modern technologies, from lasers to solar panels to LEDs. But the interaction is inherently a weak one because of a major mismatch in scale: A wavelength of visible light is about 1,000 times larger than an electron, so the way the two things affect each other is limited by that disparity.
    Now, researchers at MIT and elsewhere have come up with an innovative way to make much stronger interactions between photons and electrons possible, in the process producing a hundredfold increase in the emission of light from a phenomenon called Smith-Purcell radiation. The finding has potential implications for both commercial applications and fundamental scientific research, although it will require more years of research to make it practical.
    The findings are reported today in the journal Nature, in a paper by MIT postdocs Yi Yang (now an assistant professor at the University of Hong Kong) and Charles Roques-Carmes, MIT professors Marin Soljačić and John Joannopoulos, and five others at MIT, Harvard University, and Technion-Israel Institute of Technology.
    In a combination of computer simulations and laboratory experiments, the team found that combining a beam of electrons with a specially designed photonic crystal — a slab of silicon on an insulator, etched with an array of nanometer-scale holes — should theoretically yield emission many orders of magnitude stronger than is ordinarily possible with conventional Smith-Purcell radiation. They also experimentally recorded a one hundredfold increase in radiation in their proof-of-concept measurements.
    Unlike other approaches to producing sources of light or other electromagnetic radiation, the free-electron-based method is fully tunable — it can produce emissions of any desired wavelength, simply by adjusting the size of the photonic structure and the speed of the electrons. This may make it especially valuable for making sources of emission at wavelengths that are difficult to produce efficiently, including terahertz waves, ultraviolet light, and X-rays.
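    That tunability follows from the standard Smith-Purcell relation, in which the emitted wavelength is set by the period of the structure and the electron velocity: λ = (ℓ/m)(1/β − cos θ), where ℓ is the structure period, m the diffraction order, β the electron speed as a fraction of the speed of light, and θ the observation angle. The short sketch below evaluates this textbook relation for arbitrary example numbers; these are not the parameters of the MIT experiment.

```python
import numpy as np

def smith_purcell_wavelength(period_nm, beta, theta_deg, order=1):
    """Emitted wavelength: lambda = (period / order) * (1/beta - cos(theta))."""
    return (period_nm / order) * (1.0 / beta - np.cos(np.radians(theta_deg)))

# Same 300 nm period structure viewed at 90 degrees, with increasingly fast electrons:
for beta in (0.3, 0.5, 0.7):
    wl = smith_purcell_wavelength(period_nm=300, beta=beta, theta_deg=90)
    print(f"beta = {beta}: emission at about {wl:.0f} nm")
```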
    The team has so far demonstrated the hundredfold enhancement in emission using a repurposed electron microscope to function as an electron beam source. But they say that the basic principle involved could potentially enable far greater enhancements using devices specifically adapted for this function.

    The approach is based on a concept called flatbands, which have been widely explored in recent years in condensed matter physics and photonics but had never been applied to the basic interaction of photons and free electrons. The underlying principle involves the transfer of momentum from the electron to a group of photons, or vice versa. Whereas conventional light-electron interactions rely on producing light at a single angle, the photonic crystal is tuned in such a way that it enables the production of a whole range of angles.
    The same process could also be used in the opposite direction, using resonant light waves to propel electrons, increasing their velocity in a way that could potentially be harnessed to build miniaturized particle accelerators on a chip. These might ultimately be able to perform some functions that currently require giant underground tunnels, such as the 27-kilometer-circumference Large Hadron Collider in Switzerland.
    “If you could actually build electron accelerators on a chip,” Soljačić says, “you could make much more compact accelerators for some of the applications of interest, which would still produce very energetic electrons. That obviously would be huge. For many applications, you wouldn’t have to build these huge facilities.”
    The new system could also potentially provide a highly controllable X-ray beam for radiotherapy purposes, Roques-Carmes says.
    And the system could be used to generate multiple entangled photons, a quantum effect that could be useful in the creation of quantum-based computational and communications systems, the researchers say. “You can use electrons to couple many photons together, which is a considerably hard problem if using a purely optical approach,” says Yang. “That is one of the most exciting future directions of our work.”
    Much work remains to translate these new findings into practical devices, Soljačić cautions. It may take some years to develop the necessary interfaces between the optical and electronic components, to work out how to connect them on a single chip, and to develop an on-chip electron source that produces a continuous wavefront, among other challenges.
    “The reason this is exciting,” Roques-Carmes adds, “is because this is quite a different type of source.” Most technologies for generating light are restricted to very specific ranges of color or wavelength, and “it’s usually difficult to move that emission frequency. Here it’s completely tunable. Simply by changing the velocity of the electrons, you can change the emission frequency. … That excites us about the potential of these sources. Because they’re different, they offer new types of opportunities.”
    But, Soljačić concludes, “in order for them to become truly competitive with other types of sources, I think it will require some more years of research. I would say that with some serious effort, in two to five years they might start competing in at least some areas of radiation.”
    The research team also included Steven Kooi at MIT’s Institute for Soldier Nanotechnologies, Haoning Tang and Eric Mazur at Harvard University, Justin Beroz at MIT, and Ido Kaminer at Technion-Israel Institute of Technology. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the U.S. Air Force Office of Scientific Research, and the U.S. Office of Naval Research.

  • The interior design of our cells: Database of 200,000 cell images yields new mathematical framework to understand our cellular building blocks

    Working with hundreds of thousands of high-resolution images, the team at the Allen Institute for Cell Science, a division of the Allen Institute, put numbers on the internal organization of human cells — a biological concept that has to date proven exceptionally difficult to quantify.
    Through that work, the scientists also captured details about the rich variation in cell shape even among genetically identical cells grown under identical conditions. The team described their work in a paper published in the journal Nature today.
    “The way cells are organized tells us something about their behavior and identity,” said Susanne Rafelski, Ph.D., Deputy Director of the Allen Institute for Cell Science, who led the study along with Senior Scientist Matheus Viana, Ph.D. “What’s been missing from the field, as we all try to understand how cells change in health and disease, is a rigorous way to deal with this kind of organization. We haven’t yet tapped into that information.”
    This study provides a roadmap for biologists to understand organization of different kinds of cells in a measurable, quantitative way, Rafelski said. It also reveals some key organizational principles of the cells the Allen Institute team studies, which are known as human induced pluripotent stem cells.
    Understanding how cells organize themselves under healthy conditions — and the full range of variability contained within “normal” — can help scientists better understand what goes wrong in disease. The image dataset, genetically engineered stem cells, and code that went into this study are all publicly available for other scientists in the community to use.
    “Part of what makes cell biology seem intractable is the fact that every cell looks different, even when they are the same type of cell. This study from the Allen Institute shows that this same variability that has long plagued the field is, in fact, an opportunity to study the rules by which a cell is put together,” said Wallace Marshall, Ph.D., Professor of Biochemistry and Biophysics at the University of California, San Francisco, and a member of the Allen Institute for Cell Science’s Scientific Advisory Board. “This approach is generalizable to virtually any cell, and I expect that many others will adopt the same methodology.”
    Computing the pear-ness of our cells

    In a body of work launched more than seven years ago, the Allen Institute team first built a collection of stem cells genetically engineered to light up different internal structures under a fluorescent microscope. With cell lines in hand that label 25 individual structures, the scientists then captured high-resolution, 3D images of more than 200,000 different cells.
    All this to ask one seemingly straightforward question: How do our cells organize their interiors?
    Getting to the answer, it turned out, is really complex. Imagine setting up your office with hundreds of different pieces of furniture, all of which need to be readily accessed, and many of which need to move freely or interact depending on their task. Now imagine your office is a sac of liquid surrounded by a thin membrane, and many of those hundreds of pieces of furniture are even smaller bags of liquid. Talk about an interior design nightmare.
    The scientists wanted to know: How do all those tiny cellular structures arrange themselves compared to each other? Is “structure A” always in the same place, or is it random?
    The team ran into a challenge comparing the same structure between two different cells. Even though the cells under study were genetically identical and reared in the same laboratory environment, their shapes varied substantially. The scientists realized that it would be impossible to compare the position of structure A in two different cells if one cell was short and blobby and the other was long and pear-shaped. So they put numbers on those stubby blobs and elongated pears.

    Using computational analyses, the team developed what they call a “shape space” that objectively describes each stem cell’s external shape. That shape space includes eight different dimensions of shape variation, things like height, volume, elongation, and the aptly described “pear-ness” and “bean-ness.” The scientists could then compare apples to apples (or beans to beans), looking at organization of cellular structures inside all similarly shaped cells.
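    For a concrete (if simplified) picture of what building a shape space involves, the sketch below represents every cell as a vector of shape descriptors and uses principal component analysis to reduce them to eight axes of variation. The descriptors here are random placeholders and the pipeline is generic, not the authors' actual method, which works from 3D images of each cell; it is meant only to illustrate the dimensionality-reduction step.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_cells = 1000

# Placeholder per-cell shape descriptors (stand-ins for height, volume, elongation, ...).
# Real descriptors are strongly correlated, which is what gives the leading modes meaning.
descriptors = rng.normal(size=(n_cells, 30))

X = StandardScaler().fit_transform(descriptors)
shape_space = PCA(n_components=8)        # eight shape modes, as in the study
coords = shape_space.fit_transform(X)    # each cell's coordinates in shape space

print("Variance explained per mode:", np.round(shape_space.explained_variance_ratio_, 3))

# Cells that land near each other in `coords` have similar overall shapes, so the
# placement of internal structures can then be compared within those neighborhoods.
```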
    “We know that in biology, shape and function are interrelated, and understanding cell shape is important to understand how the cells function,” Viana said. “We’ve come up with a framework that allows us to measure a cell’s shape, and the moment you do that you can find cells that are similar shapes, and for those cells you can then look inside and see how everything is arranged.”
    Strict organization
    When they looked at the position of the 25 highlighted structures, comparing those structures in groups of cells with similar shapes, they found that all the cells set up shop in remarkably similar ways. Despite the massive variations in cell shape, their internal organization was strikingly consistent.
    If you’re looking at how thousands of white-collar workers arrange their furniture in a high-rise office building, it’s as if every worker put their desk smack in the middle of their office and their filing cabinet precisely in the far-left corner, no matter the size or shape of the office. Now say you found one office with a filing cabinet thrown on the floor and papers strewn everywhere — that might tell you something about the state of that particular office and its occupant.
    The same goes for cells. Finding deviations from the normal state of affairs could give scientists important information about how cells change when they transition from stationary to mobile or prepare to divide, or about what goes wrong at the microscopic level in disease. The researchers looked at two variations in their dataset — cells at the edges of colonies of cells, and cells that were undergoing division to create new daughter cells, a process known as mitosis. In these two states, the scientists were able to find changes in internal organization correlating with the cells’ different environments or activities.
    “This study brings together everything we’ve been doing at the Allen Institute for Cell Science since the institute was launched,” said Ru Gunawardane, Ph.D., Executive Director of the Allen Institute for Cell Science. “We built all of this from scratch, including the metrics to measure and compare different aspects of how cells are organized. What I’m truly excited about is how we and others in the community can now build on this and ask questions about cell biology that we could never ask before.”

  • A step towards solar fuels out of thin air

    A device that can harvest water from the air and provide hydrogen fuel — entirely powered by solar energy — has been a dream for researchers for decades. Now, EPFL chemical engineer Kevin Sivula and his team have made a significant step towards bringing this vision closer to reality. They have developed an ingenious yet simple system that combines semiconductor-based technology with novel electrodes that have two key characteristics: they are porous, to maximize contact with water in the air; and transparent, to maximize sunlight exposure of the semiconductor coating. When the device is simply exposed to sunlight, it takes water from the air and produces hydrogen gas. The results are published on 4 January 2023 in Advanced Materials.
    What’s new? Their novel gas diffusion electrodes, which are transparent, porous, and conductive, enable this solar-powered technology to turn water — in its gas state, from the air — into hydrogen fuel.
    “To realize a sustainable society, we need ways to store renewable energy as chemicals that can be used as fuels and feedstocks in industry. Solar energy is the most abundant form of renewable energy, and we are striving to develop economically-competitive ways to produce solar fuels,” says Sivula of EPFL’s Laboratory for Molecular Engineering of Optoelectronic Nanomaterials and principal investigator of the study.
    Inspiration from a plant’s leaf
    In their search for renewable, fossil-free fuels, the EPFL engineers, in collaboration with Toyota Motor Europe, took inspiration from the way plants are able to convert sunlight into chemical energy using carbon dioxide from the air. A plant essentially harvests carbon dioxide and water from its environment, and with the extra boost of energy from sunlight, can transform these molecules into sugars and starches, a process known as photosynthesis. The sunlight’s energy is stored in the form of chemical bonds inside of the sugars and starches.
    The transparent gas diffusion electrodes developed by Sivula and his team, when coated with a light-harvesting semiconductor material, indeed act like an artificial leaf, harvesting water from the air and using sunlight to produce hydrogen gas. The sunlight’s energy is stored in the chemical bonds of the hydrogen.

    Instead of building the electrodes from traditional layers that are opaque to sunlight, the researchers based them on a substrate that is actually a three-dimensional mesh of felted glass fibers.
    Marina Caretti, lead author of the work, says, “Developing our prototype device was challenging since transparent gas-diffusion electrodes have not been previously demonstrated, and we had to develop new procedures for each step. However, since each step is relatively simple and scalable, I think that our approach will open new horizons for a wide range of applications starting from gas diffusion substrates for solar-driven hydrogen production.”
    From liquid water to humidity in the air
    Sivula and other research groups have previously shown that it is possible to perform artificial photosynthesis by generating hydrogen fuel from liquid water and sunlight using a device called a photoelectrochemical (PEC) cell. A PEC cell is generally known as a device that uses incident light to stimulate a photosensitive material, like a semiconductor, immersed in liquid solution to cause a chemical reaction. But for practical purposes, this process has its disadvantages; for example, it is complicated to make large-area PEC devices that use liquid.
    Sivula wanted to show that the PEC technology can be adapted for harvesting humidity from the air instead, leading to the development of their new gas diffusion electrode. Electrochemical cells (e.g. fuel cells) have already been shown to work with gases instead of liquids, but the gas diffusion electrodes used previously are opaque and incompatible with the solar-powered PEC technology.

    Now, the researchers are focusing their efforts on optimizing the system. What is the ideal fiber size? The ideal pore size? The ideal semiconductors and membrane materials? These questions are being pursued in the EU project “Sun-to-X,” which is dedicated to advancing this technology and developing new ways to convert hydrogen into liquid fuels.
    Making transparent, gas-diffusion electrodes
    In order to make transparent gas diffusion electrodes, the researchers start with a type of glass wool, which is essentially quartz (silicon dioxide) fibers, and process it into felt wafers by fusing the fibers together at high temperature. Next, the wafer is coated with a transparent thin film of fluorine-doped tin oxide, known for its excellent conductivity, robustness, and ease of scale-up. These first steps result in a transparent, porous, and conducting wafer, essential for maximizing contact with the water molecules in the air and letting photons through. The wafer is then coated again, this time with a thin film of sunlight-absorbing semiconductor materials. This second thin coating still lets light through, but appears opaque due to the large surface area of the porous substrate. As is, this coated wafer can already produce hydrogen fuel once exposed to sunlight.
    The scientists went on to build a small chamber containing the coated wafer, as well as a membrane for separating the produced hydrogen gas for measurement. When the chamber is exposed to sunlight under humid conditions, hydrogen gas is produced, achieving what the scientists set out to do: demonstrating that the concept of a transparent gas diffusion electrode for solar-powered hydrogen production works.
    While the scientists did not formally study the solar-to-hydrogen conversion efficiency in their demonstration, they acknowledge that it is modest for this prototype, and currently less than can be achieved in liquid-based PEC cells. Based on the materials used, the maximum theoretical solar-to-hydrogen conversion efficiency of the coated wafer is 12%, whereas liquid cells have been demonstrated at up to 19% efficiency.
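    For context on what those percentages measure, solar-to-hydrogen (STH) efficiency is conventionally defined as the chemical power stored in the produced hydrogen divided by the incident solar power; for a photoelectrochemical device this is often written as STH = j × 1.23 V × η_F / P_solar, with j the photocurrent density, η_F the Faradaic efficiency, and P_solar ≈ 100 mW/cm² for standard sunlight. The snippet below evaluates that standard formula with made-up numbers, purely to illustrate the metric; it is not data from this study.

```python
def sth_efficiency(current_density_mA_cm2, faradaic_efficiency=1.0, solar_power_mW_cm2=100.0):
    """Solar-to-hydrogen efficiency: (j * 1.23 V * Faradaic efficiency) / incident solar power."""
    return current_density_mA_cm2 * 1.23 * faradaic_efficiency / solar_power_mW_cm2

# Example: ~10 mA/cm^2 at unity Faradaic efficiency corresponds to roughly 12% STH.
print(f"{sth_efficiency(10.0):.1%}")
```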