More stories

  • Controlling insulin production with a smartwatch

    Many modern fitness trackers and smartwatches feature integrated LEDs. The green light emitted, whether continuous or pulsed, penetrates the skin and can be used to measure the wearer’s heart rate during physical activity or while at rest.
    These watches have become extremely popular. A team of ETH researchers now wants to capitalise on that popularity by using the LEDs to control genes and change the behaviour of cells through the skin. The team is led by Martin Fussenegger from the Department of Biosystems Science and Engineering in Basel. He explains the challenge of this undertaking: “No naturally occurring molecular system in human cells responds to green light, so we had to build something new.”
    Green light from the smartwatch activates the gene
    The ETH professor and his colleagues ultimately developed a molecular switch that, once implanted, can be activated by the green light of a smartwatch.
    The switch is linked to a gene network that the researchers introduced into human cells. As is customary, they used HEK 293 cells for the prototype. Depending on the configuration of this network — in other words, the genes it contains — it can produce insulin or other substances as soon as the cells are exposed to green light. Turning the light off inactivates the switch and halts the process.
    Standard software
    As they used the standard smartwatch software, there was no need for the researchers to develop dedicated programs. During their tests, they turned the green light on by starting the running app. “Off-the-shelf watches offer a universal solution to flip the molecular switch,” Fussenegger says. New models emit light pulses, which are even better suited to keeping the gene network running.
    The molecular switch itself is more complicated, however. The researchers integrated a molecular complex into the membrane of the cells and linked it to a connecting piece, similar to the coupling of a railway carriage. As soon as green light shines on the cell, the component that projects into the cell detaches and is transported to the cell nucleus, where it triggers an insulin-producing gene. When the green light is extinguished, the detached piece reconnects with its counterpart embedded in the membrane.
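    The on/off logic described here can be captured in a toy state-machine model. The sketch below is purely illustrative: the class name, time steps, and output units are invented for the example and are not taken from the ETH study.

    ```python
    # Toy model of the light-gated switch described above (illustrative only;
    # names and numbers are invented, not from the ETH study).

    class LightGatedSwitch:
        """Membrane complex that releases a gene-activating piece under green light."""

        def __init__(self):
            self.factor_in_nucleus = False  # is the detached piece at the gene?
            self.insulin_output = 0         # arbitrary units

        def step(self, green_light_on: bool):
            if green_light_on:
                # Green light detaches the intracellular component, which
                # travels to the nucleus and triggers the insulin gene.
                self.factor_in_nucleus = True
            else:
                # Without light, the piece re-docks at the membrane complex.
                self.factor_in_nucleus = False
            if self.factor_in_nucleus:
                self.insulin_output += 1  # produce while the gene is active

    switch = LightGatedSwitch()
    for light in [True] * 5 + [False] * 3:
        switch.step(light)
    print(switch.insulin_output)  # 5: production only during the lit steps
    ```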
    Controlling implants with wearables
    The researchers tested their system on both pork rind and live mice, implanting the appropriate cells and strapping a smartwatch onto the animals like a rucksack. By opening the watch’s running app, they turned on the green light to activate the cascade.
    “It’s the first time that an implant of this kind has been operated using commercially available, smart electronic devices — known as wearables because they are worn directly on the skin,” the ETH professor says. Most watches emit green light, a practical basis for a potential application: there is no need for users to purchase a special device.
    According to Fussenegger, however, this technology is unlikely to enter clinical practice for at least another ten years. The cells used in this prototype would have to be replaced by the user’s own cells, and the system would have to pass through the clinical phases before it could be approved, which means clearing major regulatory hurdles. “To date, only very few cell therapies have been approved,” Fussenegger says.
    Story Source:
    Materials provided by ETH Zurich. Original written by Peter Rüegg. Note: Content may be edited for style and length.

  • Magnetism drives metals to insulators in new experiment

    Like all metals, silver, copper, and gold are conductors. Electrons flow across them, carrying heat and electricity. While gold is a good conductor under any conditions, some materials have the property of behaving like metal conductors only if temperatures are high enough; at low temperatures, they act like insulators and do not do a good job of carrying electricity. In other words, these unusual materials go from acting like a chunk of gold to acting like a piece of wood as temperatures are lowered. Physicists have developed theories to explain this so-called metal-insulator transition, but the mechanisms behind the transitions are not always clear.
    “In some cases, it is not easy to predict whether a material is a metal or an insulator,” explains Caltech visiting associate Yejun Feng of the Okinawa Institute for Science and Technology Graduate University. “Metals are always good conductors no matter what, but some other so-called apparent metals are insulators for reasons that are not well understood.” Feng has puzzled over this question for at least five years; others on his team, such as collaborator David Mandrus at the University of Tennessee, have thought about the problem for more than two decades.
    Now, a new study from Feng and colleagues, published in Nature Communications, offers the cleanest experimental proof yet of a metal-insulator transition theory proposed 70 years ago by physicist John Slater. According to that theory, magnetism, which results when the so-called “spins” of electrons in a material are organized in an orderly fashion, can by itself drive the metal-insulator transition; in previous experiments, changes in the lattice structure of a material, or interactions between electrons based on their charges, were deemed responsible.
    “This is a problem that goes back to a theory introduced in 1951, but until now it has been very hard to find an experimental system that actually demonstrates the spin-spin interactions as the driving force because of confounding factors,” explains co-author Thomas Rosenbaum, a professor of physics at Caltech who is also the Institute’s president and the Sonja and William Davidow Presidential Chair.
    “Slater proposed that, as the temperature is lowered, an ordered magnetic state would prevent electrons from flowing through the material,” Rosenbaum explains. “Although his idea is theoretically sound, it turns out that for the vast majority of materials, the way that electrons interact with each other electronically has a much stronger effect than the magnetic interactions, which made the task of proving the Slater mechanism challenging.”
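    To make the metal/insulator contrast concrete: in a gapped (insulating) state, conduction is thermally activated and collapses as the temperature falls, whereas a metal keeps conducting at any temperature. A minimal sketch of activated conduction follows, with an assumed gap size and an arbitrary prefactor rather than values from the study.

    ```python
    import math

    # Thermally activated conduction: sigma ~ sigma0 * exp(-gap / (2 kB T)).
    # The gap and prefactor are assumed for illustration, not from the paper.

    K_B = 8.617e-5   # Boltzmann constant, eV/K
    GAP = 0.1        # assumed energy gap opened by magnetic order, eV

    def activated_conductivity(temp_k: float, sigma0: float = 1.0) -> float:
        """Conductivity of a gapped (insulating) state at temperature temp_k."""
        return sigma0 * math.exp(-GAP / (2 * K_B * temp_k))

    for temp in (300, 100, 30):
        print(f"T = {temp:3d} K  ->  sigma = {activated_conductivity(temp):.2e}")
    # Output falls by orders of magnitude as T drops: metal-like conduction
    # at high temperature, insulating behaviour at low temperature.
    ```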
    The research will help answer fundamental questions about how different materials behave, and it may also have technological applications, for example in the field of spintronics, in which electrical devices would be based on the spins of electrons rather than, as is routine now, on their charges. “Fundamental questions about metal and insulators will be relevant in the upcoming technological revolution,” says Feng.

  • New form of silicon could enable next-gen electronic and energy devices

    A team led by Carnegie’s Thomas Shiell and Timothy Strobel developed a new method for synthesizing a novel crystalline form of silicon with a hexagonal structure. It could potentially be used to create next-generation electronic and energy devices with properties that exceed those of the “normal” cubic form of silicon used today.
    Their work is published in Physical Review Letters.
    Silicon plays an outsized role in human life. It is the second most abundant element in the Earth’s crust. When mixed with other elements, it is essential for many construction and infrastructure projects. And in pure elemental form, it is crucial enough to computing that the longstanding technological hub of the U.S. — California’s Silicon Valley — was nicknamed in honor of it.
    Like all elements, silicon can take different crystalline forms, called allotropes, in the same way that soft graphite and super-hard diamond are both forms of carbon. The form of silicon most commonly used in electronic devices, including computers and solar panels, has the same structure as diamond. Despite its ubiquity, this form of silicon is not actually fully optimized for next-generation applications, including high-performance transistors and some photovoltaic devices.
    While many different silicon allotropes with enhanced physical properties are theoretically possible, only a handful exist in practice, because few accessible synthetic pathways are known.
    Strobel’s lab had previously developed a revolutionary new form of silicon, called Si24, which has an open framework composed of a series of one-dimensional channels. In this new work, Shiell and Strobel led a team that used Si24 as the starting point in a multi-stage synthesis pathway that resulted in highly oriented crystals in a form called 4H-silicon, named for its four repeating layers in a hexagonal structure.
    “Interest in hexagonal silicon dates back to the 1960s, because of the possibility of tunable electronic properties, which could enhance performance beyond the cubic form,” Strobel explained.
    Hexagonal forms of silicon have been synthesized previously, but only through the deposition of thin films or as nanocrystals that coexist with disordered material. The newly demonstrated Si24 pathway produces the first high-quality, bulk crystals that serve as the basis for future research activities.
    Using PALLAS, an advanced computing tool previously developed by members of the team to predict structural transition pathways — like how water becomes steam when heated or ice when frozen — the group was able to understand the transition mechanism from Si24 to 4H-Si, and the structural relationship that allows the product crystals to remain highly oriented.
    “In addition to expanding our fundamental control over the synthesis of novel structures, the discovery of bulk 4H-silicon crystals opens the door to exciting future research prospects for tuning the optical and electronic properties through strain engineering and elemental substitution,” Shiell said. “We could potentially use this method to create seed crystals to grow large volumes of the 4H structure with properties that potentially exceed those of diamond silicon.”
    Story Source:
    Materials provided by Carnegie Institution for Science. Note: Content may be edited for style and length.

  • Early warning system for COVID-19 gets faster through wastewater detection and tracing

    Math continues to be a powerful force against COVID-19.
    Its latest contribution is a sophisticated algorithm that uses municipal wastewater systems to determine key locations for detecting COVID-19 and tracing it back to its human source, which may be a newly infected person or a hot spot of infected people. Timing is key, say the researchers who created the algorithm, especially as emerging variants make COVID-19 better at transmitting itself.
    “Being quick is what we want because in the meantime, a newly-infected person can infect others,” said Oded Berman, a professor of operations management and statistics at the University of Toronto’s Rotman School of Management.
    This latest research builds on previous work Prof. Berman did with co-investigators Richard Larson of the Massachusetts Institute of Technology and Mehdi Nourinejad of York University. The trio initially developed two algorithms for identifying optimal locations in a sewer system for manual COVID-19 testing and for subsequent tracing back to the source. Sewers are a rich environment for detecting the presence of the disease upstream, because genetic remnants of the virus are shed in the stool of infected people up to a week before they may even know they are sick.
    The investigators’ new research refines and optimizes that initial work by more accurately modelling a typical municipal sewer system’s treelike network of one-way pipes and manholes, and by speeding up the detection/tracing process through automatic sensors installed in specific manholes, chosen according to an easier-to-use algorithm.
    Under this scenario, a sensor sends out an alert any time COVID-19 is detected. Manual testing is then done at a few manholes further upstream, also chosen according to the algorithm, until the final source is located, be that a small group of homes or a “hotspot” neighbourhood. Residents in that much smaller area can then be contacted for further testing and isolation as needed, limiting potential new outbreaks.
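    As a rough illustration of the trace-back step (a generic search strategy, not the researchers’ optimized algorithm, whose details are not reproduced here), the process can be pictured as following positive test results upstream through a treelike network of one-way pipes until the signal disappears. The manhole names and the test function below are hypothetical.

    ```python
    from typing import Dict, List

    # Hypothetical sewer fragment: each manhole maps to the manholes
    # immediately upstream of it (flow is one-way, towards the sensor).
    UPSTREAM: Dict[str, List[str]] = {
        "sensor_7": ["mh_a", "mh_b"],
        "mh_a": ["mh_a1", "mh_a2"],
        "mh_b": [],
        "mh_a1": [],
        "mh_a2": [],
    }

    POSITIVE = {"sensor_7", "mh_a", "mh_a2"}  # demo ground truth

    def manual_test(manhole: str) -> bool:
        """Stand-in for physically sampling wastewater at this manhole."""
        return manhole in POSITIVE

    def trace_source(alerting_sensor: str) -> str:
        """Follow positive results upstream until the branch goes quiet."""
        node = alerting_sensor
        while True:
            positive_branches = [m for m in UPSTREAM[node] if manual_test(m)]
            if not positive_branches:
                return node  # source is in the catchment just above this node
            node = positive_branches[0]  # demo: follow the first positive branch

    print(trace_source("sensor_7"))  # -> mh_a2
    ```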

  • An atom chip interferometer that could detect quantum gravity

    Physicists in Israel have created a quantum interferometer on an atom chip. This device can be used to explore the fundamentals of quantum theory by studying the interference pattern between two beams of atoms. University of Groningen physicist Anupam Mazumdar describes how the device could be adapted to use mesoscopic particles instead of atoms, a modification that would open the way to new applications. A description of the device, along with theoretical considerations concerning its application by Mazumdar, was published on 28 May in the journal Science Advances.
    The device, which scientists from Ben-Gurion University of the Negev created, is a so-called Stern-Gerlach interferometer, first proposed one hundred years ago by German physicists Otto Stern and Walther Gerlach. Their original aim of creating an interferometer with freely propagating atoms exposed to gradients from macroscopic magnets had not been practically realized until now. ‘Such experiments have been done using photons, but never with atoms’, explains Anupam Mazumdar, Professor of Theoretical Physics at the University of Groningen and one of the co-authors of the article in Science Advances.
    Diamonds
    The Israeli scientists, led by Professor Ron Folman, created an interferometer on an atom chip, a device that can confine and manipulate atoms. A beam of rubidium atoms is levitated over the chip using magnets. Magnetic gradients are used to split the beam according to the spin values of the individual atoms. Spin is a quantum property, associated with a magnetic moment, that here can take one of two values: up or down. The spin-up and spin-down atoms are separated by a magnetic gradient. Subsequently, the two divergent beams are brought together again and recombined. The spin values are then measured, and an interference pattern forms. Throughout this interferometer, the opposing spins are entangled, which makes the device sensitive to other quantum phenomena.
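    A toy picture of what recombination produces: a relative phase accumulated between the two spin paths yields cos² interference fringes when the beams are brought back together. The sketch below is a generic two-path interference calculation with invented numbers, not a simulation of the actual chip.

    ```python
    import numpy as np

    # Generic two-path interference: a relative phase phi between the
    # spin-up and spin-down paths gives P(up) = cos^2(phi / 2) fringes.
    phi = np.linspace(0, 4 * np.pi, 9)   # relative phase between the paths
    p_up = np.cos(phi / 2) ** 2          # detection probability after recombining

    for phase, prob in zip(phi, p_up):
        print(f"phase = {phase:5.2f} rad  ->  P(up) = {prob:.2f}")
    # The oscillation of P(up) with phase is the interference pattern.
    ```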
    Mazumdar was not involved in the construction of the chip, but he contributed theoretical insights to the paper. Together with a number of his colleagues, he previously proposed an experiment to determine whether gravity is in fact a quantum phenomenon, using entangled mesoscopic objects: tiny diamonds that can be brought into a state of quantum superposition. ‘It would be possible to use these diamonds instead of the rubidium atoms on this interferometer’, he explains. However, this would be highly complex, as the device, which is currently operated at room temperature, would need to be cooled down to around 1 Kelvin for the mesoscopic experiment.
    Free fall
    If this is realized, two of these atom chips could be dropped in free fall together (to neutralize external gravity), so that any interaction occurring between them would depend on the gravitational pull between the two chips. Mazumdar and his colleagues aim to determine whether quantum entanglement of the pair occurs during free fall, which would mean that the force of gravity between the diamonds is indeed a quantum phenomenon. Another application of this experiment is the detection of gravitational waves; their deformation of space-time should be visible in the interference pattern.
    The actual implementation of this experiment is still a long way off, but Mazumdar is very excited now that the interferometer has been created. ‘It is already [a] quantum sensor, although we still have to work out exactly what it can detect. The experiment is like the first steps of a baby — now, we have to guide it to reach maturity.’
    Story Source:
    Materials provided by University of Groningen. Note: Content may be edited for style and length.

  • Using HPC and experiment, researchers continue to refine graphene production

    Graphene may be among the most exciting scientific discoveries of the last century. While it is strikingly familiar to us — graphene is considered an allotrope of carbon, meaning that it is essentially the same substance as graphite but in a different atomic structure — graphene also opened up a new world of possibilities for designing and building new technologies.
    The material is two-dimensional, meaning that each “sheet” of graphene is only one atom thick, but its bonds make it as strong as some of the world’s hardest metal alloys while remaining lightweight and flexible. This valuable, unique mix of properties has piqued the interest of scientists from a wide range of fields, leading to research into using graphene for next-generation electronics, new coatings on industrial instruments and tools, and new biomedical technologies.
    Graphene’s immense potential is perhaps also the source of one of its biggest challenges — graphene is difficult to produce in large volumes, and demand for the material is continually growing. Recent research indicates that using a liquid copper catalyst may be a fast, efficient way to produce graphene, but researchers have only a limited understanding of the molecular interactions happening during the brief, chaotic moments that lead to graphene formation, meaning they cannot yet use the method to reliably produce flawless graphene sheets.
    In order to address these challenges and help develop methods for quicker graphene production, a team of researchers at the Technical University of Munich (TUM) has been using the JUWELS and SuperMUC-NG high-performance computing (HPC) systems at the Jülich Supercomputing Centre (JSC) and Leibniz Supercomputing Centre (LRZ) to run high-resolution simulations of graphene formation on liquid copper.
    A window into experiment
    Graphene’s appeal primarily stems from the material’s perfectly uniform crystal structure, meaning that producing graphene with impurities is wasted effort. For laboratory settings or circumstances where only a small amount of graphene is needed, researchers can place a piece of Scotch tape onto a graphite crystal and “peel” away atomic layers of the graphite, using a technique that resembles how one would use tape or another adhesive to remove pet hair from clothing. While this reliably produces flawless graphene layers, the process is slow and impractical for creating graphene for large-scale applications.

  • Let's talk about the elephant in the data

    You would not be surprised to see an elephant in the savanna or a plate in your kitchen. Based on your prior experiences and knowledge, you know that is where elephants and plates are often to be found. If you saw a mysterious object in your kitchen, how would you figure out what it was? You would rely on your expectations or prior knowledge. Should a computer approach the problem in the same way? The answer may surprise you. Cold Spring Harbor Laboratory Professor Partha Mitra described how he views problems like these in a “Perspective” in Nature Machine Intelligence. He hopes his insights will help researchers teach computers how to analyze complex systems more effectively.
    Mitra thinks it helps to understand the nature of knowledge. Mathematically speaking, many data scientists try to create a model that can “fit an elephant,” or a set of complex data points. Mitra asks researchers to consider what philosophical framework would work best for a particular machine learning task:
    “In philosophical terms, the idea is that there are these two extremes. One, you could say ‘rationalist,’ and the other ‘empiricist’ points of view. And really, it’s about the role of prior knowledge or prior assumptions.”
    Rationalists versus empiricists
    A rationalist views the world through the lens of prior knowledge. They expect a plate to be in a kitchen and an elephant in a savanna.
    An empiricist analyzes the data exactly as it is presented. When they visit the savanna, they no more expect to see an elephant than they do a plate.
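    In machine-learning terms, the two stances map onto fitting with and without prior assumptions. A minimal sketch follows, using ridge regression (equivalent to a Gaussian prior on the weights) as the “rationalist” and ordinary least squares as the “empiricist”; the data and penalty strength are invented for illustration and are not from Mitra’s Perspective.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 5))                  # 10 noisy observations, 5 features
    true_w = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
    y = X @ true_w + 0.1 * rng.normal(size=10)

    # Empiricist: ordinary least squares, the data alone decide the fit.
    w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

    # Rationalist: ridge regression; the penalty encodes a prior belief
    # that the weights are small.
    lam = 1.0
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

    print("OLS  :", np.round(w_ols, 2))
    print("Ridge:", np.round(w_ridge, 2))
    ```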

  • A better way to introduce digital tech in the workplace

    When bringing technologies into the workplace, it pays to be realistic. Often, for instance, bringing new digital technology into an organization does not radically improve a firm’s operations. Despite high-level planning, a more frequent result is the messy process of frontline employees figuring out how they can get tech tools to help them to some degree.
    That task can easily fall on overburdened workers who have to grapple with getting things done, but don’t always have much voice in an organization. So isn’t there a way to think systematically about implementing digital technology in the workplace?
    MIT Professor Kate Kellogg thinks there is, and calls it “experimentalist governance of digital technology”: Let different parts of an organization experiment with the technology — and then centrally remove roadblocks to adopt the best practices that emerge, firm-wide.
    “If you want to get value out of new digital technology, you need to allow local teams to adapt the technology to their setting,” says Kellogg, the David J. McGrath Jr. Professor of Management and Innovation at the MIT Sloan School of Management. “You also need to form a central group that’s tracking all these local experiments, and revising processes in response to problems and possibilities. If you just let everyone do everything locally, you’re going to see resistance to the technology, particularly among frontline employees.”
    Kellogg’s perspective comes after she conducted an 18-month close ethnographic study of a teaching hospital, examining many facets of its daily workings — including things like the integration of technology into everyday medical practices.
    Some of the insights from that organizational research now appear in a paper Kellogg has written, “Local Adaptation Without Work Intensification: Experimentalist Governance of Digital Technology for Mutually Beneficial Role Reconfiguration in Organizations,” recently published online in the journal Organization Science.