More stories

  • At the edge of graphene-based electronics

    A pressing quest in the field of nanoelectronics is the search for a material that could replace silicon. Graphene has seemed promising for decades. But its potential faltered along the way, due to damaging processing methods and the lack of a new electronics paradigm to embrace it. With silicon nearly maxed out in its ability to accommodate faster computing, the next big nanoelectronics platform is needed now more than ever.
    Walter de Heer, Regents’ Professor in the School of Physics at the Georgia Institute of Technology, has taken a critical step forward in making the case for a successor to silicon. De Heer and his collaborators developed a new nanoelectronics platform based on graphene — a single sheet of carbon atoms. The technology is compatible with conventional microelectronics manufacturing, a necessity for any viable alternative to silicon. In the course of their research, published in Nature Communications, the team may have also discovered a new quasiparticle. Their discovery could lead to manufacturing smaller, faster, more efficient, and more sustainable computer chips, and has potential implications for quantum and high-performance computing.
    “Graphene’s power lies in its flat, two-dimensional structure that is held together by the strongest chemical bonds known,” de Heer said. “It was clear from the beginning that graphene can be miniaturized to a far greater extent than silicon — enabling much smaller devices, while operating at higher speeds and producing much less heat. This means that, in principle, more devices can be packed on a single chip of graphene than with silicon.”
    In 2001, de Heer proposed an alternative form of electronics based on epitaxial graphene, or epigraphene — a layer of graphene that was found to spontaneously form on top of silicon carbide crystal, a semiconductor used in high power electronics. At the time, researchers found that electric currents flow without resistance along epigraphene’s edges, and that graphene devices could be seamlessly interconnected without metal wires. This combination allows for a form of electronics that relies on the unique light-like properties of graphene electrons.
    “Quantum interference has been observed in carbon nanotubes at low temperatures, and we expect to see similar effects in epigraphene ribbons and networks,” de Heer said. “This important feature of graphene is not possible with silicon.”
    Building the Platform
    To create the new nanoelectronics platform, the researchers grew a modified form of epigraphene on a silicon carbide crystal substrate. In collaboration with researchers at the Tianjin International Center for Nanoparticles and Nanosystems at Tianjin University in China, they produced unique silicon carbide chips from electronics-grade silicon carbide crystals. The graphene itself was grown at de Heer’s laboratory at Georgia Tech using patented furnaces.

  • The physical intelligence of ant and robot collectives

    Individual ants are relatively simple creatures, and yet a colony of ants can perform remarkably complex tasks, such as intricate construction, foraging, and defense.
    Recently, Harvard researchers took inspiration from ants to design a team of relatively simple robots that can work collectively to perform complex tasks using only a few basic parameters.
    The research was published in eLife.
    “This project continued along an abiding interest in understanding the collective dynamics of social insects such as termites and bees, especially how these insects can manipulate the environment to create complex functional architectures,” said L Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and Physics, and senior author of the paper.
    The research team began by studying how black carpenter ants work together to excavate out of and escape from a soft corral.
    “At first, the ants inside the corral moved around randomly, communicating via their antennae before they started working together to escape the corral,” said S Ganga Prasath, a postdoctoral fellow at the Harvard John A. Paulson School of Engineering and Applied Sciences and one of the lead authors of the paper.

  • Should we tax robots?

    What if the U.S. placed a tax on robots? The concept has been publicly discussed by policy analysts, scholars, and Bill Gates (who favors the notion). Because robots can replace jobs, the idea goes, a stiff tax on them would give firms an incentive to help retain workers, while also compensating for a drop-off in payroll taxes when robots are used. Thus far, South Korea has reduced incentives for firms to deploy robots; European Union policymakers, on the other hand, considered a robot tax but did not enact it.
    Now a study by MIT economists scrutinizes the existing evidence and suggests the optimal policy in this situation would indeed include a tax on robots, but only a modest one. The same applies to taxes on foreign trade that would also reduce U.S. jobs, the research finds.
    “Our finding suggests that taxes on either robots or imported goods should be pretty small,” says Arnaud Costinot, an MIT economist and co-author of a published paper detailing the findings. “Although robots have an effect on income inequality … they still lead to optimal taxes that are modest.”
    Specifically, the study finds that a tax on robots should range from 1 percent to 3.7 percent of their value, while trade taxes would be from 0.03 percent to 0.11 percent, given current U.S. income taxes.
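    As a back-of-the-envelope illustration of what those bands imply, the levy is simply the asset’s value multiplied by the rate. In the sketch below, the percentage ranges come from the study, but the asset prices are hypothetical figures chosen purely for illustration:

    ```python
    # Illustrative only: the rate bands are from the study; the prices are made up.
    ROBOT_TAX_RANGE = (0.01, 0.037)     # 1% to 3.7% of robot value
    TRADE_TAX_RANGE = (0.0003, 0.0011)  # 0.03% to 0.11% of import value

    def tax_band(value, rate_range):
        """Return the (low, high) tax owed on an asset of the given value."""
        low, high = rate_range
        return value * low, value * high

    # A hypothetical $250,000 industrial robot would owe $2,500 to $9,250.
    print(tax_band(250_000, ROBOT_TAX_RANGE))
    # A hypothetical $1,000,000 shipment of imports would owe $300 to $1,100.
    print(tax_band(1_000_000, TRADE_TAX_RANGE))
    ```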
    “We came into this not knowing what would happen,” says Iván Werning, an MIT economist and the other co-author of the study. “We had all the potential ingredients for this to be a big tax, so that by stopping technology or trade you would have less inequality, but … for now, we find a tax in the one-digit range, and for trade, even smaller taxes.”
    The paper, “Robots, Trade, and Luddism: A Sufficient Statistic Approach to Optimal Technology Regulation,” appears in advance online form in The Review of Economic Studies. Costinot is a professor of economics and associate head of the MIT Department of Economics; Werning is the department’s Robert M. Solow Professor of Economics.

  • Crystalline materials: Making the unimaginable possible

    The world’s best artists can take a handful of differently colored paints and create a museum-worthy canvas that looks like nothing else. They do so by drawing upon inspiration, knowledge of what’s been done in the past and design rules they learned after years in the studio.
    Chemists work in a similar way when inventing new compounds. Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, Northwestern University and The University of Chicago have developed a new method for discovering and making new crystalline materials with two or more elements.
    “We expect that our work will prove extremely valuable to the chemistry, materials and condensed matter communities for synthesizing new and currently unpredictable materials with exotic properties,” said Mercouri Kanatzidis, a chemistry professor at Northwestern with a joint appointment at Argonne.
    “Our invention method grew out of research on unconventional superconductors,” said Xiuquan Zhou, a postdoc at Argonne and first author of the paper. ​”These are solids with two or more elements, at least one of which is not a metal. And they cease to resist the passage of electricity at different temperatures — anywhere from colder than outer space to that in my office.”
    Over the last five decades, scientists have discovered and made many unconventional superconductors with surprising magnetic and electrical properties. Such materials have a wide gamut of possible applications, such as improved power generation, energy transmission and high-speed transportation. They also have the potential for incorporation into future particle accelerators, magnetic resonance imaging systems, quantum computers and energy-efficient microelectronics.
    The team’s method starts with a solution made of two components. One is a highly effective solvent that dissolves and reacts with any solids added to the solution. The other is a weaker solvent, included to tune the reaction so that a new solid forms when different elements are added. This tuning involves changing the ratio of the two components and the temperature, which here is quite high: from 750 to 1,300 degrees Fahrenheit (roughly 400 to 700 degrees Celsius).

  • 3D patient tumor avatars: Maximizing their potential for next-generation precision oncology

    At any time, most cancer patients are receiving a treatment that does not significantly benefit them while enduring bodily and financial toxicity. Aiming to guide each patient to the optimal treatment, precision medicine has been expanding from genetic mutations to other drivers of clinical outcome. There has been a concerted effort to create “avatars” of patient tumors for testing and selecting therapies before administering them to patients.
    A recently published Cancer Cell paper, which represents several National Cancer Institute consortia and includes key opinion leaders from both the research and clinical sectors in the United States and Europe, laid out the vision for next-generation, functional precision medicine by recommending measures to enable 3D patient tumor avatars (3D-PTAs) to guide treatment decisions in the clinic. According to Dr. Xiling Shen, the corresponding author of the article and chief scientific officer of the Terasaki Institute for Biomedical Innovation, the power of 3D-PTAs, which include patient-derived organoids, 3D bioprinting, and microscale models, lies in their accurate real-life depiction of a tumor and its microenvironment, and in their speed and scalability for testing and predicting the efficacy of prospective therapeutic drugs. To fully realize this aim and maximize clinical accuracy, however, many steps are needed to standardize methods and criteria, design clinical trials, and incorporate complete patient data for the best possible outcome in personalized care.
    The use of such tools and resources can involve a great variety of materials, methods, and data handling, however, and to ensure accuracy and integrity in clinical decision-making, major efforts are needed to aggregate, standardize, and validate the uses of 3D-PTAs. The National Cancer Institute’s Patient-Derived Models of Cancer Consortium and other groups have initiated official protocol standardization, but much work remains to be done.
    The authors emphasize that in addition to unifying and standardizing protocols across a widespread number of research facilities, there must be quantification using validated software pipelines, and information must be codified and shared among all the research groups involved. They also recommend that more extensive and far-reaching clinical patient profiles be compiled, encompassing every facet of a patient’s history, including not only medical but also demographic information; both are important factors in patient outcome. To achieve standardization in this regard, regulatory infrastructure provided by the National Institutes of Health and other institutes and journals must also be included to allow reliable global data sharing and access.
    Clinical trials are also a major part of the 3D-PTA effort, and to date, studies have been conducted to examine clinical trial workflows and turnaround times using 3D-PTAs. The authors recommend innovative clinical trial designs that can help with selecting patients for specific trials or custom treatments, especially when coupled with the patient’s clinical and demographic information.
    Combining these patient omics profiles with information in 3D-PTA functional data libraries can be facilitated by well-defined computational pipelines, and the authors advocate drawing on relevant consortia (such as the NCI Patient-Derived Models of Cancer Program, PDXNet, the Tissue Engineering Collaborative, and the Cancer Systems Biology Centers) as well as European research infrastructure (such as INFRAFRONTIER and EuroPDX).
    Integrating data from existing 3D-PTA initiatives, consortia, and biobanks with omics profiles can bring precision medicine to a new level, providing enhanced vehicles for making optimum choices among approved therapeutic drugs, as well as investigational, alternative, non-chemotherapeutic drugs. It can also provide solutions for patients experiencing drug resistance and expand opportunities for drug repurposing.
    “The integration of the 3D-PTA platform is a game-changing tool for oncological drug development,” said Ali Khademhosseini, Director and CEO for the Terasaki Institute for Biomedical Innovation. “We must combine it in a robust fashion with existing cancer genomics to produce the most powerful paradigm for precision oncology.”

  • Harnessing artificial intelligence technology for IVF embryo selection

    An artificial intelligence algorithm can determine non-invasively, with about 70 percent accuracy, if an in vitro fertilized embryo has a normal or abnormal number of chromosomes, according to a new study from researchers at Weill Cornell Medicine.
    Having an abnormal number of chromosomes, a condition called aneuploidy, is a major reason embryos derived from in vitro fertilization (IVF) fail to implant or result in a healthy pregnancy. One of the current methods for detecting aneuploidy involves the biopsy-like sampling and genetic testing of cells from an embryo — an approach that adds cost to the IVF process and is invasive to the embryo. The new algorithm, STORK-A, described in a paper published Dec. 19 in The Lancet Digital Health, can help predict aneuploidy without the disadvantages of biopsy. It operates by analyzing microscope images of the embryo and incorporates information about maternal age and the IVF clinic’s scoring of the embryo’s appearance.
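    The article describes STORK-A only at a high level, and the sketch below is not the published architecture. It is a minimal, hypothetical illustration of the general approach the description implies: a small convolutional encoder for the embryo image, fused with tabular inputs (maternal age and a morphology score) to produce a binary euploid-versus-aneuploid prediction. All layer sizes and names are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class EmbryoAneuploidyNet(nn.Module):
        """Hypothetical multimodal classifier, NOT the published STORK-A model.

        Fuses a microscope image of an embryo with tabular clinical features
        (maternal age, morphology score) to predict aneuploidy.
        """

        def __init__(self, n_tabular: int = 2):
            super().__init__()
            # Small convolutional encoder for a single-channel embryo image.
            self.image_encoder = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # -> (batch, 32, 1, 1)
                nn.Flatten(),             # -> (batch, 32)
            )
            # Classification head over the fused image + tabular features.
            self.head = nn.Sequential(
                nn.Linear(32 + n_tabular, 64), nn.ReLU(),
                nn.Linear(64, 1),  # single logit for "aneuploid"
            )

        def forward(self, image, tabular):
            z = self.image_encoder(image)
            return self.head(torch.cat([z, tabular], dim=1))

    # Smoke test on random data: one 224x224 image plus (age, morphology score).
    model = EmbryoAneuploidyNet()
    logit = model(torch.randn(1, 1, 224, 224), torch.tensor([[38.0, 3.0]]))
    print(torch.sigmoid(logit))  # predicted probability of aneuploidy
    ```

    In practice, a model of this kind would be trained against labels derived from genetic testing such as PGT-A, which is presumably how the study benchmarked its roughly 70 percent accuracy.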
    “Our hope is that we’ll ultimately be able to predict aneuploidy in a completely non-invasive way, using artificial intelligence and computer vision techniques,” said study senior author Dr. Iman Hajirasouliha, associate professor of computational genomics and of physiology and biophysics at Weill Cornell Medicine and a member of the Englander Institute for Precision Medicine.
    The study’s first author is Josue Barnes, a doctoral student in the Weill Cornell Graduate School of Medical Sciences who studies in the Hajirasouliha Laboratory. Dr. Nikica Zaninovic, associate professor of embryology in clinical obstetrics and gynecology and director of the Embryology Laboratory at the Ronald O. Perelman and Claudia Cohen Center for Reproductive Medicine at Weill Cornell Medicine and NewYork-Presbyterian/Weill Cornell Medical Center, led the embryology work for the study.
    According to the U.S. Centers for Disease Control and Prevention, there were more than 300,000 IVF cycles performed in the United States in 2020, resulting in about 80,000 live births (a success rate of roughly 27 percent). IVF experts are always looking for ways to boost that success rate, to achieve more successful pregnancies with fewer embryo transfers — which means developing better methods for identifying viable embryos.
    Fertility clinic staff currently use microscopy to assess embryos for large-scale abnormalities that correlate with poor viability. To obtain information about the chromosomes, clinic staff may also use a biopsy method called preimplantation genetic testing for aneuploidy (PGT-A), predominantly in women over the age of 37.

  • Strong metaphorical messages can help tackle toxic e-waste

    Consumers told that not recycling their batteries ‘risked polluting the equivalent of 140 Olympic swimming pools every year’ were more likely to participate in an electronic waste recycling scheme, a new study has found.
    The paper from the University of Portsmouth explores how to improve our sustainable disposal of electronic waste (e-waste).
    With Christmas around the corner and consumers buying the latest mobile phones, tablets, headphones and televisions, older electronic products become defunct and add to the alarming quantity of potentially toxic e-waste.
    Experts at the University of Portsmouth carried out research to test what factors encourage consumers to safely dispose of e-waste, which will be useful for managers and policy-makers implementing disposal schemes.
    Lead author of the study, Dr Diletta Acuti, from the University’s Faculty of Business and Law, said: “The world’s electronic waste is an enormous problem which needs to be addressed urgently. E-waste often contains hazardous substances, such as mercury, lead or acid, which end up in landfills without any treatment or special precautions, causing significant long-term damage to the environment and human health.
    “Adequate treatment of this waste is therefore an environmental necessity.”
    In 2019, 205,000 tons of portable batteries were sold in Europe, but only half were collected for recycling. Dr Acuti’s research looks specifically at the disposal of batteries.

  • Signal-processing algorithms mitigated turbulence in free-space optic tests

    New signal-processing algorithms have been shown to help mitigate the impact of turbulence in free-space optical experiments, potentially bringing ‘free space’ internet a step closer to reality.
    The team of researchers, from Aston University’s Aston Institute of Photonic Technologies and Glasgow University, used commercially available photonic lanterns, a commercial transponder, and a spatial light modulator to emulate turbulence. By applying a successive interference cancellation digital signal processing algorithm, they achieved record results.
    The findings are published in the IEEE Journal of Lightwave Technology.
    Free space optical technology wirelessly transmits data as light through the air around us — called ‘free space’ — for use in telecoms or computer networking. Because free space optical communication doesn’t require the expensive laying of fibre cables, it’s seen as an exciting development in bringing communications to places where there is limited existing infrastructure.
    But because data is sent as pulses of light, weather conditions can cause problems. A bright sunny day or thick fog can diffract or scintillate the beam of light, creating turbulence which causes data to be lost.
    Using a so-called photonic lantern, the researchers simultaneously transmitted multiple data signals on differently shaped beams of light. Turbulence changes the shape of the beams, and the signal is often lost if only a single, simple shape is transmitted and detected; by detecting light in these shapes with a second lantern, more of the light is collected at the receiver, and the original data can be unscrambled. This can greatly reduce the impact of the atmosphere on the quality of the data received, in a technique known as multiple-input multiple-output (MIMO) digital signal processing.
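    The article names the technique but does not spell out the algorithm. Successive interference cancellation is a standard MIMO detection strategy: equalize the received mixture, slice the most reliable stream onto the symbol constellation, subtract its reconstructed contribution, and repeat on the remainder. The NumPy toy below sketches a zero-forcing variant for a hypothetical 2x2 system with QPSK symbols; the channel, noise level, and data are invented for illustration and are unrelated to the experiment’s actual parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def qpsk(n):
        """Random QPSK symbols (illustrative stand-in for real data streams)."""
        bits = rng.integers(0, 2, size=(n, 2))
        return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

    def hard_qpsk(s):
        """Slice noisy estimates back onto the QPSK constellation."""
        return (np.sign(s.real) + 1j * np.sign(s.imag)) / np.sqrt(2)

    n_sym = 1000
    x = np.vstack([qpsk(n_sym), qpsk(n_sym)])  # two spatial streams, (2, n_sym)

    # A random 2x2 channel plays the role of turbulence mixing the beam shapes.
    H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    noise = 0.05 * (rng.normal(size=(2, n_sym)) + 1j * rng.normal(size=(2, n_sym)))
    y = H @ x + noise

    # Zero-forcing successive interference cancellation (V-BLAST style):
    # equalize the remaining streams, slice the most reliable one, subtract
    # its reconstructed contribution, and repeat on what is left.
    remaining = list(range(2))
    x_hat = np.zeros_like(x)
    residual = y.copy()
    while remaining:
        W = np.linalg.pinv(H[:, remaining])            # zero-forcing equalizer
        est = W @ residual
        i = int(np.argmin(np.linalg.norm(W, axis=1)))  # most reliable stream
        k = remaining[i]
        x_hat[k] = hard_qpsk(est[i])
        residual -= H[:, [k]] @ x_hat[[k], :]          # cancel detected stream
        remaining.pop(i)

    print("symbol error rate:", np.mean(x_hat != x))
    ```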
    Professor Andrew Ellis at Aston University said: “When a single beam was transmitted, turbulence similar to a hot sunny day destroyed the signal 50% of the time. By transmitting multiple beams of different shapes through the same telescopes and detecting the different shapes, not only did we increase the availability to more than 99%, we increased the capacity to more than 500 Gbit/s, or more than 500 ultra-fast Pure-Fibre broadband links.”
    A project investigating the real-world applications of FSO technology is presently underway in South Africa, where researchers from Aston University and Glasgow University are working with the University of the Witwatersrand in Johannesburg to attempt to bring internet access to communities living in informal settlements and schools in underprivileged areas.
    The Fibre Before the Fibre Project aims to provide the internet performance of a Pure-Fibre connection without the need to install cables. It uses a free-space optical communication system that connects remote sites to nearby fibre sources in more affluent suburbs via a wireless optical line-of-sight signal.
    Professor Ellis said: “Our role in the project is to look at the impact and educational benefit free space optics will have for the school children who will finally be able to access the internet.”
    Story Source: Materials provided by Aston University.