More stories

  • Should we tax robots?

    What if the U.S. placed a tax on robots? The concept has been publicly discussed by policy analysts, scholars, and Bill Gates (who favors the notion). Because robots can replace jobs, the idea goes, a stiff tax on them would give firms an incentive to retain workers, while also compensating for a drop-off in payroll taxes when robots are used. Thus far, South Korea has reduced incentives for firms to deploy robots, while European Union policymakers considered a robot tax but did not enact it.
    Now a study by MIT economists scrutinizes the existing evidence and suggests the optimal policy in this situation would indeed include a tax on robots, but only a modest one. The same applies to taxes on foreign trade that would also reduce U.S. jobs, the research finds.
    “Our finding suggests that taxes on either robots or imported goods should be pretty small,” says Arnaud Costinot, an MIT economist and co-author of a published paper detailing the findings. “Although robots have an effect on income inequality … they still lead to optimal taxes that are modest.”
    Specifically, the study finds that a tax on robots should range from 1 percent to 3.7 percent of their value, while trade taxes would be from 0.03 percent to 0.11 percent, given current U.S. income taxes.
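    To make these ranges concrete, here is a minimal arithmetic sketch in Python that applies the reported tax bounds to a hypothetical robot purchase and import shipment; the dollar values are illustrative assumptions, not figures from the study.

    ```python
    # Illustrative only: applies the tax ranges reported in the study to
    # hypothetical dollar values (the $100,000 robot and $50,000 shipment
    # are assumptions, not figures from the paper).

    ROBOT_TAX_RANGE = (0.010, 0.037)    # 1% to 3.7% of the robot's value
    TRADE_TAX_RANGE = (0.0003, 0.0011)  # 0.03% to 0.11% of the imported goods' value

    def tax_bounds(value, rate_range):
        """Return the (low, high) tax implied by a rate range for a given value."""
        low, high = rate_range
        return value * low, value * high

    robot_value = 100_000   # hypothetical industrial robot, in USD
    import_value = 50_000   # hypothetical imported shipment, in USD

    print("Robot tax: ${:,.0f} to ${:,.0f}".format(*tax_bounds(robot_value, ROBOT_TAX_RANGE)))
    print("Trade tax: ${:,.0f} to ${:,.0f}".format(*tax_bounds(import_value, TRADE_TAX_RANGE)))
    # Robot tax: $1,000 to $3,700
    # Trade tax: $15 to $55
    ```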
    “We came into this not knowing what would happen,” says Iván Werning, an MIT economist and the other co-author of the study. “We had all the potential ingredients for this to be a big tax, so that by stopping technology or trade you would have less inequality, but … for now, we find a tax in the one-digit range, and for trade, even smaller taxes.”
    The paper, “Robots, Trade, and Luddism: A Sufficient Statistic Approach to Optimal Technology Regulation,” appears in advance online form in The Review of Economic Studies. Costinot is a professor of economics and associate head of the MIT Department of Economics; Werning is the department’s Robert M. Solow Professor of Economics.

  • Crystalline materials: Making the unimaginable possible

    The world’s best artists can take a handful of differently colored paints and create a museum-worthy canvas that looks like nothing else. They do so by drawing upon inspiration, knowledge of what’s been done in the past and design rules they learned after years in the studio.
    Chemists work in a similar way when inventing new compounds. Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, Northwestern University and The University of Chicago have developed a new method for discovering and making new crystalline materials with two or more elements.
    “We expect that our work will prove extremely valuable to the chemistry, materials and condensed matter communities for synthesizing new and currently unpredictable materials with exotic properties,” said Mercouri Kanatzidis, a chemistry professor at Northwestern with a joint appointment at Argonne.
    “Our invention method grew out of research on unconventional superconductors,” said Xiuquan Zhou, a postdoc at Argonne and first author of the paper. “These are solids with two or more elements, at least one of which is not a metal. And they cease to resist the passage of electricity at different temperatures — anywhere from colder than outer space to that in my office.”
    Over the last five decades, scientists have discovered and made many unconventional superconductors with surprising magnetic and electrical properties. Such materials have a wide gamut of possible applications, such as improved power generation, energy transmission and high-speed transportation. They also have the potential for incorporation into future particle accelerators, magnetic resonance imaging systems, quantum computers and energy-efficient microelectronics.
    The team’s invention method starts with a solution made of two components. One is a highly effective solvent that dissolves and reacts with any solids added to the solution. The other is a poorer solvent, included to tune the reaction so that a new solid forms when different elements are added. This tuning involves changing the ratio of the two components and the temperature, which here is quite high: 750 to 1,300 degrees Fahrenheit (roughly 400 to 700 degrees Celsius).
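    As a rough illustration of that two-parameter tuning space, the sketch below enumerates candidate solvent ratios and temperatures and converts the reported Fahrenheit range to Celsius; the specific ratio values are assumptions for illustration, not conditions from the paper.

    ```python
    # Illustrative sketch of the two-parameter tuning space described above:
    # the ratio of the two solution components and the reaction temperature.
    # The candidate ratios are assumed; only the 750-1300 F range is from the article.

    def fahrenheit_to_celsius(temp_f):
        return (temp_f - 32) * 5 / 9

    candidate_ratios = [0.25, 0.5, 1.0, 2.0, 4.0]   # strong solvent : weak solvent (assumed)
    candidate_temps_f = range(750, 1301, 110)       # 750-1300 F, from the article

    for ratio in candidate_ratios:
        for temp_f in candidate_temps_f:
            print(f"ratio {ratio:>4}:1 at {temp_f} F ({fahrenheit_to_celsius(temp_f):.0f} C)")
    ```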

  • 3D patient tumor avatars: Maximizing their potential for next-generation precision oncology

    At any given time, most cancer patients are receiving a treatment that does not significantly benefit them, while they endure physical and financial toxicity. Aiming to guide each patient to the optimal treatment, precision medicine has been expanding beyond genetic mutations to other drivers of clinical outcome. There has been a concerted effort to create “avatars” of patient tumors for testing and selecting therapies before administering them to patients.
    A recently published Cancer Cell paper, which represents several National Cancer Institute consortia and includes key opinion leaders from the research and clinical sectors in the United States and Europe, laid out a vision for next-generation, functional precision medicine by recommending measures that would enable 3D patient tumor avatars (3D-PTAs) to guide treatment decisions in the clinic. According to Dr. Xiling Shen, the corresponding author of the article and chief scientific officer of the Terasaki Institute for Biomedical Innovation, the power of 3D-PTAs, which include patient-derived organoids, 3D bioprinting, and microscale models, lies in their accurate depiction of a tumor and its microenvironment and in the speed and scalability with which they can test and predict the efficacy of prospective therapeutic drugs. To fully realize this aim and maximize clinical accuracy, however, many steps are still needed to standardize methods and criteria, design clinical trials, and incorporate complete patient data for the best possible outcome in personalized care.
    The use of such tools and resources can involve a great variety of materials, methods, and data handling, however, and to ensure accuracy and integrity in clinical decision making, major efforts are needed to aggregate, standardize, and validate the uses of 3D-PTAs. The National Cancer Institute’s Patient-Derived Models of Cancer Consortium and other groups have begun official protocol standardization, but much work remains to be done.
    The authors emphasize that in addition to unifying and standardizing protocols across a large number of research facilities, results must be quantified using validated software pipelines, and information must be codified and shared among all the research groups involved. They also recommend that more extensive and far-reaching clinical patient profiles be compiled, encompassing every facet of a patient’s history, including not only medical but also demographic information, since these are important factors in patient outcome. To achieve standardization in this regard, regulatory infrastructure provided by the National Institutes of Health and other institutes and journals must also be included to allow reliable global data sharing and access.
    Clinical trials are also a major part of the 3D-PTA effort, and to date, studies have been conducted to examine clinical trial workflows and turnaround times using 3D-PTAs. The authors advise innovative clinical trial designs that can help with selecting patients for specific trials or custom treatments, especially when coupled with the patient’s clinical and demographic information.
    Combining these patient omics profiles with information in 3D-PTA functional data libraries can be facilitated by well-defined computational pipelines, and the authors advocate the use of relevant consortia, such as the NCI Patient-Derived Models of Cancer program, PDXNet, the Tissue Engineering Collaborative, and the Cancer Systems Biology Centers, as well as European research infrastructures such as INFRAFRONTIER and EuroPDX.
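    As a purely illustrative sketch of the kind of record such a pipeline might join, the Python below pairs a hypothetical patient omics/clinical profile with functional drug-response data from a 3D-PTA; all field names and values are invented for illustration and are not drawn from the consortia named above.

    ```python
    # Illustrative only: joining a patient omics/clinical profile with
    # functional drug-response data from a 3D patient tumor avatar (3D-PTA).
    # All field names, values, and the viability cutoff are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class PatientProfile:
        patient_id: str
        age: int
        diagnosis: str
        mutations: list                      # e.g. ["KRAS G12D", "TP53 R175H"]
        demographics: dict = field(default_factory=dict)

    @dataclass
    class AvatarDrugResponse:
        patient_id: str
        model_type: str                      # "organoid", "bioprinted", or "microscale"
        drug: str
        viability_pct: float                 # tumor-cell viability after treatment

    def promising_drugs(profile, responses, viability_cutoff=50.0):
        """Return this patient's avatar responses that show strong tumor killing."""
        return [r for r in responses
                if r.patient_id == profile.patient_id and r.viability_pct < viability_cutoff]

    patient = PatientProfile("PT-001", 62, "colorectal adenocarcinoma",
                             ["KRAS G12D"], {"sex": "F"})
    responses = [AvatarDrugResponse("PT-001", "organoid", "drug_A", 35.0),
                 AvatarDrugResponse("PT-001", "organoid", "drug_B", 80.0)]
    print([r.drug for r in promising_drugs(patient, responses)])   # ['drug_A']
    ```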
    Integrating data from existing 3D-PTA initiatives, consortia, and biobanks with omics profiles can bring precision medicine to a new level, providing enhanced vehicles for making optimum choices among approved therapeutic drugs, as well as investigational, alternative, non-chemotherapeutic drugs. It can also provide solutions for patients experiencing drug resistance and expand opportunities for drug repurposing.
    “The integration of the 3D-PTA platform is a game-changing tool for oncological drug development,” said Ali Khademhosseini, Director and CEO for the Terasaki Institute for Biomedical Innovation. “We must combine it in a robust fashion with existing cancer genomics to produce the most powerful paradigm for precision oncology.”

  • Harnessing artificial intelligence technology for IVF embryo selection

    An artificial intelligence algorithm can determine non-invasively, with about 70 percent accuracy, if an in vitro fertilized embryo has a normal or abnormal number of chromosomes, according to a new study from researchers at Weill Cornell Medicine.
    Having an abnormal number of chromosomes, a condition called aneuploidy, is a major reason embryos derived from in vitro fertilization (IVF) fail to implant or result in a healthy pregnancy. One of the current methods for detecting aneuploidy involves the biopsy-like sampling and genetic testing of cells from an embryo — an approach that adds cost to the IVF process and is invasive to the embryo. The new algorithm, STORK-A, described in a paper published Dec. 19 in Lancet Digital Health, can help predict aneuploidy without the disadvantages of biopsy. It operates by analyzing microscope images of the embryo and incorporates information about maternal age and the IVF clinic’s scoring of the embryo’s appearance.
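    For intuition only, the sketch below shows one generic way image data could be combined with maternal age and a morphology score to produce a binary aneuploidy prediction; it is not the published STORK-A architecture, and the network sizes and input encodings are assumptions.

    ```python
    # Hypothetical sketch of a multimodal classifier in the spirit of the approach
    # described above: image features plus maternal age and a morphology score.
    # This is NOT the published STORK-A model; sizes and inputs are assumptions.

    import torch
    import torch.nn as nn

    class EmbryoAneuploidyClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            # Small convolutional encoder for a single-channel microscope image.
            self.image_encoder = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 32 image features
            )
            # Tabular inputs: maternal age and an embryo morphology grade (assumed encoding).
            self.head = nn.Sequential(
                nn.Linear(32 + 2, 16), nn.ReLU(),
                nn.Linear(16, 1),                               # logit: aneuploid vs. euploid
            )

        def forward(self, image, age, morphology_score):
            img_feat = self.image_encoder(image)
            tabular = torch.stack([age, morphology_score], dim=1)
            return self.head(torch.cat([img_feat, tabular], dim=1))

    # Example forward pass on random stand-in data (batch of 4 images, 224x224 pixels).
    model = EmbryoAneuploidyClassifier()
    logits = model(torch.randn(4, 1, 224, 224), torch.rand(4), torch.rand(4))
    probs = torch.sigmoid(logits)   # predicted probability of aneuploidy
    ```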
    “Our hope is that we’ll ultimately be able to predict aneuploidy in a completely non-invasive way, using artificial intelligence and computer vision techniques,” said study senior author Dr. Iman Hajirasouliha, associate professor of computational genomics and of physiology and biophysics at Weill Cornell Medicine and a member of the Englander Institute for Precision Medicine.
    The study’s first author is Josue Barnes, a doctoral student in the Weill Cornell Graduate School of Medical Sciences who studies in the Hajirasouliha Laboratory. Dr. Nikica Zaninovic, associate professor of embryology in clinical obstetrics and gynecology and director of the Embryology Laboratory at the Ronald O. Perelman and Claudia Cohen Center for Reproductive Medicine at Weill Cornell Medicine and NewYork-Presbyterian/Weill Cornell Medical Center led the embryology work for the study.
    According to the U.S. Centers for Disease Control and Prevention, there were more than 300,000 IVF cycles performed in the United States in 2020, resulting in about 80,000 live births. IVF experts are always looking for ways to boost that success rate, to achieve more successful pregnancies with fewer embryo transfers — which means developing better methods for identifying viable embryos.
    Fertility clinic staff currently use microscopy to assess embryos for large-scale abnormalities that correlate with poor viability. To obtain information about the chromosomes, clinic staff may also use a biopsy method called preimplantation genetic testing for aneuploidy (PGT-A), predominantly in women over the age of 37.

  • Strong metaphorical messages can help tackle toxic e-waste

    Consumers told that not recycling their batteries ‘risked polluting the equivalent of 140 Olympic swimming pools every year’ were more likely to participate in an electronic waste recycling scheme, a new study has found.
    The paper from the University of Portsmouth explores how to improve our sustainable disposal of electronic waste (e-waste).
    With Christmas around the corner and consumers buying the latest mobile phones, tablets, headphones and televisions, older electronic products become defunct and add to the alarming quantity of potentially toxic e-waste.
    Experts at the University of Portsmouth carried out research to test what factors encourage consumers to safely dispose of e-waste, which will be useful for managers and policy-makers implementing disposal schemes.
    Lead author of the study, Dr Diletta Acuti, from the University’s Faculty of Business and Law, said: “The world’s electronic waste is an enormous problem which needs to be addressed urgently. E-waste often contains hazardous substances, such as mercury, lead or acid, which ends up in landfills without any treatment or special precautions, causing significant long-term damage to the environment and human health.
    “Adequate treatment of this waste is therefore an environmental necessity.”
    In 2019, 205,000 tons of portable batteries were sold in Europe, but only half were collected for recycling. Dr Acuti’s research looks specifically at the disposal of batteries.

  • Signal processing algorithms help mitigate turbulence in free-space optic tests

    New signal-processing algorithms have been shown to help mitigate the impact of turbulence in free-space optical experiments, potentially bringing ‘free space’ internet a step closer to reality.
    The team of researchers, from Aston University’s Aston Institute of Photonic Technologies and Glasgow University, used commercially available photonic lanterns, a commercial transponder, and a spatial light modulator to emulate turbulence. By applying a successive interference cancellation digital signal processing algorithm, they achieved record results.
    The findings are published in the IEEE Journal of Lightwave Technology.
    Free space optical technology wirelessly transmits data as light through the air around us — called ‘free space’ — for use in telecoms or computer networking. Because free space optical communication doesn’t require the expensive laying of fibre cables, it’s seen as an exciting development in bringing communications to places where there is limited existing infrastructure.
    But because data is sent as pulses of light, weather conditions can cause problems. A bright sunny day or thick fog can diffract or scintillate the beam of light, creating turbulence which causes data to be lost.
    The researchers used a so-called photonic lantern to transmit multiple data signals simultaneously on differently shaped beams of light. Turbulence changes the shape of the beams, so the signal is often lost when only a single, simple shape is transmitted and detected; by detecting light in these shapes with a second lantern, more of the light is collected at the receiver and the original data can be unscrambled. This can greatly reduce the impact of the atmosphere on the quality of the received data, in a technique known as multiple-input multiple-output (MIMO) digital signal processing.
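    As a toy numerical illustration of the MIMO idea, the sketch below models turbulence as an unknown linear mixing of the spatial modes and recovers the transmitted streams with a simple least-squares (zero-forcing) receiver; the actual experiment used successive interference cancellation and real photonic hardware, which this sketch does not attempt to reproduce.

    ```python
    # Toy illustration of MIMO recovery of spatially multiplexed signals.
    # Turbulence is modeled as a linear mixing matrix H; the receiver detects
    # all output modes and unmixes them by least squares (zero forcing),
    # assuming H is known. This is a conceptual sketch, not the experiment's
    # successive-interference-cancellation algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    n_modes, n_symbols = 3, 1000

    # Transmit independent BPSK streams on three spatially shaped beams.
    tx = rng.choice([-1.0, 1.0], size=(n_modes, n_symbols))

    # Turbulence mixes the modes (random complex channel) and adds receiver noise.
    H = (rng.normal(size=(n_modes, n_modes)) + 1j * rng.normal(size=(n_modes, n_modes))) / np.sqrt(2)
    noise = 0.05 * (rng.normal(size=(n_modes, n_symbols)) + 1j * rng.normal(size=(n_modes, n_symbols)))
    rx = H @ tx + noise

    # Zero-forcing equalizer: invert the channel, then make hard decisions.
    tx_hat = np.sign((np.linalg.pinv(H) @ rx).real)
    bit_error_rate = np.mean(tx_hat != tx)
    print(f"bit error rate after unmixing: {bit_error_rate:.4f}")
    ```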
    Professor Andrew Ellis at Aston University said: “When a single beam was transmitted, turbulence similar to that of a hot sunny day destroyed the signal 50% of the time. By transmitting multiple beams of different shapes through the same telescopes and detecting the different shapes, not only did we increase the availability to more than 99%, we increased the capacity to more than 500 Gbit/s, or more than 500 ultra-fast Pure-Fibre broadband links.”
    A project investigating the real-world applications of FSO technology is presently underway in South Africa, where researchers from Aston University and Glasgow University are working with the University of the Witwatersrand in Johannesburg to attempt to bring internet access to communities living in informal settlements and schools in underprivileged areas.
    The Fibre Before the Fibre project aims to provide the internet performance of a Pure-Fibre connection without the need to install cables. It uses a free-space optical communication system in which a wireless, line-of-sight optical signal links remote sites to nearby fibre sources in more affluent suburbs.
    Professor Ellis said: “Our role in the project is to look at the impact and educational benefit free space optics will have for the school children who will finally be able to access the internet.”
    Story Source: Materials provided by Aston University.

  • Virtual reality game to objectively detect ADHD

    Researchers have used virtual reality games, eye tracking and machine learning to show that differences in eye movements can be used to detect ADHD, potentially providing a tool for more precise diagnosis of attention deficits. Their approach could also be used as the basis for an ADHD therapy, and with some modifications, to assess other conditions, such as autism.
    ADHD is a common attention disorder that affects around six percent of the world’s children. Despite decades of searching for objective markers, ADHD diagnosis is still based on questionnaires, interviews and subjective observation. The results can be ambiguous, and standard behavioural tests don’t reveal how children manage everyday situations. Recently, a team consisting of researchers from Aalto University, the University of Helsinki, and Åbo Akademi University developed a virtual reality game called EPELI that can be used to assess ADHD symptoms in children by simulating situations from everyday life.
    Now, the team tracked the eye movements of children in a virtual reality game and used machine learning to look for differences in children with ADHD. The new study involved 37 children diagnosed with ADHD and 36 children in a control group. The children played EPELI and a second game, Shoot the Target, in which the player is instructed to locate objects in the environment and “shoot” them by looking at them. 
    ‘We tracked children’s natural eye movements as they performed different tasks in a virtual reality game, and this proved to be an effective way of detecting ADHD symptoms. The ADHD children’s gaze paused longer on different objects in the environment, and their gaze jumped faster and more often from one spot to another. This might indicate a delay in visual system development and poorer information processing than in other children,’ said Liya Merzon, a doctoral researcher at Aalto University.
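    To make the feature idea concrete, here is a minimal sketch that derives two gaze statistics of the kind described above (saccade frequency and a rough fixation duration) from raw gaze samples and feeds them to a generic classifier; the sampling rate, velocity threshold, and classifier choice are assumptions, not the study’s actual pipeline.

    ```python
    # Illustrative sketch: turning raw gaze samples into the kind of features
    # described above (saccade rate, fixation duration) and classifying them.
    # Sampling rate, threshold, classifier, and labels are all assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    SAMPLE_RATE_HZ = 90          # assumed eye-tracker sampling rate
    SACCADE_SPEED_DEG_S = 100.0  # assumed velocity threshold separating saccades from fixations

    def gaze_features(gaze_xy_deg):
        """gaze_xy_deg: (n_samples, 2) gaze angles in degrees for one recording."""
        velocity = np.linalg.norm(np.diff(gaze_xy_deg, axis=0), axis=1) * SAMPLE_RATE_HZ
        is_saccade = velocity > SACCADE_SPEED_DEG_S
        duration_s = len(gaze_xy_deg) / SAMPLE_RATE_HZ
        n_saccades = np.count_nonzero(np.diff(is_saccade.astype(int)) == 1)
        mean_fixation_s = (~is_saccade).mean() * duration_s / (n_saccades + 1)  # rough estimate
        return [n_saccades / duration_s, mean_fixation_s]   # saccades/second, mean fixation length

    # Train a generic classifier on per-child feature vectors (random stand-in data here).
    rng = np.random.default_rng(1)
    X = np.array([gaze_features(np.cumsum(rng.normal(scale=0.5, size=(2000, 2)), axis=0))
                  for _ in range(20)])
    y = (X[:, 0] > np.median(X[:, 0])).astype(int)   # placeholder labels, NOT real diagnoses
    clf = LogisticRegression().fit(X, y)
    print(clf.predict_proba(X[:3]))
    ```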
    Brushing your teeth with distractions
    Project lead Juha Salmitaival, an Academy Research Fellow at Aalto, explains that part of the game’s strength is its motivational value. ‘This isn’t just a new technology to objectively assess ADHD symptoms. Children also find the game more interesting than standard neuropsychological tests,’ he says.

  • New software based on Artificial Intelligence helps to interpret complex data

    Experimental data is often not only highly dimensional, but also noisy and full of artefacts, which makes it difficult to interpret. Now a team at HZB has designed software that uses self-learning neural networks to compress the data in a smart way and then reconstruct a low-noise version. This makes it possible to recognise correlations that would otherwise not be discernible. The software has now been used successfully in photon diagnostics at the FLASH free-electron laser at DESY, but it is suitable for many other applications in science.
    More is not always better; sometimes it is a problem. With highly complex data, which have many dimensions due to their numerous parameters, correlations are often no longer recognisable, especially since experimentally obtained data are additionally disturbed and noisy due to influences that cannot be controlled.
    Helping humans to interpret the data
    Now, new software based on artificial intelligence methods can help. It uses a special class of neural networks (NNs) that experts call a disentangled variational autoencoder network (β-VAE). Put simply, the first NN compresses the data, while the second NN subsequently reconstructs it. “In the process, the two NNs are trained so that the compressed form can be interpreted by humans,” explains Dr Gregor Hartmann. The physicist and data scientist supervises the Joint Lab on Artificial Intelligence Methods at HZB, which is run by HZB together with the University of Kassel.
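    For readers who want to see the objective spelled out, here is a minimal β-VAE sketch in PyTorch: an encoder compresses each input into a few latent variables, a decoder reconstructs it, and a weight β > 1 on the KL term encourages the latent dimensions to disentangle. This is the generic textbook formulation with placeholder layer sizes, not the HZB team’s implementation.

    ```python
    # Minimal beta-VAE sketch: encoder compresses, decoder reconstructs, and a
    # weight beta > 1 on the KL term pushes the latent dimensions to disentangle.
    # Generic textbook formulation; layer sizes and beta are placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BetaVAE(nn.Module):
        def __init__(self, input_dim=256, latent_dim=4, beta=4.0):
            super().__init__()
            self.beta = beta
            self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
            self.to_mu = nn.Linear(128, latent_dim)       # mean of q(z|x)
            self.to_logvar = nn.Linear(128, latent_dim)   # log-variance of q(z|x)
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                         nn.Linear(128, input_dim))

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
            return self.decoder(z), mu, logvar

        def loss(self, x):
            recon, mu, logvar = self(x)
            recon_loss = F.mse_loss(recon, x, reduction="sum")
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return recon_loss + self.beta * kl            # beta > 1 encourages disentanglement

    # One training step on random stand-in "spectra" (batch of 32, 256 channels).
    model = BetaVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = model.loss(torch.randn(32, 256))
    loss.backward()
    opt.step()
    ```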
    Extracting core principles without prior knowledge
    Google DeepMind first proposed β-VAEs in 2017. Many experts assumed that applying them in the real world would be challenging, as non-linear components are difficult to disentangle. “After several years of learning how the NNs learn, it finally worked,” says Hartmann. β-VAEs are able to extract the underlying core principle from data without prior knowledge.
    Photon energy of FLASH determined
    In the study now published, the group used the software to determine the photon energy of FLASH from single-shot photoelectron spectra. “We succeeded in extracting this information from noisy electron time-of-flight data, and much better than with conventional analysis methods,” says Hartmann. Even data with detector-specific artefacts can be cleaned up this way.
    A powerful tool for different problems
    “The method is really good when it comes to impaired data,” Hartmann emphasises. The programme is even able to reconstruct tiny signals that were not visible in the raw data. Such networks can help uncover unexpected physical effects or correlations in large experimental data sets. “AI-based intelligent data compression is a very powerful tool, not only in photon science,” says Hartmann.
    Now plug and play
    In total, Hartmann and his team spent three years developing the software. “But now, it is more or less plug and play. We hope that soon many colleagues will come with their data and we can support them.”
    Story Source: Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie.