More stories

  • Why big-box chains' embrace of in-store click-and-collect leaves money on the table

    Researchers from the University of North Carolina at Chapel Hill and Tilburg University have published a new paper in the Journal of Marketing that explores the rise of click-and-collect services and examines their most appropriate settings.
    The study, titled “Navigating the Last Mile in Grocery Shopping: The Click and Collect Format,” is authored by Katrijn Gielens, Els Gijsbrechts, and Inge Geyskens.
    Big-box stores have spent years developing technology capabilities to compete with Amazon and other digitally savvy competitors. While no one could have foreseen Covid-19, these chains’ investments in click-and-collect technology allowed them to cash in when the coronavirus pushed sales online.
    Order fulfillment is a costly and difficult challenge that must be mastered for online grocery success. Click-and-collect services help overcome this roadblock by shifting part of the fulfillment work to shoppers, who place their orders online and pick up the goods themselves.
    Click-and-collect services can be offered in different ways and through different formats, all of which come with vastly different levels of investment and cost structures. Not surprisingly, many retailers rushing into the click-and-collect fray are opting for the less cash- and capital-intensive options such as in-store and curbside pickup. However, not all click-and-collect formats offer the same convenience benefits to shoppers, so sales outcomes may differ widely. Retailers may thus want to contemplate how to organize these click-and-collect services in a sustainable and profitable way to safeguard the longer-run success and viability of the format. The study offers advice to retailers on whether and how to implement click-and-collect. To that end, the researchers gauged how shoppers’ online and total spending changes after they start using three different click-and-collect formats: (1) in-store, i.e., pickup at existing stores; (2) near-store, i.e., pickup at outlets adjoining stores, also known as ‘curbside’; and (3) stand-alone click-and-collect, i.e., pickup at free-standing locations.
    Do these click-and-collect types address the same needs? Gielens says the answer is no: “The different formats address fundamentally different shopper needs in terms of fulfillment convenience.” Fulfillment convenience touches upon three different benefits offered to shoppers:
    Access convenience: the reduction of time to, at, and from a click-and-collect location.
    Collection convenience: the reduction of physical effort to collect the order.
    Adjustment convenience: the ease with which shoppers can adjust their online orders by adding, returning, or replacing items.

    Depending on shoppers’ needs for these different convenience benefits, click-and-collect results in vastly different performance outcomes. This calls for judicious alignment of the right click-and-collect format with local-market needs.
    Overall, does click-and-collect increase shoppers’ online and total spending? The study shows that click-and-collect can be an effective means to boost online spending at the retailer. Hence, click-and-collect may indeed be the long-awaited road to online success for grocery retailers, overcoming the last-mile problems associated with home delivery. Moreover, by blending the convenience benefits of home delivery and brick-and-mortar, click-and-collect can also enhance households’ total spending at the retailer and thus constitute a profitable addition to the retailer’s channel mix.
    What is the best click-and-collect format for access-convenience-oriented markets? In markets with high access-convenience needs, such as rural markets with many weekend shoppers, both in-store and stand-alone click-and-collect do well. The time-efficient pickup of stand-alones stimulates these shoppers to spend more at the retailer online. In-store pickup, in turn, leads to positive spillovers to the retailer’s brick-and-mortar stores and, hence, an increase in total spending.
    What is the best click-and-collect format for collection-convenience-oriented markets? Stand-alone click-and-collect best serves these markets, where large-basket shoppers buying bulkier items predominate. Here, the lower physical shopping effort, combined with the time saved by not having to drive to a regular store, makes stand-alones particularly appealing, resulting in the highest extra total spending at the retailer.
    What is the best click-and-collect format for adjustment-convenience-oriented markets? Stand-alone and near-store yield the highest total retailer sales in these markets, where larger households that shop more for perishables and buy more on impulse tend to live. While in-store pickup leads shoppers in these markets to spend more online, it also cannibalizes their brick-and-mortar purchases. Worse, it may even decrease total spending at the retailer and should therefore be avoided.
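    Read together, these findings amount to a simple decision table. As a toy illustration only (the mapping below paraphrases the article’s summary; the function and its labels are invented, not part of the study), the guidance could be sketched in Python as:

```python
# Toy decision table paraphrasing the article's guidance on matching
# click-and-collect formats to a market's dominant convenience need.
# Categories and return values are illustrative, not from the paper.

def recommend_formats(dominant_need: str) -> list[str]:
    """Suggest click-and-collect formats for a market's dominant need."""
    guidance = {
        # Rural markets with many weekend shoppers: travel time dominates.
        "access": ["stand-alone", "in-store"],
        # Large-basket shoppers buying bulky items: physical effort dominates.
        "collection": ["stand-alone"],
        # Larger households, perishables, impulse buys: easy order changes
        # dominate; the study warns against in-store pickup here.
        "adjustment": ["stand-alone", "near-store"],
    }
    return guidance[dominant_need]

print(recommend_formats("adjustment"))  # ['stand-alone', 'near-store']
```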
    What are the key takeaways for practitioners? Gijsbrechts explains: “We provide grocery retailers with insights on how to avoid costly mistakes when kick-starting click-and-collect. As retailers race to build click-and-collects, they are mostly opting for fulfillment within existing stores for the sake of quick, low-cost roll-out. Indeed, since in-store click-and-collect can rely on existing infrastructure and processes, it is the easiest to implement. However, the pursuit of speed without knowing which type is best in terms of demand may lead to the demise of the format.” Also, while most retailers tend to opt for one type of click-and-collect across all markets, a one-size-fits-all approach is not advisable; the impact depends on shoppers’ needs for fulfillment convenience. This study helps retailers find the right mix.

  • Building a quantum network one node at a time

    Researchers at the University of Rochester and Cornell University have taken an important step toward developing a communications network that exchanges information across long distances by using photons, massless particles of light that are key elements of quantum computing and quantum communications systems.
    The research team has designed a nanoscale node made out of magnetic and semiconducting materials that could interact with other nodes, using laser light to emit and accept photons.
    The development of such a quantum network — designed to take advantage of the physical properties of light and matter characterized by quantum mechanics — promises faster, more efficient ways to communicate, compute, and detect objects and materials as compared to networks currently used for computing and communications.
    Described in the journal Nature Communications, the node consists of an array of pillars a mere 120 nanometers high. The pillars are part of a platform containing atomically thin layers of semiconductor and magnetic materials.
    The array is engineered so that each pillar serves as a location marker for a quantum state that can interact with photons and the associated photons can potentially interact with other locations across the device — and with similar arrays at other locations. This potential to connect quantum nodes across a remote network capitalizes on the concept of entanglement, a phenomenon of quantum mechanics that, at its very basic level, describes how the properties of particles are connected at the subatomic level.
    “This is the beginnings of having a kind of register, if you like, where different spatial locations can store information and interact with photons,” says Nick Vamivakas, professor of quantum optics and quantum physics at Rochester.

    Toward ‘miniaturizing a quantum computer’
    The project builds on work the Vamivakas Lab has conducted in recent years using tungsten diselenide (WSe2) in so-called Van der Waals heterostructures. That work uses layers of atomically thin materials on top of each other to create or capture single photons.
    The new device uses a novel alignment of WSe2 draped over the pillars with an underlying, highly reactive layer of chromium triiodide (CrI3). Where the atomically thin, 12-micron area layers touch, the CrI3 imparts an electric charge to the WSe2, creating a “hole” alongside each of the pillars.
    In quantum physics, a hole is characterized by the absence of an electron. Each positively charged hole also has a binary north/south magnetic property associated with it, so that each is also a nanomagnet.
    When the device is bathed in laser light, further reactions occur, turning the nanomagnets into individual optically active spin arrays that emit and interact with photons. Whereas classical information processing deals in bits that have values of either 0 or 1, spin states can encode both 0 and 1 at the same time, expanding the possibilities for information processing.
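    The “both 0 and 1 at the same time” phrasing is the standard qubit superposition. As a textbook aside (standard quantum mechanics, not a formula from this paper), a single spin state can be written as $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ with $|\alpha|^2 + |\beta|^2 = 1$, so a readout yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$.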

    “Being able to control hole spin orientation using ultrathin, 12-micron-wide CrI3 replaces the need for external magnetic fields from gigantic magnetic coils akin to those used in MRI systems,” says lead author and graduate student Arunabh Mukherjee. “This will go a long way in miniaturizing a quantum computer based on single hole spins.”
    Still to come: Entanglement at a distance?
    Two major challenges confronted the researchers in creating the device.
    One was creating an inert environment in which to work with the highly reactive CrI3. This was where the collaboration with Cornell University came into play. “They have a lot of expertise with the chromium triiodide and since we were working with that for the first time, we coordinated with them on that aspect of it,” Vamivakas says. For example, fabrication of the CrI3 was done in nitrogen-filled glove boxes to avoid oxygen and moisture degradation.
    The other challenge was determining just the right configuration of pillars to ensure that the holes and spin valleys associated with each pillar could be properly registered to eventually link to other nodes.
    And therein lies the next major challenge: finding a way to send photons long distances through an optical fiber to other nodes, while preserving their properties of entanglement.
    “We haven’t yet engineered the device to promote that kind of behavior,” Vamivakas says. “That’s down the road.”

    Story Source:
    Materials provided by University of Rochester. Original written by Bob Marcotte. Note: Content may be edited for style and length.

  • Research lays groundwork for ultra-thin, energy efficient photodetector on glass

    Though we may not always realize it, photodetectors contribute greatly to the convenience of modern life. Also known as photosensors, photodetectors convert light energy into electrical signals to complete tasks such as opening automatic sliding doors and automatically adjusting a cell phone’s screen brightness in different lighting conditions.
    A new paper, published by a team of Penn State researchers in ACS Nano, seeks to further advance photodetectors’ use by integrating the technology with durable Gorilla glass, the material used for smart phone screens that is manufactured by Corning Incorporated.
    The integration of photodetectors with Gorilla glass could lead to the commercial development of “smart glass,” or glass equipped with automatic sensing properties. Smart glass has a number of applications ranging from imaging to advanced robotics, according to the researchers.
    “There are two problems to address when attempting to manufacture and scale photodetectors on glass,” said principal investigator Saptarshi Das, assistant professor of engineering science and mechanics (ESM). “It must be done using relatively low temperatures, as the glass degrades at high temperatures, and we must ensure the photodetector can operate on glass using minimal energy.”
    To overcome the first challenge, Das, along with ESM doctoral student Joseph R. Nasr, determined that the chemical compound molybdenum disulfide was the best material to use as a coating on the glass.
    Then, Joshua Robinson, professor of materials science and engineering (MatSE) and MatSE doctoral student Nicholas Simonson used a chemical reactor at 600 degrees Celsius — a low enough temperature so as not to degrade the Gorilla glass — to fuse together the compound and glass. The next step was to turn the glass and coating into a photodetector by patterning it using a conventional electron beam lithography tool.
    “We then tested the glass using green LED lighting, which mimics a more natural lighting source unlike laser lighting, which is commonly used in similar optoelectronics research,” Nasr said.
    The ultra-thin body of the molybdenum disulfide photodetectors allows for better electrostatic control and ensures they can operate with low power — a critical need for the smart glass technology of the future.
    “The photodetectors need to work in resource-constrained or inaccessible locations that by nature do not have access to sources of unrestricted electricity,” Das said. “Therefore, they need to rely on pre-stored energy harvested from sources such as wind or solar.”
    If developed commercially, smart glass could lead to technology advances in wide-ranging sectors of industry including in manufacturing, civil infrastructure, energy, health care, transportation and aerospace engineering, according to the researchers. The technology could be applied in biomedical imaging, security surveillance, environmental sensing, optical communication, night vision, motion detection and collision avoidance systems for autonomous vehicles and robots.
    “Smart glass on car windshields could adapt to oncoming high-beam headlights when driving at night by automatically shifting its opacity using the technology,” Robinson said. “And new Boeing 757 planes could utilize the glass on their windows for pilots and passengers to automatically dim sunlight.”

    Story Source:
    Materials provided by Penn State. Original written by Mariah Chuprinski. Note: Content may be edited for style and length.

  • Tricking fake news detectors with malicious user comments

    Fake news detectors, which have been deployed by social media platforms like Twitter and Facebook to add warnings to misleading posts, have traditionally flagged online articles as false based on the story’s headline or content. More recent approaches also consider other signals, such as network features and user engagement, in addition to the story’s content to boost their accuracy.
    However, new research from a team at Penn State’s College of Information Sciences and Technology shows how these fake news detectors can be manipulated through user comments to flag true news as false and false news as true. This attack approach could give adversaries the ability to influence the detector’s assessment of the story even if they are not the story’s original author.
    “Our model does not require the adversaries to modify the target article’s title or content,” explained Thai Le, lead author of the paper and doctoral student in the College of IST. “Instead, adversaries can easily use random accounts on social media to post malicious comments to either demote a real story as fake news or promote a fake story as real news.”
    That is, instead of fooling the detector by attacking the story’s content or source, commenters can attack the detector itself.
    The researchers developed a framework — called Malcom — to generate, optimize, and add malicious comments that were readable and relevant to the article in an effort to fool the detector. Then, they assessed the quality of the artificially generated comments by seeing if humans could differentiate them from those generated by real users. Finally, they tested Malcom’s performance on several popular fake news detectors.
    Malcom performed better than the baseline of existing models, fooling five of the leading neural network-based fake news detectors more than 93% of the time. To the researchers’ knowledge, this is the first model to attack fake news detectors using this method.
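    To make concrete how a detector that ingests user comments can be swayed by those comments, here is a toy sketch (a tiny bag-of-words classifier invented for illustration; it is not Malcom, nor any detector evaluated in the paper, and all of the text data is made up):

```python
# Toy illustration: a fake-news classifier that scores an article together
# with its comments will shift its score when adversarial comments are
# appended. Training data, article, and comments are all invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "officials confirm study results peer reviewed evidence",  # real
    "shocking miracle cure doctors hate this secret hoax",     # fake
    "report cites data from published government statistics",  # real
    "you won't believe this conspiracy cover up exposed",      # fake
]
train_labels = [0, 1, 0, 1]  # 0 = real, 1 = fake

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)

article = "report cites data from new government statistics"
for comments in (
    "useful report thanks for sharing the data",      # honest comments
    "total hoax secret conspiracy cover up exposed",  # malicious comments
):
    doc = article + " " + comments  # the detector sees article plus comments
    p_fake = clf.predict_proba(vec.transform([doc]))[0, 1]
    print(f"P(fake) given comments {comments!r}: {p_fake:.2f}")
```

    Appending words the classifier has learned to associate with fake stories pushes the article’s “fake” score upward; Malcom’s contribution is generating such comments so that they also read as fluent and relevant to human readers.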
    This approach could be appealing to attackers because they do not need to follow the traditional steps of spreading fake news, which primarily involve owning the content. The researchers hope their work will help those charged with creating fake news detectors to develop more robust models and to strengthen methods for detecting and filtering out malicious comments, ultimately helping readers get accurate information to make informed decisions.
    “Fake news has been promoted with deliberate intention to widen political divides, to undermine citizens’ confidence in public figures, and even to create confusion and doubts among communities,” the team wrote in their paper, which will be presented virtually during the 2020 IEEE International Conference on Data Mining.
    Added Le, “Our research illustrates that attackers can exploit this dependency on users’ engagement to fool the detection models by posting malicious comments on online articles, and it highlights the importance of having robust fake news detection models that can defend against adversarial attacks.”
    Contributors to the project include Dongwon Lee, associate professor, and Suhang Wang, assistant professor, both in Penn State’s College of Information Sciences and Technology. This work was supported by the National Science Foundation.

    Story Source:
    Materials provided by Penn State. Original written by Jordan Ford. Note: Content may be edited for style and length.

  • Scientists develop method to detect charge traps in organic semiconductors

    Scientists at Swansea University have developed a very sensitive method to detect the tiny signatures of so-called ‘charge traps’ in organic semiconductors.
    The research, published in Nature Communications and supported by the Welsh Government through the European Regional Development Fund, may change views about what limits the performance of organic solar cells, photodetectors and OLEDs.
    Organic semiconductors are materials mainly made of carbon and hydrogen which can be flexible, low weight and colourful.
    They are the key components in OLED displays, solar cells and photodetectors that can distinguish different colours and even mimic the rods and cones of the human eye.
    The efficiency of organic solar cells at converting sunlight to electricity has recently reached 18%, and the race is on to understand the fundamentals of how they work.
    Lead author Nasim Zarrabi, a PhD student at Swansea University said: “For a long time, we guessed that some charges that are generated by the sunlight can be trapped in the semiconductor layer of the solar cell, but we’ve never really been able to prove it.

    “These traps make solar cells less efficient, photodetectors less sensitive and an OLED TV less bright, so we really need a way to study them and then understand how to avoid them — this is what motivates our work and why these recent findings are so important.”
    Research lead Dr Ardalan Armin, a Sêr Cymru II Rising Star Fellow, commented: “Ordinarily, traps are ‘dead ends’ so to speak; in our study we see them also generating new charges rather than annihilating them completely.
    “We’d predicted this could maybe happen, but until now did not have the experimental accuracy to detect these charges generated via traps.”
    Dr Oskar Sandberg, the theorist behind the work said that he has been waiting for such experimental accuracy for several years.
    “What we observed experimentally has been known in silicon and gallium arsenide as intermediate-band solar cells; in organic solar cells, it had never been shown that traps can generate charges,” he said.
    “The additional charge generated by the traps is not beneficial for generating lots of electricity, because it is very tiny.
    “But it is sufficient to let us study these effects and perhaps find ways to control them in order to make genuine improvements in device performance.”

    Story Source:
    Materials provided by Swansea University. Note: Content may be edited for style and length.

  • A DNA-based molecular tagging system that could take the place of printed barcodes

    Many people have had the experience of being poked in the back by a plastic tag while trying on clothes in a store. That is just one example of radio frequency identification technology, which has become a mainstay not just in retail but also in manufacturing, logistics, transportation, health care and more. Other tagging systems include the scannable barcode and the QR code.
    Despite their near ubiquity, these object tagging systems have their shortcomings: They may be too large or inflexible for certain applications, they are easily damaged or removed, and they may be impractical to apply in high quantities. But recent advancements in DNA-based data storage and computation offer new possibilities for creating a tagging system that is smaller and lighter than conventional methods.
    That’s the point of Porcupine, a new molecular tagging system introduced by University of Washington and Microsoft researchers. These tags can be programmed and read within seconds using a portable nanopore device. In a new paper published Nov. 3 in Nature Communications, the team describes how dehydrated strands of synthetic DNA can take the place of bulky plastic or printed barcodes. Building on recent developments in DNA sequencing technologies and raw signal processing tools, the team’s inexpensive and user-friendly design forgoes the need for access to specialized labs and equipment.
    “Molecular tagging is not a new idea, but existing methods are still complicated and require access to a lab, which rules out many real-world scenarios,” said lead author Kathryn Doroschak, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “We designed the first portable, end-to-end molecular tagging system that enables rapid, on-demand encoding and decoding at scale, and which is more accessible than existing molecular tagging methods.”
    Instead of radio waves or printed lines, the Porcupine tagging scheme relies on a set of distinct DNA strands called molecular bits, or “molbits” for short, that incorporate highly separable nanopore signals to ease later readout. Each individual molbit comprises one of 96 unique barcode sequences combined with a longer DNA fragment selected from a set of predetermined sequence lengths. Under the Porcupine system, the binary zeros and ones of a digital tag are signified by the presence or absence of each of the 96 molbits.
    “We wanted to prove the concept while achieving a high rate of accuracy, hence the initial 96 barcodes, but we intentionally designed our system to be modular and extensible,” said co-author Karin Strauss, senior principal research manager at Microsoft Research and affiliate professor in the Allen School. “With these initial barcodes, Porcupine can produce roughly 4.2 billion unique tags using basic laboratory equipment without compromising reliability upon readout.”
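    The presence/absence scheme is easy to picture in code. In the sketch below (Python; the helper names and the toy payload are invented for illustration and are not from the Porcupine paper), a tag is simply the subset of the 96 molbits that get mixed into it:

```python
# Illustrative sketch of presence/absence tagging with 96 "molbits".
# A digital tag is a 96-bit string: bit i == 1 means molbit i is mixed
# into the physical tag, bit i == 0 means it is left out.
# Function names and the payload are hypothetical, not from the paper.

NUM_MOLBITS = 96

def encode_tag(bits: str) -> set[int]:
    """Return the indices of the molbits to include for this bit string."""
    assert len(bits) == NUM_MOLBITS and set(bits) <= {"0", "1"}
    return {i for i, b in enumerate(bits) if b == "1"}

def decode_tag(detected: set[int]) -> str:
    """Recover the bit string from the molbits detected during readout."""
    return "".join("1" if i in detected else "0" for i in range(NUM_MOLBITS))

payload = "1011" * 24  # toy 96-bit tag
molbits_to_mix = encode_tag(payload)
assert decode_tag(molbits_to_mix) == payload
print(f"{len(molbits_to_mix)} of {NUM_MOLBITS} molbits present in the tag")
```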
    Although DNA is notoriously expensive to read and write, Porcupine gets around this by prefabricating the fragments of DNA. In addition to lowering the cost, this approach has the added advantage of enabling users to arbitrarily mix existing strands to quickly and easily create new tags. The molbits are prepared for readout during initial tag assembly and then dehydrated to extend the shelf life of the tags. This approach protects against contamination from other DNA present in the environment while simultaneously reducing readout time later.
    Another advantage of the Porcupine system is that molbits are extremely tiny, measuring only a few hundred nanometers in length. In practical terms, this means each molecular tag is small enough to fit over a billion copies within one square millimeter of an object’s surface. This makes them ideal for keeping tabs on small items or flexible surfaces that aren’t suited to conventional tagging methods. Invisible to the naked eye, the nanoscale form factor also adds another layer of security compared to conventional tags.
    “Unlike existing inventory control methods, DNA tags can’t be detected by sight or touch. Practically speaking, this means they are difficult to tamper with,” said senior author Jeff Nivala, a research scientist at the Allen School. “This makes them ideal for tracking high-value items and separating legitimate goods from forgeries. A system like Porcupine could also be used to track important documents. For example, you could envision molecular tagging being used to track voters’ ballots and prevent tampering in future elections.”
    To read the data in a Porcupine tag, a user rehydrates the tag and runs it through a portable nanopore device. To demonstrate, the researchers encoded and then decoded their lab acronym, “M-I-S-L,” reliably and within a few seconds using the Porcupine system. As advancements in nanopore technologies make them increasingly affordable, the team believes molecular tagging could become an increasingly attractive option in a variety of real-world settings.
    “Porcupine is one more exciting example of a hybrid molecular-electronic system, combining molecular engineering, new sensing technology and machine learning to enable new applications,” said co-author Luis Ceze, a professor in the Allen School.

  • New AI tool provides much-needed help to protein scientists across the world

    Using artificial intelligence, UCPH researchers have solved a problem that until now has been the stumbling block for important protein research into the dynamics behind diseases such as cancer, Alzheimer’s and Parkinson’s, as well as in the development of sustainable chemistry and new gene-editing technologies.
    Analysing the huge datasets that researchers collect with microscopy and the smFRET technique to see how proteins move and interact with their surroundings has always been time-consuming, and it demands a high level of expertise; hence the proliferation of servers and hard drives stuffed with unanalysed data. Now researchers at the Department of Chemistry, the Nano-Science Center, the Novo Nordisk Foundation Center for Protein Research and the Niels Bohr Institute, University of Copenhagen, have developed a machine learning algorithm to do the heavy lifting.
    “We used to sort data until we went loopy. Now our data is analysed at the touch of button. And, the algorithm does it at least as well or better than we can. This frees up resources for us to collect more data than ever before and get faster results,” explains Simon Bo Jensen, a biophysicist and PhD student at the Department of Chemistry and the Nano-Science Center.
    The algorithm has learned to recognize protein movement patterns, allowing it to classify data sets in seconds — a process that typically takes experts several days to accomplish.
    “Until now, we sat with loads of raw data in the form of thousands of patterns. We used to check through it manually, one at a time. In doing so, we became the bottleneck of our own research. Even for experts, conducting consistent work and reaching the same conclusions time and time again is difficult. After all, we’re humans who tire and are prone to error,” says Simon Bo Jensen.
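    At its core, the task the algorithm automates is time-series classification: given a raw intensity trace, decide which movement pattern it shows. The sketch below (PyTorch; the architecture, trace length and number of classes are assumptions made for illustration, not the UCPH group’s actual model) shows the general shape of such a classifier:

```python
# Illustrative sketch: classifying smFRET time traces with a small 1D CNN.
# Architecture, trace length, and class count are assumed for illustration.
import torch
import torch.nn as nn

TRACE_LEN = 500    # time points per trace (assumed)
NUM_CLASSES = 4    # number of distinct movement patterns (assumed)

class TraceClassifier(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> fixed-size features
            nn.Flatten(),
            nn.Linear(32, NUM_CLASSES),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, TRACE_LEN) raw intensity traces
        return self.net(x)

model = TraceClassifier()
traces = torch.randn(8, 1, TRACE_LEN)   # a batch of dummy traces
patterns = model(traces).argmax(dim=1)  # predicted pattern per trace
print(patterns)
```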
    Just a second’s work for the algorithm
    The UCPH researchers’ studies of the relationship between protein movements and functions are internationally recognized and essential for understanding how the human body functions. For example, diseases including cancer, Alzheimer’s and Parkinson’s are caused by proteins clumping up or changing their behaviour. The gene-editing technology CRISPR, which won the Nobel Prize in Chemistry this year, also relies on the ability of proteins to cut and splice specific DNA sequences. When UCPH researchers like Guillermo Montoya and Nikos Hatzakis study how these processes take place, they make use of microscopy data.

    “Before we can treat serious diseases or take full advantage of CRISPR, we need to understand how proteins, the smallest building blocks, work. This is where protein movement and dynamics come into play. And this is where our tool is of tremendous help,” says Guillermo Montoya, Professor at the Novo Nordisk Foundation Center for Protein Research.
    Attention from around the world
    It appears that protein researchers from around the world have been missing just such a tool. Several international research groups have already come forward and expressed interest in using the algorithm.
    “This AI tool is a huge bonus for the field as a whole because it provides common standards, ones that weren’t there before, for when researchers across the world need to compare data. Previously, much of the analysis was based on subjective opinions about which patterns were useful, and those can vary from research group to research group. Now we are equipped with a tool that can ensure we all reach the same conclusions,” explains research director Nikos Hatzakis, Associate Professor at the Department of Chemistry and Affiliate Associate Professor at the Novo Nordisk Foundation Center for Protein Research.
    He adds that the tool offers a different perspective as well:
    “While analysing the choreography of protein movement remains a niche, it has gained more and more ground as the advanced microscopes needed to do so have become cheaper. Still, analysing data requires a high level of expertise. Our tool makes the method accessible to a greater number of researchers in biology and biophysics, even those without specific expertise, whether it’s research into the coronavirus or the development of new drugs or green technologies.”

  • Students develop tool to predict the carbon footprint of algorithms

    On a daily basis, and perhaps without realizing it, most of us are in close contact with advanced AI methods known as deep learning. Deep learning algorithms churn whenever we use Siri or Alexa, when Netflix suggests movies and TV shows based on our viewing histories, or when we communicate with a website’s customer service chatbot.
    However, this rapidly evolving technology, otherwise expected to serve as an effective weapon against climate change, has a downside that many people are unaware of: sky-high energy consumption. Artificial intelligence, and particularly the subfield of deep learning, appears likely to become a significant climate culprit should industry trends continue. In only six years, from 2012 to 2018, the compute needed for deep learning grew 300,000%. Yet the energy consumption and carbon footprint associated with developing algorithms are rarely measured, despite numerous studies that clearly demonstrate the growing problem.
    In response to the problem, two students at the University of Copenhagen’s Department of Computer Science, Lasse F. Wolff Anthony and Benjamin Kanding, together with Assistant Professor Raghavendra Selvan, have developed a software programme they call Carbontracker. The programme can calculate and predict the energy consumption and CO2 emissions of training deep learning models.
    “Developments in this field are going insanely fast and deep learning models are constantly becoming larger in scale and more advanced. Right now, there is exponential growth. And that means an increasing energy consumption that most people seem not to think about,” according to Lasse F. Wolff Anthony.
    One training session = the annual energy consumption of 126 Danish homes
    Deep learning training is the process during which the mathematical model learns to recognize patterns in large datasets. It’s an energy-intensive process that takes place on specialized, power-intensive hardware running 24 hours a day.

    “As datasets grow larger by the day, the problems that algorithms need to solve become more and more complex,” states Benjamin Kanding.
    One of the biggest deep learning models developed thus far is the advanced language model known as GPT-3. In a single training session, it is estimated to use the equivalent of a year’s energy consumption of 126 Danish homes, and emit the same amount of CO2 as 700,000 kilometres of driving.
    “Within a few years, there will probably be several models that are many times larger,” says Lasse F. Wolff Anthony.
    Room for improvement
    “Should the trend continue, artificial intelligence could end up being a significant contributor to climate change. Jamming the brakes on technological development is not the point; these developments offer fantastic opportunities for helping our climate. Instead, it is about becoming aware of the problem and thinking: How might we improve?” explains Benjamin Kanding.
    The idea of Carbontracker, which is a free programme, is to provide the field with a foundation for reducing the climate impact of models. Among other things, the programme gathers information on how much CO2 is used to produce energy in whichever region the deep learning training is taking place. Doing so makes it possible to convert energy consumption into CO2 emission predictions.
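    The conversion at the heart of this is a multiplication of measured energy by the local grid’s carbon intensity. Here is a minimal sketch (Python; the intensity figures below are rough placeholders chosen for illustration, not Carbontracker’s live regional data):

```python
# Minimal sketch of the energy-to-CO2 conversion Carbontracker performs.
# Carbon intensities are rough placeholder values, not real-time data.

CARBON_INTENSITY_G_PER_KWH = {
    "estonia": 800.0,  # assumed: a fossil-heavy grid
    "sweden": 13.0,    # assumed: a largely fossil-free grid
}

def co2eq_kg(energy_kwh: float, region: str) -> float:
    """Estimate CO2-equivalent emissions for energy used in a region."""
    return energy_kwh * CARBON_INTENSITY_G_PER_KWH[region] / 1000.0

training_energy_kwh = 500.0  # hypothetical measured training consumption
for region in ("estonia", "sweden"):
    print(f"{region}: {co2eq_kg(training_energy_kwh, region):.1f} kg CO2eq")
```

    With placeholder intensities like these, the same training run differs by roughly a factor of 60 between the two regions, which is the kind of gap the students describe.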
    Among their recommendations, the two computer science students suggest that deep learning practitioners look at when their model trainings take place, as power is not equally green over a 24-hour period, as well as what type of hardware and algorithms they deploy.
    “It is possible to reduce the climate impact significantly. For example, it is relevant if one opts to train their model in Estonia or Sweden, where the carbon footprint of a model training can be reduced by more than 60 times thanks to greener energy supplies. Algorithms also vary greatly in their energy efficiency. Some require less compute, and thereby less energy, to achieve similar results. If one can tune these types of parameters, things can change considerably,” concludes Lasse F. Wolff Anthony.