More stories

  • Time to rethink predicting pandemic infection rates?

    During the first months of the COVID-19 pandemic, Joseph Lee McCauley, a physics professor at the University of Houston, was watching the daily data for six countries and wondered if infections were really growing exponentially. By extracting the doubling times from the data, he became convinced they were.
    Doubling times and exponential growth go hand in hand, so it became clear to him that modeling based on past infections is impossible, because the rate changes unforeseeably from day to day due to social distancing and lockdown efforts. And the rate changes differ for each country based on the extent of their social distancing.
    In AIP Advances, from AIP Publishing, McCauley explains how he combined math in the form of Chebyshev’s inequality with a statistical ensemble to understand how macroscopic exponential growth with different daily rates arises from person-to-person infection.
    “Discretized ordinary chemical kinetic equations applied to infected, uninfected, and recovered parts of the population allowed me to organize the data, so I could separate the effects of social distancing and recoveries within daily infection rates,” McCauley said.
    Plateauing without peaking occurs if the recovery rate is too low, and the U.S., U.K., and Sweden fall into that category. Equations cannot be iterated to look into the future, because tomorrow’s rate is unknown until it unfolds.
    “Modelers tend to misapply the chemical kinetic equations as SIR (Susceptible, Infectious, or Recovered) or SEIR (Susceptible, Exposed, Infectious, or Recovered) models, because they are trying to generate future rates from past rates,” McCauley said. “But the past doesn’t allow you to use equations to predict the future in a pandemic, because social distancing changes the rates daily.”
    McCauley discovered that, using only today’s and yesterday’s infection rates, he could produce a forecast on a hand calculator within five seconds that is as good as any computer model.
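    The arithmetic behind that quick forecast can be illustrated with a short script. The sketch below uses the standard doubling-time relationship rather than McCauley’s published procedure, and the case counts are made-up numbers: if cumulative infections grow from yesterday’s total to today’s in one day, the implied doubling time is ln 2 / ln(today/yesterday), and holding that one-day growth factor fixed gives a rough next-day forecast.
    ```python
    import math

    def doubling_time_days(n_yesterday: float, n_today: float) -> float:
        """Doubling time implied by one day of growth in cumulative cases."""
        daily_growth = n_today / n_yesterday            # one-day growth factor
        return math.log(2) / math.log(daily_growth)     # days needed to double

    def next_day_forecast(n_yesterday: float, n_today: float) -> float:
        """Project tomorrow's total by assuming today's growth factor persists."""
        return n_today * (n_today / n_yesterday)

    # Illustrative, made-up cumulative case counts
    yesterday, today = 10_000, 10_700
    print(f"implied doubling time: {doubling_time_days(yesterday, today):.1f} days")
    print(f"naive forecast for tomorrow: {next_day_forecast(yesterday, today):.0f} cases")
    ```
    Such a projection is only good for about a day ahead, which is McCauley’s point: the next day’s growth factor depends on behaviour that has not happened yet.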
    “Lockdowns and social distancing work,” said McCauley. “Compare Austria, Germany, Taiwan, Denmark, Finland, and several other countries that peaked in early April, with the U.S., U.K., Sweden, and others with no lockdown or half-hearted lockdowns — they’ve never even plateaued, much less peaked.”
    He stresses that forecasting cannot foresee peaking or even plateauing. Plateauing does not imply peaking, and if peaking occurs, there is nothing in the data to show when it will happen. It happens when the recovery rate is greater than the rate of new infections.
    “Social distancing and lockdowns reduce the infection rate but can’t cause peaking,” McCauley said. “Social distancing and recoveries are two separate terms within the daily kinetic rate equations.”
    The implication of this work is that research money could be better spent than on expensive epidemic modeling.
    “Politicians should know enough arithmetic to be given instruction on the implications,” McCauley said. “The effects of lockdowns and social distancing show up in the observed doubling times, and there is also a predicted doubling time based on two days, which serves as a good forecast of the future.”

  • In a pandemic, migration away from dense cities more effective than closing borders

    Pandemics are fueled, in part, by dense populations in large cities where networks of buildings, crowded sidewalks, and public transportation force people into tighter conditions. This contrasts with conditions in rural areas, where there is more space available per person.
    According to common sense, being in less crowded areas during a pandemic is safer. But small town mayors want to keep people safe, too, and migration of people from cities to rural towns brings concerns. During the COVID-19 pandemic, closing national borders and borders between states and regions has been prevalent. But does it really help?
    In a paper published in Chaos, by AIP Publishing, two researchers decided to put this hypothesis to the test and discover whether confinement and travel bans are really effective ways to limit the spread of a pandemic disease. Specifically, they focused on the movement of people from larger cities to smaller ones and tested the results of this one-way migration.
    “Instead of taking mobility, or the lack of mobility, for granted, we decided to explore how an altered mobility would affect the spreading,” author Massimiliano Zanin said. “The real answer lies in the sign of the result. People always assume that closing borders is good. We found that it is almost always bad.”
    The model used by the authors is simplified, omitting many of the details that affect migration patterns and disease spread. But its focus on changes in population density indicates that letting people migrate to less dense areas may reduce the spread of disease more effectively than travel bans.
    Zanin and collaborator David Papo placed a hypothetical group of people in two locations and assumed their travel was in random movement patterns. They used SIR dynamics, which is common in epidemiological studies of disease movement. SIR stands for susceptible, infected, and recovered — classifications used to label groups in a simulation and track disease spread according to their interactions.
    They ran 10,000 iterations of the simulation to determine the resulting disease spread among people in two locations when migration is one way: from dense cities to less dense towns. They also studied the effect of “forced migration,” which moves healthy people out of dense cities at the onset of a pandemic.
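    To make that setup concrete, here is a minimal two-patch, discrete-time SIR sketch with optional one-way migration from a dense city to a sparse town. It is not Zanin and Papo’s agent-based model, and the populations, areas, and rates below are illustrative assumptions rather than values from the paper.
    ```python
    import numpy as np

    def run_two_patch_sir(migrate_fraction=0.0, days=200):
        """Discrete-time SIR in a dense 'city' (patch 0) and a sparse 'town' (patch 1).

        Transmission in each patch scales with local population density, so moving
        susceptible people out of the city lowers its infection pressure.
        All numbers are illustrative assumptions, not fitted to any dataset.
        """
        N    = np.array([1_000_000.0, 100_000.0])   # populations
        area = np.array([100.0, 1_000.0])           # arbitrary area units
        I = np.array([100.0, 1.0])                  # initial infections
        S, R = N - I, np.zeros(2)

        # one-way migration of healthy people at the onset of the outbreak
        movers = migrate_fraction * S[0]
        S[0] -= movers; N[0] -= movers
        S[1] += movers; N[1] += movers

        beta0, gamma, density_ref = 0.5, 0.1, 10_000.0
        for _ in range(days):
            beta = beta0 * (N / area) / density_ref   # denser patch -> more transmission
            new_inf = np.minimum(beta * S * I / N, S)
            new_rec = gamma * I
            S -= new_inf
            I += new_inf - new_rec
            R += new_rec
        return R.sum() / N.sum()                      # overall attack rate

    print("no migration   :", round(run_two_patch_sir(0.0), 3))
    print("20% city->town :", round(run_two_patch_sir(0.2), 3))
    ```
    Under these toy parameters the overall attack rate drops when part of the city’s susceptible population moves out, which is the qualitative effect the study reports; the actual work uses agent-based dynamics averaged over 10,000 stochastic runs.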
    The results showed that while movement from big cities to small towns might be slightly less safe for the people in small towns, in a global pandemic this reduction in the density of highly populated areas is better for the population as a whole.
    “Collaboration between different governments and administrations is an essential ingredient towards controlling a pandemic, and one should consider the possibility of small-scale sacrifices to reach a global benefit,” Zanin said.

    Story Source:
    Materials provided by American Institute of Physics.

  • Quantum algorithm breakthrough

    Researchers led by City College of New York physicist Pouyan Ghaemi report the development of a quantum algorithm with the potential to study a class of many-electron quantum systems using quantum computers. Their paper, entitled “Creating and Manipulating a Laughlin-Type ν=1/3 Fractional Quantum Hall State on a Quantum Computer with Linear Depth Circuits,” appears in the December issue of PRX Quantum, a journal of the American Physical Society.
    “Quantum physics is the fundamental theory of nature which leads to formation of molecules and the resulting matter around us,” said Ghaemi, assistant professor in CCNY’s Division of Science. “It is already known that when we have a macroscopic number of quantum particles, such as electrons in the metal, which interact with each other, novel phenomena such as superconductivity emerge.”
    However, until now, according to Ghaemi, tools to study systems with large numbers of interacting quantum particles and their novel properties have been extremely limited.
    “Our research has developed a quantum algorithm which can be used to study a class of many-electron quantum systems using quantum computers. Our algorithm opens a new venue to use the new quantum devices to study problems which are quite challenging to study using classical computers. Our results are new and motivate many follow up studies,” added Ghaemi.
    On possible applications for this advancement, Ghaemi, who’s also affiliated with the Graduate Center, CUNY, noted: “Quantum computers have witnessed extensive developments during the last few years. Development of new quantum algorithms, regardless of their direct application, will contribute to realize applications of quantum computers.
    “I believe the direct application of our results is to provide tools to improve quantum computing devices. Their direct real-life application would emerge when quantum computers can be used for daily life applications.”
    His collaborators included scientists from Western Washington University; the University of California, Santa Barbara; Google AI Quantum; and the University of Michigan, Ann Arbor.

    Story Source:
    Materials provided by City College of New York.

  • System brings deep learning to 'internet of things' devices

    Deep learning is everywhere. This branch of artificial intelligence curates your social media and serves your Google search results. Soon, deep learning could also check your vitals or set your thermostat. MIT researchers have developed a system that could bring deep learning neural networks to new — and much smaller — places, like the tiny computer chips in wearable medical devices, household appliances, and the 250 billion other objects that constitute the “internet of things” (IoT).
    The system, called MCUNet, designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and processing power. The technology could facilitate the expansion of the IoT universe while saving energy and improving data security.
    The research will be presented at next month’s Conference on Neural Information Processing Systems. The lead author is Ji Lin, a PhD student in Song Han’s lab in MIT’s Department of Electrical Engineering and Computer Science. Co-authors include Han and Yujun Lin of MIT, Wei-Ming Chen of MIT and National Taiwan University, and John Cohn and Chuang Gan of the MIT-IBM Watson AI Lab.
    The Internet of Things
    The IoT was born in the early 1980s. Grad students at Carnegie Mellon University, including Mike Kazar ’78, connected a Coca-Cola machine to the internet. The group’s motivation was simple: laziness. They wanted to use their computers to confirm the machine was stocked before trekking from their office to make a purchase. It was the world’s first internet-connected appliance. “This was pretty much treated as the punchline of a joke,” says Kazar, now a Microsoft engineer. “No one expected billions of devices on the internet.”
    Since that Coke machine, everyday objects have become increasingly networked into the growing IoT. That includes everything from wearable heart monitors to smart fridges that tell you when you’re low on milk. IoT devices often run on microcontrollers — simple computer chips with no operating system, minimal processing power, and less than one thousandth of the memory of a typical smartphone. So pattern-recognition tasks like deep learning are difficult to run locally on IoT devices. For complex analysis, IoT-collected data is often sent to the cloud, making it vulnerable to hacking.

    “How do we deploy neural nets directly on these tiny devices? It’s a new research area that’s getting very hot,” says Han. “Companies like Google and ARM are all working in this direction.” Han is too.
    With MCUNet, Han’s group codesigned two components needed for “tiny deep learning” — the operation of neural networks on microcontrollers. One component is TinyEngine, an inference engine that directs resource management, akin to an operating system. TinyEngine is optimized to run a particular neural network structure, which is selected by MCUNet’s other component: TinyNAS, a neural architecture search algorithm.
    System-algorithm codesign
    Designing a deep network for microcontrollers isn’t easy. Existing neural architecture search techniques start with a big pool of possible network structures based on a predefined template, then they gradually find the one with high accuracy and low cost. While the method works, it’s not the most efficient. “It can work pretty well for GPUs or smartphones,” says Lin. “But it’s been difficult to directly apply these techniques to tiny microcontrollers, because they are too small.”
    So Lin developed TinyNAS, a neural architecture search method that creates custom-sized networks. “We have a lot of microcontrollers that come with different power capacities and different memory sizes,” says Lin. “So we developed the algorithm [TinyNAS] to optimize the search space for different microcontrollers.” The customized nature of TinyNAS means it can generate compact neural networks with the best possible performance for a given microcontroller — with no unnecessary parameters. “Then we deliver the final, efficient model to the microcontroller,” says Lin.
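    The core idea of tailoring the search space to a device’s memory budget can be sketched in a few lines. This is not the published TinyNAS code: the candidate knobs (input resolution and width multiplier) and the crude peak-memory estimate below are illustrative assumptions, but they show how a per-microcontroller budget prunes the search space before any accuracy search runs.
    ```python
    from itertools import product

    def estimate_peak_activation_kb(resolution: int, width_mult: float,
                                    base_channels: int = 16) -> float:
        """Very rough proxy: the largest early feature map dominates peak SRAM.

        Assumes an 8-bit feature map of size (res/2) x (res/2) x channels after
        the first stride-2 layer. Purely illustrative, not MCUNet's estimator.
        """
        channels = int(base_channels * width_mult)
        return (resolution // 2) ** 2 * channels / 1024.0   # kilobytes

    def prune_search_space(sram_budget_kb: float):
        """Keep only (resolution, width multiplier) pairs that fit the MCU's SRAM."""
        resolutions = [96, 128, 160, 192, 224]
        width_mults = [0.25, 0.5, 0.75, 1.0]
        return [(r, w) for r, w in product(resolutions, width_mults)
                if estimate_peak_activation_kb(r, w) <= sram_budget_kb]

    # Hypothetical microcontrollers with different SRAM budgets
    for name, budget_kb in [("small MCU", 64), ("mid MCU", 160), ("large MCU", 320)]:
        kept = prune_search_space(budget_kb)
        print(f"{name}: {len(kept)} of 20 candidate configurations fit within {budget_kb} KB")
    ```
    A real search would then look for the most accurate architecture inside each pruned space, so that every device gets the largest model its memory can actually hold.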

    To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight — instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. “It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine.
    The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile-time. “We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.” In the group’s tests of TinyEngine, the size of the compiled binary code was between 1.9 and five times smaller than comparable microcontroller inference engines from Google and ARM. TinyEngine also contains innovations that reduce runtime, including in-place depth-wise convolution, which cuts peak memory usage nearly in half. After codesigning TinyNAS and TinyEngine, Han’s team put MCUNet to the test.
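    The memory saving from the in-place depth-wise convolution mentioned above is easy to see in a sketch. Because a depth-wise convolution processes each channel independently, each channel’s output can be written back over its own input using only a one-channel scratch buffer instead of a full second activation tensor. The NumPy version below illustrates that buffering idea only; it is not TinyEngine’s optimized C kernel.
    ```python
    import numpy as np

    def depthwise_conv3x3_inplace(x: np.ndarray, kernels: np.ndarray) -> np.ndarray:
        """3x3 'same' depth-wise convolution that overwrites x channel by channel.

        x:       activations of shape (C, H, W), modified in place
        kernels: one 3x3 filter per channel, shape (C, 3, 3)

        Peak extra memory is a single (H, W) scratch buffer rather than a full
        (C, H, W) output tensor, roughly halving peak activation memory.
        """
        C, H, W = x.shape
        scratch = np.zeros((H, W), dtype=x.dtype)       # one-channel working buffer
        for c in range(C):
            padded = np.pad(x[c], 1)                    # zero padding for 'same' output
            for i in range(H):
                for j in range(W):
                    scratch[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernels[c])
            x[c] = scratch                              # overwrite this channel's input
        return x

    # Tiny example with random activations and per-channel filters
    rng = np.random.default_rng(0)
    acts = rng.standard_normal((8, 16, 16)).astype(np.float32)
    filts = rng.standard_normal((8, 3, 3)).astype(np.float32)
    depthwise_conv3x3_inplace(acts, filts)
    ```
    A production kernel would use an even smaller rolling buffer, but the principle is the same: never hold two full activation tensors in SRAM at once.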
    MCUNet’s first challenge was image classification. The researchers used the ImageNet database to train the system with labeled images, then to test its ability to classify novel ones. On a commercial microcontroller they tested, MCUNet successfully classified 70.7 percent of the novel images — the previous state-of-the-art neural network and inference engine combo was just 54 percent accurate. “Even a 1 percent improvement is considered significant,” says Lin. “So this is a giant leap for microcontroller settings.”
    The team found similar results in ImageNet tests of three other microcontrollers. And on both speed and accuracy, MCUNet beat the competition for audio and visual “wake-word” tasks, where a user initiates an interaction with a computer using vocal cues (think: “Hey, Siri”) or simply by entering a room. The experiments highlight MCUNet’s adaptability to numerous applications.
    “Huge potential”
    The promising test results give Han hope that it will become the new industry standard for microcontrollers. “It has huge potential,” he says.
    The advance “extends the frontier of deep neural network design even farther into the computational domain of small energy-efficient microcontrollers,” says Kurt Keutzer, a computer scientist at the University of California at Berkeley, who was not involved in the work. He adds that MCUNet could “bring intelligent computer-vision capabilities to even the simplest kitchen appliances, or enable more intelligent motion sensors.”
    MCUNet could also make IoT devices more secure. “A key advantage is preserving privacy,” says Han. “You don’t need to transmit the data to the cloud.”
    Analyzing data locally reduces the risk of personal information being stolen — including personal health data. Han envisions smart watches with MCUNet that don’t just sense users’ heartbeat, blood pressure, and oxygen levels, but also analyze and help them understand that information. MCUNet could also bring deep learning to IoT devices in vehicles and rural areas with limited internet access.
    Plus, MCUNet’s slim computing footprint translates into a slim carbon footprint. “Our big dream is for green AI,” says Han, adding that training a large neural network can burn carbon equivalent to the lifetime emissions of five cars. MCUNet on a microcontroller would require a small fraction of that energy. “Our end goal is to enable efficient, tiny AI with less computational resources, less human resources, and less data,” says Han.

  • Order from chaos: Seemingly random photonic crystals greatly improve laser scanning

    Scanning lasers — from barcode scanners at the supermarket to cameras on newer smartphones — are an indispensable part of our daily lives, relying on lasers and detectors for pinpoint precision.
    Distance and object recognition using LiDAR — a portmanteau of light and radar — is becoming increasingly common: reflected laser beams record the surrounding environment, providing crucial data for autonomous cars, agricultural machines, and factory robots.
    Current technology bounces the laser beams off of moving mirrors, a mechanical method that results in slower scanning speeds and inaccuracies, not to mention the large physical size and complexity of devices housing a laser and mirrors.
    Publishing in Nature Communications, a research team from Kyoto University’s Graduate School of Engineering describes a new beam-scanning device utilizing ‘photonic crystals’, eliminating the need for moving parts.
    Instead of arranging the lattice points of the crystals in an orderly array, the researchers found that varying the lattice points’ shapes and positions caused the laser beam to be emitted in unique directions.
    “What results is a lattice of photonic crystals that looks like a slab of Swiss cheese, where each crystal is calculated to emit the beam in a specific direction,” explains Susumu Noda, who led the team.

    “By eliminating mechanical mirrors, we’ve made a faster and more reliable beam-scanning device.”
    Photonic crystal lasers are a type of ‘semiconductor laser’ whose lattice points can be regarded as nanoscale antennae, which can be arranged to cause a laser beam to be emitted perpendicularly from the surface. But initially the beam would only go in a single direction on a two-dimensional plane; the team needed more area to be covered.
    Arranging the antennae positions cyclically resulted in a successful direction change, but a decrease in power output and deformed shape made this solution unviable.
    “Modulating the antennae positions caused light emitted from adjacent antennae to cancel each other out,” continues Noda, “leading us to try changing antenna sizes.”
    “Eventually, we discovered that adjusting both position and size resulted in a seemingly random photonic crystal, producing an accurate beam without power loss. We called this a ‘dually modulated photonic crystal’.”
    By organizing these crystals — each designed to emit a beam in a unique direction — in a matrix, the team was able to build a compact, switchable, two-dimensional beam scanner without the need for any mechanical parts.
    The scientists have successfully constructed a scanner that can generate beams in one hundred different directions: a resolution of 10×10. This has also been combined with a diverging laser beam, resulting in a new type of LiDAR with enhanced scope to detect objects.
    The team estimates that with further refinements, the resolution could be increased by a factor of 900: up to a 300×300 resolution range.
    “At first there was a great deal of interest in whether a structure that is seemingly so random could actually work,” concludes Noda. “We now believe that eventually we will be able to develop a LiDAR system small enough to hold on a fingertip.”

    Story Source:
    Materials provided by Kyoto University.

  • New green materials could power smart devices using ambient light

    We are increasingly using more smart devices like smartphones, smart speakers, and wearable health and wellness sensors in our homes, offices, and public buildings. However, the batteries they use can deplete quickly and contain toxic, rare, and environmentally damaging chemicals, so researchers are looking for better ways to power the devices.
    One way to power them is by converting indoor light from ordinary bulbs into energy, in a similar way to how solar panels, known as solar photovoltaics, harvest energy from sunlight. However, due to the different properties of the light sources, the materials used for solar panels are not suitable for harvesting indoor light.
    Now, researchers from Imperial College London, Soochow University in China, and the University of Cambridge have discovered that new green materials currently being developed for next-generation solar panels could be useful for indoor light harvesting. They report their findings today in Advanced Energy Materials.
    Co-author Dr Robert Hoye, from the Department of Materials at Imperial, said: “By efficiently absorbing the light coming from lamps commonly found in homes and buildings, the materials we investigated can turn light into electricity with an efficiency already in the range of commercial technologies. We have also already identified several possible improvements, which would allow these materials to surpass the performance of current indoor photovoltaic technologies in the near future.”
    The team investigated ‘perovskite-inspired materials’, which were created to circumvent problems with materials called perovskites, which were developed for next-generation solar cells. Although perovskites are cheaper to make than traditional silicon-based solar panels and deliver similar efficiency, perovskites contain toxic lead substances. This drove the development of perovskite-inspired materials, which are instead based on safer elements like bismuth and antimony.
    Despite being more environmentally friendly, these perovskite-inspired materials are not as efficient at absorbing sunlight. However, the team found that the materials are much more effective at absorbing indoor light, with efficiencies that are promising for commercial applications. Crucially, the researchers demonstrated that the power provided by these materials under indoor illumination is already sufficient to operate electronic circuits.
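    For a rough sense of scale (this back-of-the-envelope estimate is not from the paper), the harvestable power can be estimated from a typical indoor illuminance, an assumed luminous efficacy for white LED light, and a conversion efficiency in the range the authors describe; all three numbers below are illustrative assumptions.
    ```python
    def indoor_pv_power_uw(illuminance_lux: float = 500.0,
                           luminous_efficacy_lm_per_w: float = 300.0,
                           cell_area_cm2: float = 10.0,
                           efficiency: float = 0.15) -> float:
        """Rough estimate of power harvested by an indoor photovoltaic cell.

        Converts illuminance (lux = lm/m^2) to irradiance (W/m^2) using an assumed
        luminous efficacy for white LED light, then applies cell area and conversion
        efficiency. All default values are illustrative assumptions.
        """
        irradiance_w_per_m2 = illuminance_lux / luminous_efficacy_lm_per_w
        area_m2 = cell_area_cm2 * 1e-4
        return irradiance_w_per_m2 * area_m2 * efficiency * 1e6   # microwatts

    print(f"~{indoor_pv_power_uw():.0f} microwatts from a 10 cm2 cell under 500 lux of LED light")
    ```
    A few hundred microwatts is the power scale at which duty-cycled sensor circuits are designed to run, which is consistent with the demonstration that these materials can drive electronic circuits under indoor illumination.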
    Co-author Professor Vincenzo Pecunia, from Soochow University, said: “Our discovery opens up a whole new direction in the search for green, easy-to-make materials to sustainably power our smart devices.
    “In addition to their eco-friendly nature, these materials could potentially be processed onto unconventional substrates such as plastics and fabric, which are incompatible with conventional technologies. Therefore, lead-free perovskite-inspired materials could soon enable battery-free devices for wearables, healthcare monitoring, smart homes, and smart cities.”

    Story Source:
    Materials provided by Imperial College London. Original written by Hayley Dunning.

  • Computer vision app allows easier monitoring of diabetes

    A computer vision technology developed by University of Cambridge engineers has now been turned into a free mobile phone app for regular monitoring of glucose levels in people with diabetes.
    The app uses computer vision techniques to read and record the glucose levels, time, and date displayed on a typical glucose meter via the camera on a mobile phone. The technology, which doesn’t require an internet or Bluetooth connection, works for any type of glucose meter, in any orientation and in a variety of light levels. It also reduces waste by eliminating the need to replace high-quality non-Bluetooth meters, making it a cost-effective solution for the NHS.
    Working with UK glucose testing company GlucoRx, the Cambridge researchers have developed the technology into a free mobile phone app, called GlucoRx Vision, which is now available on the Apple App Store and Google Play Store.
    To use the app, users simply take a picture of their glucose meter and the results are automatically read and recorded, allowing much easier monitoring of blood glucose levels.
    In addition to the glucose meters which people with diabetes use on a daily basis, many other types of digital meters are used in the medical and industrial sectors. However, many of these meters still do not have wireless connectivity, so connecting them to phone tracking apps often requires manual input.
    “These meters work perfectly well, so we don’t want them sent to landfill just because they don’t have wireless connectivity,” said Dr James Charles from Cambridge’s Department of Engineering. “We wanted to find a way to retrofit them in an inexpensive and environmentally-friendly way using a mobile phone app.”
    In addition to his interest in solving the challenge from an engineering point of view, Charles also had a personal interest in the problem. He has type 1 diabetes and needs to take as many as ten glucose readings per day. Each reading is then manually entered into a tracking app to help determine how much insulin he needs to regulate his blood glucose levels.

    “From a purely selfish point of view, this was something I really wanted to develop,” he said.
    “We wanted something that was efficient, quick and easy to use,” said Professor Roberto Cipolla, also from the Department of Engineering. “Diabetes can affect eyesight or even lead to blindness, so we needed the app to be easy to use for those with reduced vision.”
    The computer vision technology behind the GlucoRx app is made up of two steps. First, the screen of the glucose meter is detected. The researchers used a single training image and augmented it with random backgrounds, particularly backgrounds with people. This helps ensure the system is robust when the user’s face is reflected in the phone’s screen.
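    A minimal version of that augmentation step might look like the following: paste the single labelled screen image onto randomly chosen backgrounds at random positions, scales, and brightnesses, so the detector never overfits to one context. The file paths, sizes, and transforms here are hypothetical placeholders, not the team’s training pipeline.
    ```python
    import random
    from pathlib import Path
    from PIL import Image, ImageEnhance

    SCREEN = Image.open("meter_screen.png")                  # hypothetical labelled crop
    BACKGROUNDS = list(Path("backgrounds").glob("*.jpg"))    # hypothetical background set

    def make_detector_sample(out_size=(640, 480)):
        """Composite the screen crop onto a random background; return image and box."""
        bg = Image.open(random.choice(BACKGROUNDS)).convert("RGB").resize(out_size)
        # scale the screen so it always fits inside the background
        max_scale = min(out_size[0] / SCREEN.width, out_size[1] / SCREEN.height)
        scale = random.uniform(0.3, 0.8) * max_scale
        screen = SCREEN.resize((int(SCREEN.width * scale), int(SCREEN.height * scale)))
        screen = ImageEnhance.Brightness(screen).enhance(random.uniform(0.6, 1.4))
        x = random.randint(0, out_size[0] - screen.width)
        y = random.randint(0, out_size[1] - screen.height)
        bg.paste(screen, (x, y))
        # bounding-box label for the screen detector
        return bg, (x, y, x + screen.width, y + screen.height)

    samples = [make_detector_sample() for _ in range(1000)]
    ```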
    Second, a neural network called LeDigit detects each digit on the screen and reads it. The network is trained with computer-generated synthetic data, avoiding the need for labour-intensive labelling of data which is commonly needed to train a neural network.
    “Since the font on these meters is digital, it’s easy to train the neural network to recognise lots of different inputs and synthesise the data,” said Charles. “This makes it highly efficient to run on a mobile phone.”
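    Because the digits on such meters use a fixed seven-segment style, labelled training data can be rendered synthetically, which is the kind of shortcut described above. The sketch below generates noisy seven-segment digit images with known labels; the geometry and noise model are illustrative assumptions, not the actual LeDigit training setup.
    ```python
    import numpy as np

    # Which of the seven segments (A..G) are lit for each digit 0-9
    SEGMENTS = {
        0: "ABCDEF", 1: "BC",     2: "ABDEG", 3: "ABCDG",   4: "BCFG",
        5: "ACDFG",  6: "ACDEFG", 7: "ABC",   8: "ABCDEFG", 9: "ABCDFG",
    }
    # Each segment as a (row slice, column slice) region on a 32x20 canvas
    REGIONS = {
        "A": (slice(2, 5),   slice(4, 16)),   # top bar
        "B": (slice(4, 16),  slice(14, 17)),  # upper right
        "C": (slice(16, 28), slice(14, 17)),  # lower right
        "D": (slice(27, 30), slice(4, 16)),   # bottom bar
        "E": (slice(16, 28), slice(3, 6)),    # lower left
        "F": (slice(4, 16),  slice(3, 6)),    # upper left
        "G": (slice(14, 17), slice(4, 16)),   # middle bar
    }

    def render_digit(d: int, rng: np.random.Generator) -> np.ndarray:
        """Render one noisy 32x20 seven-segment digit image with values in [0, 1]."""
        img = np.zeros((32, 20), dtype=np.float32)
        for seg in SEGMENTS[d]:
            rows, cols = REGIONS[seg]
            img[rows, cols] = 1.0
        img += rng.normal(0.0, 0.1, img.shape).astype(np.float32)  # sensor-style noise
        return img.clip(0.0, 1.0)

    def make_dataset(n: int, seed: int = 0):
        """Return n labelled synthetic digit images for training a classifier."""
        rng = np.random.default_rng(seed)
        labels = rng.integers(0, 10, size=n)
        images = np.stack([render_digit(int(d), rng) for d in labels])
        return images, labels

    images, labels = make_dataset(10_000)
    print(images.shape, labels[:10])
    ```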
    “It doesn’t matter which orientation the meter is in — we tested it in all types of orientations, viewpoints and light levels,” said Cipolla, who is also a Fellow of Jesus College. “The app will vibrate when it’s read the information, so you get a clear signal when you’ve done it correctly. The system is accurate across a range of different types of meters, with read accuracies close to 100%.”

    In addition to blood glucose monitors, the researchers also tested their system on other types of digital meters, such as blood pressure monitors and kitchen and bathroom scales. They recently presented their results at the 31st British Machine Vision Conference.
    Gluco-Rx initially approached Cipolla’s team in 2018 to develop a cost-effective and environmentally-friendly solution to the problem of non-connected glucose meters, and once the technology had been shown to be sufficiently robust, the company worked with the Cambridge researchers to develop the app.
    “We have been working in partnership with Cambridge University on this unique solution, which will help change the management of diabetes for years to come,” said Chris Chapman, Chief Operating Officer of GlucoRx. “We will soon make this solution available to all of our more than 250,000 patients.”
    As for Charles, who has been using the app to track his glucose levels, he said it “makes the whole process easier. I’ve now forgotten what it was like to enter the values in manually, but I do know I wouldn’t want to go back to it. There are a few areas in the system which could still be made even better, but all in all I’m very happy with the outcome.”