More stories

  • Scientists adopt deep learning for multi-object tracking

    Implementing algorithms that can simultaneously track multiple objects is essential to unlock many applications, from autonomous driving to advanced public surveillance. However, it is difficult for computers to discriminate between detected objects based on their appearance. Now, researchers at the Gwangju Institute of Science and Technology (GIST) have incorporated deep learning techniques into a multi-object tracking framework, overcoming short-term occlusion and achieving remarkable performance without sacrificing computational speed.
    Computer vision has progressed much over the past decade and made its way into all sorts of relevant applications, both in academia and in our daily lives. There are, however, some tasks in this field that are still extremely difficult for computers to perform with acceptable accuracy and speed. One example is object tracking, which involves recognizing persistent objects in video footage and tracking their movements. While computers can simultaneously track more objects than humans, they often fail to discriminate between the appearances of different objects. This, in turn, can lead the algorithm to mix up objects in a scene and ultimately produce incorrect tracking results.
    At the Gwangju Institute of Science and Technology in Korea, a team of researchers led by Professor Moongu Jeon seeks to solve these issues by incorporating deep learning techniques into a multi-object tracking framework. In a recent study published in Information Sciences, they present a new tracking model based on a technique they call ‘deep temporal appearance matching association (Deep-TAMA)’, which promises innovative solutions to some of the most prevalent problems in multi-object tracking. This paper was made available online in October 2020 and was published in volume 561 of the journal in June 2021.
    Conventional tracking approaches determine object trajectories by associating a bounding box to each detected object and establishing geometric constraints. The inherent difficulty in this approach is in accurately matching previously tracked objects with objects detected in the current frame. Differentiating detected objects based on hand-crafted features like color usually fails because of changes in lighting conditions and occlusions. Thus, the researchers focused on equipping the tracking model with the ability to accurately extract the known features of detected objects and compare them not only with those of other objects in the frame but also with a recorded history of known features. To this end, they combined joint-inference neural networks (JI-Nets) with long short-term memory networks (LSTMs).
    LSTMs help to associate stored appearances with those in the current frame, whereas JI-Nets allow for comparing the appearances of two detected objects simultaneously from scratch — one of the most distinctive aspects of this new approach. Using historical appearances in this way allowed the algorithm to overcome short-term occlusions of the tracked objects. “Compared to conventional methods that pre-extract features from each object independently, the proposed joint-inference method exhibited better accuracy in public surveillance tasks, namely pedestrian tracking,” highlights Dr. Jeon. Moreover, the researchers offset a major drawback of deep learning — low speed — by adopting indexing-based GPU parallelization to reduce computing times. Tests on public surveillance datasets confirmed that the proposed tracking framework offers state-of-the-art accuracy and is therefore ready for deployment.
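    As a rough illustration of the idea (not the published Deep-TAMA architecture, whose details are in the paper), the sketch below pairs an LSTM that summarizes a track's stored appearance history with a small joint head that scores how well a new detection matches that history. All names and dimensions here are illustrative assumptions.

    ```python
    # Minimal sketch (PyTorch), assuming appearance features are already
    # extracted per detection. Illustrative only, not the published model.
    import torch
    import torch.nn as nn

    class AppearanceMatcher(nn.Module):
        def __init__(self, feat_dim=128, hidden_dim=64):
            super().__init__()
            # LSTM over the track's recorded history of appearances
            self.history_lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            # Joint head: scores a (track history, detection) pair together
            self.joint_head = nn.Sequential(
                nn.Linear(hidden_dim + feat_dim, 64),
                nn.ReLU(),
                nn.Linear(64, 1),  # matching logit
            )

        def forward(self, history, detection):
            # history: (batch, time, feat_dim); detection: (batch, feat_dim)
            _, (h_n, _) = self.history_lstm(history)
            joint = torch.cat([h_n[-1], detection], dim=1)
            return self.joint_head(joint)

    matcher = AppearanceMatcher()
    logits = matcher(torch.randn(2, 10, 128), torch.randn(2, 128))
    print(logits.shape)  # torch.Size([2, 1])
    ```

    In a tracker, such scores would feed the association step that decides which detection continues which track; scoring detections against a whole stored history is what lets a track survive a short occlusion.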
    Multi-object tracking unlocks a plethora of applications ranging from autonomous driving to public surveillance, which can help combat crime and reduce the frequency of accidents. “We believe our methods can inspire other researchers to develop novel deep-learning-based approaches to ultimately improve public safety,” concludes Dr. Jeon. For everyone’s sake, let us hope their vision soon becomes a reality!
    Story Source:
    Materials provided by GIST (Gwangju Institute of Science and Technology). Note: Content may be edited for style and length.

  • The mathematics of repulsion for new graphene catalysts

    A new mathematical model helps predict the tiny changes in carbon-based materials that could yield interesting properties.
    Scientists at Tohoku University and colleagues in Japan have developed a mathematical model that abstracts the key effects of changes to the geometries of carbon material and predicts its unique properties.
    The details were published in the journal Carbon.
    Scientists generally use mathematical models to predict the properties that might emerge when a material is changed in certain ways. Changing the geometry of three-dimensional (3D) graphene, which is made of networks of carbon atoms, by adding chemicals or introducing topological defects, can improve its catalytic properties, for example. But it has been difficult for scientists to understand why this happens exactly.
    The new mathematical model, called standard realization with repulsive interaction (SRRI), reveals the relationship between these changes and the properties that arise from them. It does this using less computational power than density functional theory (DFT), the model typically employed for this purpose, though at some cost in accuracy.
    With the SRRI model, the scientists have refined an existing model by explicitly representing the attractive and repulsive forces that exist between adjacent atoms in carbon-based materials. The SRRI model also takes into account two types of curvature in such materials: local curvatures and the mean curvature.
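    The paper's exact functional is not reproduced here, but an energy of the following general shape is consistent with that description: an attractive, standard-realization term acting along bonds, plus a repulsive interaction between non-bonded atoms. This is a schematic assumption, not the SRRI formula itself.

    ```latex
    % Schematic energy; an assumed form, not the paper's SRRI functional.
    E(x) = \sum_{(u,v) \in \mathrm{bonds}} \lVert x_u - x_v \rVert^2
         + \sum_{\{u,v\} \notin \mathrm{bonds}} V_{\mathrm{rep}}\!\left(\lVert x_u - x_v \rVert\right)
    ```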
    The researchers, led by Tohoku University mathematician Motoko Kotani, used their model to predict the catalytic properties that would arise when local curvatures and dopants were introduced into 3D graphene. Their results were similar to those produced by the DFT model.
    “The SRRI model showed a qualitative agreement with DFT calculations, and is able to screen through potential materials roughly one billion times faster than DFT,” says Kotani.
    The team next fabricated the material and determined its properties using scanning electrochemical cell microscopy. This method can show a direct link between the material’s geometry and its catalytic activity. It revealed that the catalytically active sites are on the local curvatures.
    “Our mathematical model can be used as an effective pre-screening tool for exploring new 2D and 3D carbon materials for unique properties before applying DFT modelling,” says Kotani. “This shows the importance of mathematics in accelerating material design.”
    The team next plans to use their model to look for links between the design of a material and its mechanical and electron transport properties.
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.

  • Projecting bond properties with machine learning

    Designing materials that have the necessary properties to fulfill specific functions is a challenge faced by researchers working in areas from catalysis to solar cells. To speed up development processes, modeling approaches can be used to predict material properties and guide refinements. Researchers from the Institute of Industrial Science, The University of Tokyo, have developed a machine learning model to determine characteristics of bonded and adsorbed materials based on parameters of the individual components. Their findings are published in Applied Physics Express.
    Factors such as the length and strength of bonds in materials play crucial roles in determining the structures and properties we experience on the macroscopic scale. The ability to easily predict these characteristics is therefore valuable when designing new materials.
    The density of states (DOS) is a parameter that can be calculated for individual atoms, molecules, and materials. Put simply, it describes the options available to the electrons that arrange themselves in a material. A modeling approach that can take this information for selected components and produce useful data for the desired product — with no need to make and analyze the material — is an attractive tool.
    The researchers used a machine learning approach — where the model refines its response without human intervention — to predict four different properties of products from the DOS information of the individual components. Although the DOS has been used as a descriptor to predict single parameters before, this is the first time multiple different properties have been predicted simultaneously.
    “We were able to quantitatively predict the binding energy, bond length, number of covalent electrons, and the Fermi energy after bonding for three different general types of system,” explains study first author Eiki Suzuki. “And our predictions were very accurate across all of the properties.”
    Because calculating the DOS of an isolated state is less complex than doing so for bonded systems, the analysis is relatively efficient. In addition, the neural network model used performed well even when only 20% of the dataset was used for training.
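    As a minimal sketch of this kind of mapping, the toy model below feeds the discretized DOS of two isolated components into a small multi-output regressor that predicts four properties of the bonded system. The feature sizes, network shape, and random placeholder data are all assumptions for illustration, not the authors' model.

    ```python
    # Toy DOS-to-properties regression (illustrative only).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_samples, n_bins = 200, 64                 # DOS sampled on an energy grid

    dos_a = rng.random((n_samples, n_bins))     # DOS of isolated component A
    dos_b = rng.random((n_samples, n_bins))     # DOS of isolated component B
    X = np.hstack([dos_a, dos_b])

    # Four targets per sample, standing in for: binding energy, bond length,
    # number of covalent electrons, and Fermi energy after bonding.
    y = rng.random((n_samples, 4))              # placeholder labels

    model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000)
    model.fit(X, y)                             # multi-output regression
    print(model.predict(X[:1]))                 # four predicted properties
    ```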
    “A significant advantage of our model is that it is general and can be applied to a wide variety of systems,” study corresponding author Teruyasu Mizoguchi explains. “We believe that our findings could make a significant contribution to numerous development processes, for example in catalysis, and could be particularly useful in newer research areas such as nano clusters and nanowires.”
    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

  • Mathematical models and computer simulations are the new frontiers in COVID-19 drug trials

    Researchers are using computer models to simulate COVID-19 infections on a cellular level — the basic structural level of the human body.
    The models allow for virtual trials of drugs and vaccines, opening the possibility of pre-assessing drug and vaccine efficacy against the virus.
    The research team at the University of Waterloo includes Anita Layton, professor of applied mathematics and Canada 150 Research Chair in mathematical biology and medicine, and Mehrshad Sadria, an applied mathematics PhD student.
    The team uses “in silico” experiments to replicate how the human immune system deals with the COVID-19 virus. In silico refers to trials situated in the silicon of computer chips, as opposed to “in vitro” or “in vivo” experiments, situated in test tubes or directly in living organisms.
    “It’s not that in-silico trials should replace clinical trials,” Layton said. “A model is a simplification, but it can help us whittle down the drugs for clinical trials. Clinical trials are expensive and can cost human lives. Using models helps narrow the drug candidates to the ones that are best for safety and efficacy.”
    The researchers, among the first groups to work on these models, were able to capture the results of different treatments that were used on COVID-19 patients in clinical trials. Their results are remarkably consistent with live data on COVID-19 infections and treatments.
    One example of a treatment used in the model was Remdesivir, a drug that was used in the World Health Organization’s global “solidarity” trials. The simulated model and the live trial both showed the drug to be biologically effective but clinically questionable, unless administered shortly after viral infection.
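    To give a sense of what such in-silico experiments look like, the sketch below runs a standard target-cell-limited model of within-host viral dynamics, with an antiviral that suppresses viral production once treatment starts. The equations and every parameter value are generic textbook assumptions, not the model or numbers from the Viruses paper.

    ```python
    # Generic within-host viral dynamics: target cells T, infected cells I,
    # free virus V, plus a drug of efficacy eps starting at day t_treat.
    from scipy.integrate import solve_ivp

    beta, delta, p, c = 3e-7, 0.5, 10.0, 5.0   # infection, death, production, clearance
    eps, t_treat = 0.9, 5.0                    # drug efficacy, treatment start (days)

    def rhs(t, y):
        T, I, V = y
        drug = eps if t >= t_treat else 0.0    # drug only acts after t_treat
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = (1.0 - drug) * p * I - c * V      # drug suppresses viral production
        return [dT, dI, dV]

    sol = solve_ivp(rhs, (0.0, 30.0), [1e7, 0.0, 10.0], max_step=0.1)
    print(f"peak viral load: {sol.y[2].max():.3g}")
    ```

    Re-running with a later t_treat reproduces the qualitative finding above: once the infection has peaked, suppressing production changes little.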
    The model might also work for current and future variants of concern. The researchers anticipate the virus will continue to undergo mutation, which could precipitate new waves of infection.
    “As we learn more about different variants of concern, we can change the model’s structure or parameters to simulate the interaction between the immune system and the variants,” Sadria said. “And we can then predict if we should apply the same treatments or even how the vaccines might work as well.”
    Layton and Sadria are part of a new team, led by researchers at the University Health Network (UHN), which recently received a rapid response grant on COVID-19 variants from the Canadian Institutes of Health Research.
    The UHN team will conduct experimental studies and modeling simulations to understand the spread of COVID variants in Canada.
    The study, “Modeling within-Host SARS-CoV-2 Infection Dynamics and Potential Treatments,” authored by Sadria and Layton, was recently published in the journal Viruses.
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  • Climate change may be leading to overcounts of endangered bonobos

    Climate change is interfering with how researchers count bonobos, possibly leading to gross overestimates of the endangered apes, a new study suggests.

    Like other great apes, bonobos build elevated nests out of tree branches and foliage to sleep in. Counts of these nests can be used to estimate numbers of bonobos — as long as researchers have a good idea of how long a nest sticks around before it’s broken down by the environment, what’s known as the nest decay time.

    New data on rainfall and bonobo nests show that the nests are persisting longer in the forests in Congo, from roughly 87 days, on average, in 2003–2007 to about 107 days in 2016–2018, largely as a result of declining precipitation. This increase in nests’ decay time could be dramatically skewing population counts of the endangered apes and imperiling conservation efforts, researchers report June 30 in PLOS ONE.

    “Imagine going in that forest … you count nests, but each single nest is around longer than it used to be 15 years ago, which means that you think that there are more bonobos than there really are,” says Barbara Fruth, a behavioral ecologist at the Max Planck Institute of Animal Behavior in Konstanz, Germany.

    Lowland tropical forests, south of the Congo River in Africa, are the only place in the world where bonobos (Pan paniscus) still live in the wild (SN: 3/18/21). Estimates suggest that there are at least 15,000 to 20,000 bonobos there. But there could be as many as 50,000 individuals. “The area of potential distribution is rather big, but there have been very few surveys,” Fruth says.

    From 2003 to 2007, and then again from 2016 to 2018, Fruth and colleagues followed wild bonobos in Congo’s LuiKotale rain forest, monitoring 1,511 nests. “The idea is that you follow [the bonobos] always,” says Mattia Bessone, a wildlife researcher at Liverpool John Moores University in England. “You need to be up early in the morning so that you can be at the spot where the bonobos have nested, in time for them to wake up, and then you follow them till they nest again.”

    In doing so, day after day, Fruth, Bessone and colleagues were first able to understand how many nests a bonobo builds in a day, what’s known as the nest construction rate. “It’s not necessarily one because sometimes bonobos build day nests,” Bessone says. On average, each bonobo builds 1.3 nests per day, the team found.

    Tracking how long these nests stuck around revealed that the structures were lasting an average of 19 days longer in 2016–2018 than in 2003–2007. The researchers also compiled 15 years of climate data for LuiKotale, which showed a decrease in average rainfall from 2003 to 2018. That decline in rainfall is linked to climate change, the researchers say, and helps explain why nests have become more resilient.

    These images show bonobo nests at different stages of decay. Knowing the time it takes for a nest to decay is crucial for estimating accurate bonobo numbers. © B. Fruth/MPI of Animal Behavior

    By counting the nests in an area and dividing that count by the product of the average nest decay time and the nest construction rate, scientists can estimate the number of bonobos in a region. But if researchers are using outdated, shorter nest decay times, those estimates could be severely off, overestimating bonobo counts by up to 50 percent, Bessone says.
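    The estimator is simple enough to write out. Here it is with the study's decay times and construction rate; the nest count itself is a hypothetical figure for illustration.

    ```python
    # Bonobo density estimate: nests / (decay time in days * nests per bonobo per day)
    nests = 1000                          # hypothetical number of nests counted
    build_rate = 1.3                      # nests built per bonobo per day (study value)
    old_decay, new_decay = 87.0, 107.0    # mean decay time, 2003-2007 vs 2016-2018

    est_old = nests / (old_decay * build_rate)
    est_new = nests / (new_decay * build_rate)
    print(f"with outdated decay time: {est_old:.1f} bonobos")
    print(f"with updated decay time:  {est_new:.1f} bonobos")
    print(f"overestimate: {100 * (est_old / est_new - 1):.0f}%")  # ~23% here
    ```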

    “The results are not surprising but also highlight how indirect (and therefore prone to errors) our methods of density estimates of many species are,” Martin Surbeck, a behavioral ecologist at Harvard University, wrote in an e-mail.

    Technologies such as camera traps can be used to directly count animals instead of using proxies like nests and are the way forward for animal population studies, researchers say. But until those methods become more common, nest counts remain vital for scientists’ understanding of bonobo numbers.

    This phenomenon is probably not limited to bonobos. All great apes build nests, and nest counts are used to estimate those animals’ numbers too. So, the researchers say, the new results could have implications for the conservation of primates far beyond bonobos.

  • Scientists create tool to explore billions of social media messages, potentially predict political and financial turmoil

    For thousands of years, people looked into the night sky with their naked eyes — and told stories about the few visible stars. Then we invented telescopes. In 1840, the philosopher Thomas Carlyle claimed that “the history of the world is but the biography of great men.” Then we started posting on Twitter.
    Now scientists have invented an instrument to peer deeply into the billions and billions of posts made on Twitter since 2008 — and have begun to uncover the vast galaxy of stories that they contain.
    “We call it the Storywrangler,” says Thayer Alshaabi, a doctoral student at the University of Vermont who co-led the new research. “It’s like a telescope to look — in real time — at all this data that people share on social media. We hope people will use it themselves, in the same way you might look up at the stars and ask your own questions.”
    The new tool can give an unprecedented, minute-by-minute view of popularity, from rising political movements to box office flops; from the staggering success of K-pop to signals of emerging new diseases.
    The story of the Storywrangler — a curation and analysis of over 150 billion tweets — and some of its key findings were published on July 16 in the journal Science Advances.
    EXPRESSIONS OF THE MANY
    The team of eight scientists who invented Storywrangler — from the University of Vermont, Charles River Analytics, and MassMutual Data Science — gathers about ten percent of all the tweets made every day, around the globe. For each day, they break these tweets into single words, as well as pairs and triplets, generating frequencies for more than a trillion words, hashtags, handles, symbols and emoji, like “Super Bowl,” “Black Lives Matter,” “gravitational waves,” “#metoo,” “coronavirus,” and “keto diet.”
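    A minimal sketch of that counting step is below, using a naive whitespace tokenizer; the real pipeline's handling of hashtags, handles, emoji, and languages is far more careful, and the tweets here are made up.

    ```python
    # Count 1-, 2-, and 3-grams across a day's worth of (toy) tweets.
    from collections import Counter

    tweets = [
        "gravitational waves detected again",
        "Black Lives Matter",
        "Super Bowl tonight",
    ]

    counts = {1: Counter(), 2: Counter(), 3: Counter()}
    for tweet in tweets:
        tokens = tweet.split()
        for n in (1, 2, 3):
            for i in range(len(tokens) - n + 1):
                counts[n][" ".join(tokens[i:i + n])] += 1

    print(counts[2].most_common(3))  # the day's most frequent word pairs
    ```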

  • Enabling the 'imagination' of artificial intelligence

    Imagine an orange cat. Now, imagine the same cat, but with coal-black fur. Now, imagine the cat strutting along the Great Wall of China. As you do this, a quick series of neuron activations in your brain comes up with variations of the picture presented, based on your previous knowledge of the world.
    In other words, as humans, it’s easy to envision an object with different attributes. But, despite advances in deep neural networks that match or surpass human performance in certain tasks, computers still struggle with the very human skill of “imagination.”
    Now, a USC research team has developed an AI that uses human-like capabilities to imagine a never-before-seen object with different attributes. The paper, titled “Zero-Shot Synthesis with Group-Supervised Learning,” was presented at the 2021 International Conference on Learning Representations on May 7.
    “We were inspired by human visual generalization capabilities to try to simulate human imagination in machines,” said the study’s lead author Yunhao Ge, a computer science PhD student working under the supervision of Laurent Itti, a computer science professor.
    “Humans can separate their learned knowledge by attributes — for instance, shape, pose, position, color — and then recombine them to imagine a new object. Our paper attempts to simulate this process using neural networks.”
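    A toy sketch of that recombination idea: if an encoder is trained to split its latent code into per-attribute chunks, a never-seen combination can be synthesized by mixing chunks from different inputs and decoding. The encoder, decoder, and dimensions below are stand-ins, not the paper's group-supervised learning setup.

    ```python
    # Swap one attribute's latent chunk between two inputs, then decode.
    import torch
    import torch.nn as nn

    latent_per_attr, n_attrs = 16, 3                  # e.g. shape, color, pose (assumed)
    enc = nn.Linear(784, latent_per_attr * n_attrs)   # toy encoder
    dec = nn.Linear(latent_per_attr * n_attrs, 784)   # toy decoder

    z_a = enc(torch.randn(1, 784)).view(1, n_attrs, latent_per_attr)
    z_b = enc(torch.randn(1, 784)).view(1, n_attrs, latent_per_attr)

    z_new = z_a.clone()
    z_new[:, 1] = z_b[:, 1]                 # take input B's "color" chunk, keep A's rest
    imagined = dec(z_new.view(1, -1))       # decode the never-seen combination
    print(imagined.shape)                   # torch.Size([1, 784])
    ```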
    AI’s generalization problem
    For instance, say you want to create an AI system that generates images of cars. Ideally, you would provide the algorithm with a few images of a car, and it would be able to generate many types of cars — from Porsches to Pontiacs to pick-up trucks — in any color, from multiple angles.

  • Air-powered computer memory helps soft robot control movements

    Engineers at UC Riverside have unveiled an air-powered computer memory that can be used to control soft robots. The innovation overcomes one of the biggest obstacles to advancing soft robotics: the fundamental mismatch between pneumatics and electronics. The work is published in the open-access journal PLOS ONE.
    Pneumatic soft robots use pressurized air to move soft, rubbery limbs and grippers and are superior to traditional rigid robots for performing delicate tasks. They are also safer for humans to be around. Baymax, the healthcare companion robot in the 2014 animated Disney film Big Hero 6, is a pneumatic robot for good reason.
    But existing systems for controlling pneumatic soft robots still use electronic valves and computers to maintain the position of the robot’s moving parts. These electronic parts add considerable cost, size, and power demands to soft robots, limiting their feasibility.
    To advance soft robotics toward the future, a team led by bioengineering doctoral student Shane Hoang; his advisor, bioengineering professor William Grover; computer science professor Philip Brisk; and mechanical engineering professor Konstantinos Karydis looked back to the past.
    “Pneumatic logic” predates electronic computers and once provided advanced levels of control in a variety of products, from thermostats and other components of climate control systems to player pianos in the early 1900s. In pneumatic logic, air, not electricity, flows through circuits or channels and air pressure is used to represent on/off or true/false. In modern computers, these logical states are represented by 1 and 0 in code to trigger or end electrical charges.
    Pneumatic soft robots need a way to remember and maintain the positions of their moving parts. The researchers realized that if they could create a pneumatic logic “memory” for a soft robot, they could eliminate the electronic memory currently used for that purpose.
    The researchers made their pneumatic random-access memory, or RAM, chip using microfluidic valves instead of electronic transistors. The microfluidic valves were originally designed to control the flow of liquids on microfluidic chips, but they can also control the flow of air. The valves remain sealed against a pressure differential even when disconnected from an air supply line, creating trapped pressure differentials that function as memories and maintain the states of a robot’s actuators. Dense arrays of these valves can perform advanced operations and reduce the expensive, bulky, and power-consuming electronic hardware typically used to control pneumatic robots.
    After modifying the microfluidic valves to handle larger air flow rates, the team produced an 8-bit pneumatic RAM chip able to control larger and faster-moving soft robots, and incorporated it into a pair of 3D-printed rubber hands. The pneumatic RAM uses atmospheric-pressure air to represent a “0” or FALSE value, and vacuum to represent a “1” or TRUE value. The soft robotic fingers are extended when connected to atmospheric pressure and contracted when connected to vacuum.
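    A toy software model of that encoding is below; it is purely illustrative, since the actual memory is an array of microfluidic valves holding trapped pressures, not code.

    ```python
    # Pneumatic RAM as described: atmospheric pressure = 0/FALSE (finger
    # extended), vacuum = 1/TRUE (finger contracted).
    from enum import Enum

    class Pressure(Enum):
        ATMOSPHERIC = 0   # FALSE -> actuator extended
        VACUUM = 1        # TRUE  -> actuator contracted

    class PneumaticRAM:
        def __init__(self, bits=8):
            # All channels start at atmospheric pressure
            self.state = [Pressure.ATMOSPHERIC] * bits

        def write(self, bit, value):
            # A sealed valve traps the pressure differential, so the bit
            # holds its state even without a connected supply line.
            self.state[bit] = Pressure.VACUUM if value else Pressure.ATMOSPHERIC

        def fingers(self):
            return ["contracted" if s is Pressure.VACUUM else "extended"
                    for s in self.state]

    ram = PneumaticRAM()
    ram.write(0, 1)       # pull vacuum on channel 0
    print(ram.fingers())
    ```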
    By varying the combinations of atmospheric pressure and vacuum within the channels on the RAM chip, the researchers were able to make the robot play notes, chords, and even a whole song — “Mary Had a Little Lamb” — on a piano.
    In theory, this system could be used to operate other robots without any electronic hardware and only a battery-powered pump to create a vacuum. The researchers note that without positive pressure anywhere in the system — only normal atmospheric air pressure — there is no risk of accidental overpressurization and violent failure of the robot or its control system. Robots using this technology would be especially safe for delicate use on or around humans, such as wearable devices for infants with motor impairments.
    Story Source:
    Materials provided by University of California – Riverside. Original written by Holly Ober. Note: Content may be edited for style and length.