How network pruning can skew deep learning models
Computer science researchers have demonstrated that a widely used technique called neural network pruning can adversely affect the performance of deep learning models, detailed what causes these performance problems, and proposed a technique for addressing the challenge.
Deep learning is a type of artificial intelligence that can be used to classify things, such as images, text or sound. For example, it can be used to identify individuals based on facial images. However, deep learning models often require a lot of computing resources to operate, which poses challenges when a deep learning model is deployed for some applications.
To address these challenges, some systems engage in “neural network pruning.” This effectively makes the deep learning model more compact and, therefore, able to operate while using fewer computing resources.
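To give a concrete sense of what pruning looks like in practice, here is a minimal sketch using PyTorch's built-in pruning utilities. The small example network, the choice of L1 magnitude pruning and the 30% pruning amount are illustrative assumptions, not the specific setup used in the study.

```python
# Minimal sketch of magnitude-based (L1) pruning with PyTorch's built-in
# utilities. The architecture and the 30% pruning amount are illustrative
# choices, not those used in the paper.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small example classifier (a stand-in for a larger backbone network).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 30% of weights with the smallest magnitude in each linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# The pruned model keeps the same interface but has far fewer nonzero weights.
zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"Fraction of zero weights after pruning: {zeros / total:.2f}")
```

Because most of the pruned weights are zero, the model can be stored and, with sparse-aware hardware or software, run using fewer computing resources.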
“However, our research shows that this network pruning can impair the ability of deep learning models to identify some groups,” says Jung-Eun Kim, co-author of a paper on the work and an assistant professor of computer science at North Carolina State University.
“For example, if a security system uses deep learning to scan people’s faces in order to determine whether they have access to a building, the deep learning model would have to be made compact so that it can operate efficiently. This may work fine most of the time, but the network pruning could also affect the deep learning model’s ability to identify some faces.”
In their new paper, the researchers lay out why network pruning can adversely affect the performance of the model at identifying certain groups — which the literature calls “minority groups” — and demonstrate a new technique for addressing these challenges.
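One hedged way to observe the kind of disparity described here, assuming group labels are available in the evaluation data, is to compare accuracy for each group before and after pruning. The `model`, `pruned_model` and data loader names below are hypothetical placeholders, not artifacts from the paper.

```python
# Hypothetical sketch: compare per-group accuracy before and after pruning.
# The data loader is assumed to yield (inputs, labels, groups) batches.
import torch
from collections import defaultdict

def per_group_accuracy(model, data_loader):
    """Return classification accuracy broken down by group label."""
    correct, total = defaultdict(int), defaultdict(int)
    model.eval()
    with torch.no_grad():
        for inputs, labels, groups in data_loader:
            preds = model(inputs).argmax(dim=1)
            for pred, label, group in zip(preds, labels, groups):
                total[group.item()] += 1
                correct[group.item()] += int(pred.item() == label.item())
    return {g: correct[g] / total[g] for g in total}

# A gap that appears (or widens) only after pruning is the disparate impact
# the researchers describe: overall accuracy can stay high while accuracy
# for a minority group drops.
# before = per_group_accuracy(model, test_loader)
# after = per_group_accuracy(pruned_model, test_loader)
```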