12/11/2020

    Reliable and Distributed Media Processing Technology based on Secured Sparse Coding
    NTT Network Innovation Laboratories

    Overview

    Sparse coding is extraordinarily powerful at extracting critical information from big data. Its main advantages over deep learning are three-fold: 1) the model can be trained well and achieve excellent performance even when training data are scarce; 2) the computational complexity is much lower, which allows real-time processing; and 3) the results and the optimization process are interpretable and controllable. Sparse coding is therefore gaining increasing attention for practical deployment. In this report, we propose a secured machine learning technique that enables sparse modeling directly on encrypted data. As a result, privacy is preserved even when computation is offloaded to a hierarchically distributed structure of edge and cloud servers for acceleration.

    Background and existing issues

    With the advance of big data, the Internet of Things (IoT), and Artificial Intelligence (AI), both the quantity and the quality of data content are growing at an astonishing pace. Much of this data is redundant, however, so it must be analyzed and pre-processed beforehand. Because of the huge computational demand of big data, edge computing and cloud computing are becoming popular. This raises serious privacy concerns, especially when the edge/cloud is allowed to manipulate the original data, since a third party could collect and misuse the information without permission.

    Advantages of this technology

    • Directly compresses or recognizes encrypted multimedia data, such as images.
    • Interpretable machine learning with explainable results.
    • Can be trained well and achieve excellent performance with small amounts of data.

    Use cases

    • Because privacy is preserved, reliable cloud services, such as SNS services, can be provided to customers. In addition, customized services become possible, as the technique works with small amounts of data.
    • The technique is also useful in public services such as surveillance cameras. Capturing and recording face images raises significant privacy concerns, but by performing face recognition directly on the encrypted images, privacy can be preserved while still contributing to public safety.

    Explanatory chart

    Technical explanation

    Extracting essential information from big data with far fewer measurements than traditionally assumed is possible. Sparsity appears in many circumstances; for example, the primary visual cortex of human beings can be mathematically modeled as a sparse representation problem. Building on this observation, sparse coding has proven to be a powerful tool across a wide range of application fields, especially signal processing (image and voice signals) and machine learning (pattern recognition of biological signals). The learning (training) process of sparse coding can be carried out with the K-SVD algorithm, and the testing (inference) process with the OMP or LASSO algorithms.

    In the proposed secured sparse coding, we adopt a random unitary transform for encryption, which allows both training and testing to be performed directly in the encrypted domain. It is shown both theoretically and by simulation that the results of the two processes remain the same despite the encryption. Potential application scenarios include secured data compression and secured pattern recognition. With less training data and fewer computational resources, the technique achieves performance equivalent to deep learning-based algorithms. We further establish a secured and distributed computing structure consisting of cloud, edge, and user devices: user devices upload encrypted data to the edge servers for training, and the results are then combined at the cloud server to improve the performance of compression and recognition.
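
    As a concrete illustration of why the results do not change, the following sketch encrypts a dictionary and an observation with the same random orthogonal matrix and runs OMP in both the plaintext and the encrypted domain. It is a minimal example, not NTT's implementation; NumPy and scikit-learn are used here purely as illustrative assumptions.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        n, K, k = 64, 128, 5                        # signal dim., number of atoms, sparsity

        # Plaintext dictionary with unit-norm atoms and a k-sparse test signal
        D = rng.standard_normal((n, K))
        D /= np.linalg.norm(D, axis=0)
        x_true = np.zeros(K)
        x_true[rng.choice(K, k, replace=False)] = rng.standard_normal(k)
        y = D @ x_true

        # Random unitary key (real orthogonal here), obtained by orthogonalizing a
        # random Gaussian matrix; QR factorization plays the role of Gram-Schmidt
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

        # Encrypt the dictionary and the observation with the same key
        D_enc, y_enc = Q @ D, Q @ y

        # Sparse coding in the plaintext and in the encrypted domain
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        x_plain = omp.fit(D, y).coef_.copy()
        x_enc = omp.fit(D_enc, y_enc).coef_

        # Q preserves inner products and Euclidean norms, so the codes coincide
        print(np.allclose(x_plain, x_enc))          # True (up to floating-point error)

    Because the unitary key preserves inner products and Euclidean norms, the atom-selection and least-squares steps of OMP behave identically in the plaintext and encrypted domains.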

    Glossary

    Random unitary transform
    A transform defined by a random unitary matrix whose elements are generated via Gram-Schmidt orthogonalization of randomly drawn vectors. Euclidean distances and inner products do not change before/after the transform.
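
    A minimal numerical check of this invariance is sketched below (an illustrative example using NumPy; the QR factorization stands in for Gram-Schmidt orthogonalization):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 16

        # Orthogonalize a random Gaussian matrix; the Q factor is a random
        # orthogonal (real unitary) matrix
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

        u, v = rng.standard_normal(n), rng.standard_normal(n)

        # Inner products and Euclidean distances are invariant under Q
        print(np.isclose(u @ v, (Q @ u) @ (Q @ v)))                              # True
        print(np.isclose(np.linalg.norm(u - v), np.linalg.norm(Q @ u - Q @ v)))  # True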

    K-SVD (K-Singular Value Decomposition)
    A typical training algorithm for sparse modeling, which extracts essential features from big data and builds a feature dictionary for sparse coding. The columns of the dictionary are called atoms; they are the basic elements from which the data are composed.
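
    The iteration can be sketched compactly as follows (a simplified illustration rather than a production implementation; scikit-learn's orthogonal_mp is assumed for the sparse coding step, and the data are synthetic):

        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
            """Simplified K-SVD: alternate OMP sparse coding and per-atom SVD updates."""
            rng = np.random.default_rng(seed)
            n, _ = Y.shape
            D = rng.standard_normal((n, n_atoms))      # random initial dictionary
            D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
            for _ in range(n_iter):
                # Sparse coding step: columns of X are the sparse codes of Y's columns
                X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
                # Dictionary update step: refit each atom by a rank-1 SVD of the
                # residual restricted to the signals that actually use that atom
                for j in range(n_atoms):
                    users = np.flatnonzero(X[j])
                    if users.size == 0:
                        continue
                    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
                    U, s, Vt = np.linalg.svd(E, full_matrices=False)
                    D[:, j] = U[:, 0]
                    X[j, users] = s[0] * Vt[0]
            return D, X

        # Example: learn 32 atoms from 500 synthetic 16-dimensional signals
        Y = np.random.default_rng(2).standard_normal((16, 500))
        D, X = ksvd(Y, n_atoms=32, sparsity=3)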

    OMP (Orthogonal Matching Pursuit)
    A typical testing algorithm for sparse modeling, which greedily solves the optimization problem under an L0-norm constraint.
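
    An illustrative call on hypothetical toy data, assuming scikit-learn's implementation:

        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        rng = np.random.default_rng(3)
        D = rng.standard_normal((64, 256))           # dictionary with unit-norm atoms
        D /= np.linalg.norm(D, axis=0)
        x = np.zeros(256)
        x[rng.choice(256, 4, replace=False)] = 1.0   # 4-sparse ground-truth code
        y = D @ x

        # Greedy solution under the L0 constraint ||x||_0 <= 4
        x_hat = orthogonal_mp(D, y, n_nonzero_coefs=4)
        print(np.flatnonzero(x_hat))                 # typically the true 4-atom support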

    LASSO (Least Absolute Shrinkage and Selection Operator)
    A typical testing algorithm for sparse modeling, which solves the optimization problem with an L1-norm regularization term.
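
    The same kind of toy problem solved with an L1 penalty instead of an L0 constraint (again an illustrative sketch assuming scikit-learn):

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(4)
        D = rng.standard_normal((64, 256))           # dictionary with unit-norm atoms
        D /= np.linalg.norm(D, axis=0)
        x = np.zeros(256)
        x[rng.choice(256, 4, replace=False)] = rng.uniform(1.0, 2.0, 4)
        y = D @ x

        # L1-regularized least squares: min (1/2n)||y - Dx||_2^2 + alpha * ||x||_1
        lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
        x_hat = lasso.fit(D, y).coef_
        print(np.flatnonzero(np.abs(x_hat) > 1e-6))  # support of the recovered code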

    Department in charge

    NTT Network Innovation Laboratories - Frontier Communication Laboratory
