Sparse coding has proven extraordinarily powerful in extracting critical information from big data. Its main advantages over deep learning are three-fold: 1) the model can be well trained and achieve excellent performance even when training data are scarce; 2) the computational complexity is much lower, which allows real-time processing; and 3) the results and the optimization process are interpretable and controllable. Accordingly, sparse coding is gaining increasing attention for practical implementation. In this report, we propose a secure machine learning technique that enables sparse modeling directly on encrypted data. In this way, when we resort to a hierarchically distributed computing structure, consisting of edge and cloud servers, for accelerated computation, privacy is well preserved.
With the advance of big data, the Internet of Things (IoT), and Artificial Intelligence (AI), both the quality and the quantity of data are growing astonishingly. However, much of the data may be redundant, and it is therefore necessary to analyze and pre-process the data beforehand. Owing to the huge computational demand of big data, edge computing and cloud computing are becoming popular. This, however, can lead to serious privacy concerns, especially when manipulation of the original data at the edge or cloud is allowed, since the information could be collected and misused by a third party without permission.
Extracting essential information from big data with far fewer measurements than traditionally assumed is possible. Observing that sparsity exists in many circumstances (e.g., the primary visual cortex of human beings can be mathematically modeled as a sparse representation problem), sparse coding has proven to be a powerful tool in a wide range of application fields, especially signal processing (image and voice signals) and machine learning (pattern recognition of biological signals). The learning process of sparse coding can be accomplished with the K-SVD algorithm, and the testing process with the OMP or LASSO algorithms. In the proposed secure sparse coding, we adopt a random unitary transform for encryption, which enables the training and testing processes to be performed directly in the encrypted domain. In addition, we prove both theoretically and through simulation that the results of the two processes remain the same despite the encryption. Potential application scenarios include secure data compression and secure pattern recognition. With a smaller amount of training data and fewer computational resources, it is possible to achieve performance equivalent to that of deep learning-based algorithms. Furthermore, we establish a secure, distributed computing structure consisting of cloud, edge, and user devices: user devices upload encrypted data to the edge server for training, and the results are then combined at the cloud server to improve compression and recognition performance.
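The invariance that makes encrypted-domain sparse coding work can be illustrated numerically: with a random unitary key Q, both the dictionary and the signal are transformed, and the quantities that OMP and LASSO rely on (atom correlations and least-squares fits over a support) are unchanged. The sketch below assumes numpy; the variable names and matrix sizes are illustrative, not from the report.

```python
import numpy as np

# Illustrative sketch: sparse coding quantities are invariant under a
# shared random unitary transform (the encryption key).
rng = np.random.default_rng(1)
n, K = 16, 32
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)           # dictionary with unit-norm atoms
y = rng.standard_normal(n)               # plaintext signal

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random unitary key
De, ye = Q @ D, Q @ y                              # encrypted dictionary/signal

# Atom correlations (the OMP selection criterion) are identical,
# because (QD)^T (Qy) = D^T Q^T Q y = D^T y.
print(np.allclose(D.T @ y, De.T @ ye))   # True

# Least-squares coefficients over any candidate support are identical.
S = [0, 5, 9]
c_plain, *_ = np.linalg.lstsq(D[:, S], y, rcond=None)
c_enc, *_ = np.linalg.lstsq(De[:, S], ye, rcond=None)
print(np.allclose(c_plain, c_enc))       # True
```

Since every step of OMP reduces to these two operations, the sparse code computed on the encrypted pair (De, ye) matches the plaintext code exactly.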
Random unitary transform
In the random unitary matrix, each element is randomly generated and the matrix is orthogonalized by Gram-Schmidt orthogonalization; the resulting unitary transform preserves Euclidean distances and inner products before and after the transform.
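A minimal sketch of this construction, assuming numpy: QR decomposition of a random Gaussian matrix is numerically equivalent to Gram-Schmidt orthogonalization of its columns, and the distance/inner-product preservation can be checked directly. The function name and seed are illustrative.

```python
import numpy as np

def random_unitary(n, seed=0):
    """Generate an n x n random orthogonal (real unitary) matrix by
    QR-decomposing a random Gaussian matrix, equivalent to Gram-Schmidt
    orthogonalization of its columns."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(A)
    # Fix column signs so the result is uniformly distributed.
    return Q * np.sign(np.diag(R))

# Encrypt two signals with the same unitary key.
Q = random_unitary(8, seed=42)
x = np.arange(8.0)
y = np.ones(8)
ex, ey = Q @ x, Q @ y

# Inner products and Euclidean distances are preserved.
print(np.allclose(x @ y, ex @ ey))                                   # True
print(np.allclose(np.linalg.norm(x - y), np.linalg.norm(ex - ey)))   # True
```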
K-SVD (K-Singular Value Decomposition)
A typical training algorithm for sparse modeling, which extracts essential features from big data and forms a feature dictionary for sparse coding. The columns of the dictionary are called atoms, which are the most basic elements used to represent the data.
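The core of K-SVD is its atom-by-atom dictionary update: for each atom, the residual of the signals that use it is approximated by a rank-1 SVD, which simultaneously refines the atom and its coefficients. The sketch below, assuming numpy, shows a single such update step; the function name and argument layout are illustrative, and the sparse-coding stage (OMP) that precedes it in the full algorithm is omitted.

```python
import numpy as np

def ksvd_atom_update(Y, D, X, k):
    """One K-SVD dictionary-update step for atom k (illustrative sketch).
    Y: data matrix (n x m), D: dictionary (n x K), X: sparse codes (K x m).
    Returns D and X with atom k and its code row updated."""
    users = np.flatnonzero(X[k, :])      # signals that actually use atom k
    if users.size == 0:
        return D, X
    # Residual of those signals with atom k's own contribution restored.
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    # Best rank-1 approximation of E gives the new atom and coefficients.
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                    # updated atom (unit norm)
    X[k, users] = s[0] * Vt[0, :]        # updated coefficients
    return D, X
```

Because the rank-1 SVD minimizes the Frobenius-norm error of the residual, each update can only decrease (or keep) the overall reconstruction error.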
OMP (Orthogonal Matching Pursuit)
A typical testing algorithm for sparse modeling, which solves the optimization problem under an L0-norm regularization constraint.
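OMP handles the L0 constraint greedily: it repeatedly picks the atom most correlated with the current residual, then re-fits all selected atoms by least squares. A minimal sketch assuming numpy and unit-norm dictionary atoms (function name and stopping rule are illustrative):

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal Matching Pursuit (illustrative sketch).
    Greedily selects columns of D that best correlate with the residual,
    then re-fits the coefficients over the support by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

For example, with an orthonormal dictionary and a 2-sparse signal, `omp(D, y, 2)` recovers the signal exactly.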
LASSO (Least Absolute Shrinkage and Selection Operator)
A typical testing algorithm for sparse modeling, which solves the optimization problem under an L1-norm regularization constraint.
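The L1-regularized problem min_x 0.5*||y - Dx||^2 + lam*||x||_1 can be solved by several methods; the sketch below uses iterative soft-thresholding (ISTA), a simple proximal-gradient solver, as one illustrative choice (function name and iteration count are assumptions, not from the report).

```python
import numpy as np

def ista(D, y, lam, n_iter=500):
    """Solve the LASSO problem min_x 0.5*||y - D x||^2 + lam*||x||_1
    by iterative soft-thresholding (ISTA) -- an illustrative sketch."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)              # gradient of the smooth term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

The soft-thresholding step is what produces sparsity: coefficients whose magnitude falls below the threshold are set exactly to zero, e.g. `ista(np.eye(3), np.array([3.0, -0.05, 1.0]), lam=0.1)` yields `[2.9, 0.0, 0.9]`.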
NTT Network Innovation Laboratories - Frontier Communication Laboratory