Title:
ADVERSARIAL PROBABILISTIC REGULARIZATION
Document Type and Number:
WIPO Patent Application WO/2019/162364
Kind Code:
A1
Abstract:
A method of training a supervised neural network to solve an optimization problem that involves minimizing an error function f(θ), where θ is a vector of independent and identically distributed (i.i.d.) samples of a target distribution ℒt, is proposed. The method includes generating an adversarial probabilistic regularizer (APR) φ_ℒt(θ) using a discriminator of a generative adversarial network. The discriminator receives samples from θ and samples from a regularizer distribution p_r as inputs. The APR φ_ℒt(θ) is then added to the error function f(θ) for each training iteration of the supervised neural network.

Inventors:
SUN XIAOXIA (US)
SHAH MOHAK (US)
KURUP UNMESH (US)
SUN JU (US)
Application Number:
PCT/EP2019/054286
Publication Date:
August 29, 2019
Filing Date:
February 21, 2019
Assignee:
BOSCH GMBH ROBERT (DE)
International Classes:
G06N3/04; G06N3/08; G06N5/00
Other References:
J. ZHAO ET AL: "Adversarially regularized autoencoders", ARXIV:1706.04223V2, 15 November 2017 (2017-11-15), XP055542165, Retrieved from the Internet [retrieved on 20190521]
R. FATHONY ET AL: "Discrete Wasserstein generative adversarial networks (DWGAN)", OPENREVIEW.NET: REV. 13 FEB. 2018, 13 February 2018 (2018-02-13), XP055590559, Retrieved from the Internet [retrieved on 20190521]
Y. BAI ET AL: "ProxQuant: quantized neural networks via proximal operators", ARXIV:1810.00861V2, 8 October 2018 (2018-10-08), XP080928782, Retrieved from the Internet [retrieved on 20190521]
Claims:
CLAIMS

What is claimed is:

1. A method of training a supervised neural network to solve an optimization problem, the optimization problem involving minimizing an error function f(θ) where θ is a vector of independent and identically distributed (i.i.d.) samples of a target distribution ℒt, the method comprising:

generating an adversarial probabilistic regularizer (APR) φ_ℒt(θ) using a discriminator of a generative adversarial network, the discriminator receiving samples from θ and samples from a regularizer distribution p_r as inputs; and

adding the APR φ_ℒt(θ) to the error function f(θ) for each training iteration of the supervised neural network.

2. The method of claim 1, wherein the target distribution ℒt is a discrete distribution.

3. The method of claim 1, wherein the optimization problem is given by

min_θ f(θ) + λ·φ_ℒt(θ),

wherein λ is a scaling coefficient.

4. The method of claim 3, wherein the APR φ_ℒt(θ) is given by

φ_ℒt(θ) = max_w E_{θ'~ℒt}[ψ(θ'; w)] - (1/n) Σ_{i=1}^n ψ(θ_i; w),

wherein ψ represents a deep neural network, and

wherein the optimization problem is given by

min_θ max_w f(θ) + λ·(E_{θ'~ℒt}[ψ(θ'; w)] - (1/n) Σ_{i=1}^n ψ(θ_i; w))

after the APR φ_ℒt(θ) is substituted into the optimization problem.

5. The method of claim 4, wherein the error function is given by

f(θ) = E_{(x,y)~ℒD}[ℓ((x, y); θ)],

wherein data-label pairs (x, y) ~ ℒD, wherein ℓ is a loss function, and wherein the optimization problem is given by

min_θ max_w E_{(x,y)~ℒD}[ℓ((x, y); θ)] + λ·(E_{θ'~ℒt}[ψ(θ'; w)] - (1/n) Σ_{i=1}^n ψ(θ_i; w))

after the error function f(θ) is substituted into the optimization problem.

6. The method of claim 2, wherein the discrete distribution is a binary distribution.

7. The method of claim 6, wherein the target distribution is set to

p(θ = 1) = p(θ = -1) = 1/2.

8. The method of claim 2, wherein the discrete distribution is a ternary distribution.

9. The method of claim 8, wherein the target distribution is set to

p(θ = 1) = p(θ = -1) = p/2, p(θ = 0) = 1 - p.
10. A neural network training system comprising:

a non-transitory computer readable storage medium storing programmed instructions; and

a processor configured to execute the programmed instructions,

wherein the programmed instructions include instructions which, when executed by the processor, cause the processor to perform a method of training a supervised neural network to solve an optimization problem, the optimization problem involving minimizing an error function f(θ) where θ is a vector of independent and identically distributed (i.i.d.) samples of a target distribution ℒt, the method comprising:

generating an adversarial probabilistic regularizer (APR) φ_ℒt(θ) using a discriminator of a generative adversarial network, the discriminator receiving samples from θ and samples from a regularizer distribution p_r as inputs; and

adding the APR φ_ℒt(θ) to the error function f(θ) for each training iteration of the supervised neural network.

11. The system of claim 10, wherein the target distribution ℒt is a discrete distribution.

12. The system of claim 10, wherein the optimization problem is given by

min_θ f(θ) + λ·φ_ℒt(θ),

wherein λ is a scaling coefficient.

13. The system of claim 12, wherein the APR φ_ℒt(θ) is given by

φ_ℒt(θ) = max_w E_{θ'~ℒt}[ψ(θ'; w)] - (1/n) Σ_{i=1}^n ψ(θ_i; w),

wherein ψ represents a deep neural network, and

wherein the optimization problem is given by

min_θ max_w f(θ) + λ·(E_{θ'~ℒt}[ψ(θ'; w)] - (1/n) Σ_{i=1}^n ψ(θ_i; w))

after the APR φ_ℒt(θ) is substituted into the optimization problem.

14. The system of claim 13, wherein the error function is given by

f(θ) = E_{(x,y)~ℒD}[ℓ((x, y); θ)],

wherein data-label pairs (x, y) ~ ℒD, wherein ℓ is a loss function, and wherein the optimization problem is given by

min_θ max_w E_{(x,y)~ℒD}[ℓ((x, y); θ)] + λ·(E_{θ'~ℒt}[ψ(θ'; w)] - (1/n) Σ_{i=1}^n ψ(θ_i; w))

after the error function f(θ) is substituted into the optimization problem.

15. The system of claim 11, wherein the discrete distribution is a binary distribution.

16. The system of claim 15, wherein the target distribution is set to

p(θ = 1) = p(θ = -1) = 1/2.

17. The system of claim 11, wherein the discrete distribution is a ternary distribution.

18. The system of claim 17, wherein the target distribution is set to

p(θ = 1) = p(θ = -1) = p/2, p(θ = 0) = 1 - p.

Description:
ADVERSARIAL PROBABILISTIC REGULARIZATION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Application Serial No. 62/634,332, entitled "ADVERSARIAL PROBABILISTIC REGULARIZATION" by Sun et al., filed February 23, 2018, the disclosure of which is hereby incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] This disclosure relates generally to neural networks, and, in particular, to training neural networks.

BACKGROUND

[0003] Many problems in machine learning involve solving an optimization problem in the conceptual form

min_θ f(θ), s.t. θ ~ i.i.d. ℒt. (1)

Here ℒt is a target distribution. Two examples which involve this optimization problem are sparse regression and supervised neural networks. For sparse regression, f(θ) is the data-fitting error (error function), and ℒt is a distribution that favors a sparse or compressible θ (e.g., Bernoulli-Subgaussian or Laplacian). For supervised neural networks, f(θ) is the training (i.e., data-fitting) error, and ℒt promotes certain structures on the network weights θ. For example, ℒt could be a Gaussian that ensures the weight distribution is "democratic". A more interesting case in practice is when ℒt is a discrete distribution, say binary on {+1, -1} or ternary on {+1, 0, -1}; these distributions lead to compact (i.e., quantized and sparse) networks that are efficient in inference, desirable for hardware implementation, and also robust to adversarial examples.

[0004] This disclosure is focused primarily on training compact supervised neural networks for solving problems of the above form (1). In order to turn form (1) into a concrete computational problem, a regularized version of form (1) is considered:

min_θ f(θ) + λ·φ_ℒt(θ). (2)

Here, the coordinates of θ are treated as i.i.d. (independent and identically distributed) samples of a target distribution ℒt, and a small φ_ℒt(θ) amounts to closeness of the empirical distribution of the coordinates of θ to ℒt. For the purposes of this disclosure, φ_ℒt(θ) is referred to as a probabilistic regularizer. The tunable parameter λ > 0 controls the relative strength of the regularizer with respect to f(θ).

[0005] Given ℒt, it is natural to choose φ_ℒt(θ) as certain monotone functions of the probability density function (PDF), similar to how priors are encoded in Bayesian inference. Two challenges stand out: (i) a general probability distribution may not have a density function, or even if it does, the density function may not be in any closed form; (ii) the density function may be discontinuous; in particular, the discrete distributions of primary interest here have discretely supported PDFs. To optimize (2) in large-scale settings using derivative-based methods or other scalable methods, considerable analytic and design effort is needed to tackle these two challenges.

[0006] Another natural choice is to make φ_ℒt(θ) the discrepancy between the empirical moments of the coordinate distribution and those of the target ℒt, i.e., a moment-matching method. This approach tends to cause significant computational burden due to the moment calculations, and it is also not suitable for distributions with unbounded moments (e.g., heavy-tailed distributions).

SUMMARY

[0007] According to one embodiment of the present disclosure, a method of training a supervised neural network to solve an optimization problem that involves minimizing an error function f(θ), where θ is a vector of independent and identically distributed (i.i.d.) samples of a target distribution ℒt, is proposed. The method includes generating an adversarial probabilistic regularizer (APR) φ_ℒt(θ) using a discriminator of a generative adversarial network. The discriminator receives samples from θ and samples from a regularizer distribution p_r as inputs. The APR φ_ℒt(θ) is then added to the error function f(θ) for each training iteration of the supervised neural network.

[0008] According to another embodiment of the present disclosure, a neural network training system is provided that includes a memory for storing programmed instructions and a processor configured to execute the programmed instructions. The programmed instructions include instructions which, when executed by the processor, cause the processor to perform a method of training a supervised neural network to solve an optimization problem that involves minimizing an error function f(θ), where θ is a vector of independent and identically distributed (i.i.d.) samples of a target distribution ℒt. The method includes generating an adversarial probabilistic regularizer (APR) φ_ℒt(θ) using a discriminator of a generative adversarial network. The discriminator receives samples from θ and samples from a regularizer distribution p_r as inputs. The APR φ_ℒt(θ) is then added to the error function f(θ) for each training iteration of the supervised neural network.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] FIG. 1 is a schematic illustration of a neural network training system according to the present disclosure.

[0010] FIG. 2 depicts an algorithm for generating an adversarial probabilistic regularizer (APR).

[0011] FIG. 3 shows a table that compares APR and GMM-regularized networks.

[0012] FIG. 4 shows histograms of weights for each layer of LeNet-5.

[0013] FIG. 5 depicts the evolution of the weight distribution at the end of epochs 1, 10, 50, 100 and 400 for training ResNet-44 on CIFAR-10.

[0014] FIG. 6 shows a table of the classification error of binary and ternary networks.

[0015] FIG. 7 shows the learning curve for training ResNet-20 with ternary weights.

[0016] FIG. 8 is a schematic illustration of a computing device for implementing the framework described herein.

DETAILED DESCRIPTION

[0017] For the purposes of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiments illustrated in the drawings and described in the following written specification. It is understood that no limitation to the scope of the disclosure is thereby intended. It is further understood that the present disclosure includes any alterations and modifications to the illustrated embodiments and includes further applications of the principles of the disclosure as would normally occur to a person of ordinary skill in the art to which this disclosure pertains.

[0018] This disclosure is directed to systems and methods for training supervised neural networks including a regularizer φ_ℒt(θ) that places minimal restrictions on the target distribution ℒt. The approach is inspired by the recent empirical successes of Generative Adversarial Networks (GANs) in learning distributions of natural images or languages. The central idea of the approach described herein is that the distribution matching problem is rephrased as a distribution learning problem in the GAN framework, which results in a natural parameterized regularizer that is learned from data.

[0019] GANs were first proposed to generate natural-looking images and have subsequently been extended to various other applications, including semi-supervised learning, image super-resolution, and text generation.

[0020] A GAN works by emulating a competitive game between a generator G and a discriminator D, both of which are functions: given a target distribution ℒt and a noisy (i.e., uninformative) distribution ℒn, G learns to generate samples of the form G(z) from z ~ ℒn to fool D, and meanwhile D learns to discern the true samples x ~ ℒt from the fake samples G(z). Ideally, at the equilibrium, G learns the true distribution ℒt such that G(z) ~ ℒt. Mathematically, D learns to assign high values to true samples and low values to fake samples, and the game can be realized as a saddle point optimization problem:

min_G max_D E_{x~ℒt}[log D(x)] + E_{z~ℒn}[log(1 - D(G(z)))].

[0021] This formulation fails to learn degenerate distributions, e.g., discrete distributions or distributions supported on low-dimensional manifolds, due to the choice of a strong distance metric for distributions. Wasserstein GAN (WGAN) was proposed to mitigate some of these issues; it uses the weaker earth mover distance, or Wasserstein-1 (W-1) distance. For two distributions ℒ1 and ℒ2, this distance is computed as

W(ℒ1, ℒ2) = sup_{||f||_L ≤ 1} E_{x~ℒ1}[f(x)] - E_{x~ℒ2}[f(x)],

where ||f||_L denotes the Lipschitz constant of f. Thus, minimizing the W-1 distance between the generator distribution and the target distribution yields the minimax problem:

min_G max_{||D||_L ≤ 1} E_{x~ℒt}[D(x)] - E_{z~ℒn}[D(G(z))].

This simple change to the metric has led to improved learning performance over several tasks.
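To make the W-1 surrogate concrete, the following is a minimal sketch (not from the patent) of one critic update in PyTorch, which is assumed here as the framework; the critic architecture, learning rate, and clipping bound are illustrative choices.

import torch
import torch.nn as nn

# A small scalar critic f; clipping its weights after each update keeps it
# (approximately) Lipschitz, as in WGAN.
critic = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.randn(256, 1) + 2.0   # samples from L1
fake = torch.randn(256, 1)         # samples from L2

# Empirical surrogate for W1(L1, L2): E[f(real)] - E[f(fake)].
surrogate = critic(real).mean() - critic(fake).mean()
opt.zero_grad()
(-surrogate).backward()            # gradient ascent on the critic
opt.step()
with torch.no_grad():
    for p in critic.parameters():
        p.clamp_(-0.01, 0.01)      # crude Lipschitz control

Repeating this update drives the surrogate toward the W-1 distance between the two sample sets.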

[0022] In this disclosure, discrete distributions are of interest, and hence the W-1 distance is a reasonable metric to work with, as in WGAN. This motivates the following choice for the probabilistic regularizer φ_ℒt(θ):

φ_ℒt(θ) = sup_{||ψ||_L ≤ 1} E_{θ'~ℒt}[ψ(θ')] - (1/n) Σ_{i=1}^n ψ(θ_i).

[0023] Since only a finite-dimensional θ is considered, the empirical distribution of the coordinates of θ is substituted directly for the second term, yielding the empirical average (1/n) Σ_{i=1}^n ψ(θ_i).

[0024] As is standard in the GAN literature, the function ψ: ℝ → ℝ is realized as a deep network with weight vector w, so ψ(·; w) is used to make the dependency explicit. Combining this with (2), the central optimization problem of this disclosure is obtained as:

min_θ max_{w: ||ψ(·; w)||_L ≤ 1} f(θ) + λ·(E_{θ'~ℒt}[ψ(θ'; w)] - (1/n) Σ_{i=1}^n ψ(θ_i; w)). (5)
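As a concrete reading of (5), the sketch below evaluates the APR penalty for a batch of weight coordinates against samples from a ternary target distribution. It is an illustration under assumed PyTorch conventions; sample_target and apr_penalty are hypothetical helper names, not the patent's code, and the default sparsity p is an illustrative choice.

import torch

def sample_target(n, p=0.1):
    # Draw n i.i.d. samples from the ternary target:
    # p(1) = p(-1) = p/2, p(0) = 1 - p.
    u = torch.rand(n)
    s = torch.zeros(n)
    s[u < p / 2] = 1.0
    s[(u >= p / 2) & (u < p)] = -1.0
    return s.unsqueeze(1)

def apr_penalty(psi, theta, target_samples):
    # E_{theta'~Lt}[psi(theta')] - (1/n) sum_i psi(theta_i):
    # the inner objective of (5) for a fixed discriminator psi.
    return psi(target_samples).mean() - psi(theta.reshape(-1, 1)).mean()

In the full problem (5), θ descends on f(θ) + λ times this penalty while the discriminator weights w ascend on the same penalty.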

[0025] One remarkable feature of this approach, inherited from the GAN framework, is that only samples from the target distribution ℒt are needed, as dictated by the E_{θ'~ℒt}[ψ(θ'; w)] term. This compares favorably to approaches that rely on the existence of PDFs with reasonable regularity (e.g., closed form and possibly also differentiability), whenever samples can be easily obtained. This is the case for learning discrete distributions.

[0026] FIG. 1 depicts a conceptual diagram of a neural network training system 10 that uses a discriminator network from a GAN to generate an adversarial probabilistic regularizer (APR) in accordance with the present disclosure. As depicted in FIG. 1, there is a primal learner (error function) f(θ) 12 and a discriminator network ψ(·; w) 14, parameterized by w. The primal learner 12 tries to find a θ that makes f(θ) small and meanwhile carries an empirical distribution of coordinates that fools the discriminator. The discriminator 14 tries to find a w so that it can distinguish true samples from the target distribution ℒt from "fake" samples from the coordinates of θ. The discriminator 14 outputs the APR φ_ℒt(θ), which is added to the error function f(θ) at adding node 16. The output of the adding node 16 corresponds to min_θ f(θ) + λ·φ_ℒt(θ).

[0027] The framework described herein could be subject to the same generator-discriminator game interpretation as shown in GAN (FIG. 1), but there are two important differences from the classical GAN. First, there is no generator, and the framework works directly with the empirical samples. There is only a finite number of empirical samples, which are the coordinates of the finite-dimensional vector θ. In contrast, the classical GANs are expected to learn an effective generator that (hopefully) always generates samples according to ℒt from samples of ℒn. Second, there is an additional f(θ) term to be minimized when generating the empirical samples {θ_i} (i.e., all the coordinates of θ) to match/fool the discriminator network.

[0028] To adapt this approach to learn compact neural networks, the model optimization problem (5) is modified into a supervised learning problem based on deep neural networks (DNN). Given data-label pairs (x, y) ~ ℒD, the following function is defined:

f(θ) = E_{(x,y)~ℒD}[ℓ((x, y); θ)],

where the loss function ℓ(·; θ) is defined on top of a certain DNN parameterized by θ. Substituting this into the optimization problem (5) results in a saddle-point optimization problem that takes the following form:

min_θ max_{w: ||ψ(·; w)||_L ≤ 1} E_{(x,y)~ℒD}[ℓ((x, y); θ)] + λ·(E_{θ'~ℒt}[ψ(θ'; w)] - (1/n) Σ_{i=1}^n ψ(θ_i; w)). (6)

[0029] Due to the practical advantages of quantized and sparse weights for training and inference, the target distribution ℒt can be set toward appropriately learning compact networks. We can set, e.g.,

p(θ = 1) = p(θ = -1) = 1/2,

to learn quantized, binary networks, or

p(θ = 1) = p(θ = -1) = p/2, p(θ = 0) = 1 - p,

for a small p ∈ (0, 1), to learn sparse and quantized networks. The optimization algorithm we use is the same as that of the classical GAN, i.e., alternating (stochastic) gradient descent and ascent, which is summarized in the algorithm depicted in FIG. 2. At convergence, a simple one-shot rounding is applied coordinate-wise to θ.

[0030] Two dominant approaches exist in the literature against which the present approach can be compared and contrasted for network quantization and sparsification. These approaches are divided on whether quantization and sparsification intervene in the training process. Many existing methods operate on trained networks without exercising any proactive control over the potential loss of prediction accuracy due to quantization and sparsification. In contrast, other recent methods perform simultaneous training and quantization (and/or sparsification). The present method belongs to the second approach.

[0031] Direct training subject to the quantization and sparsification constraints entails hard discrete optimization. Existing methods differ on how to softly implement the constraints. One possibility is to heuristically intertwine the gradient descent step with a quantization (possibly also sparsification) step.

[0032] The immediate quantization steps tend to save substantial forward- and backward-propagation cost. However, these methods are not principled from the optimization viewpoint. Another possibility is to embed the entire learning problem into a Bayesian framework, such that quantization and sparsity can be promoted by imposing appropriate Bayesian priors on the network weights. Adopting the Bayesian framework has been shown to be favorable for network compression, i.e., it exhibits an automatic regularization effect. Also, in theory, it is possible to impose arbitrary desirable structural priors on the weights. However, discrete distributions are not suitable for practical Bayesian inference via numerical optimization. Analytic tricks, such as reparametrization or continuous relaxations, are needed to find surrogates for discrete distributions so that effective computation can be performed.

[0033] Compared to the above possibilities, here quantization and sparsification are encoded via an adversarial network that is fed with samples from the desired discrete distribution directly. The discreteness prior is enforced in a principled manner. The (sometimes substantial) analytic effort of deriving benign surrogates for discrete distributions, as needed in the Bayesian framework, is saved by requiring only samples from the discrete target distributions, which are often easy to obtain.

[0034] Following is a description of three tricks which may be used in implementation. These tricks are not necessary but may be beneficial. The first trick is clipping of w. Note that optimizing (5) and (6) is subject to the constraint that ψ(·; w) is 1-Lipschitz, where the constant 1 can be changed to any bounded K by adjusting λ accordingly, so it is enough to make ψ(·; w) Lipschitz. Since ψ(·; w) is realized as a neural network, it is Lipschitz whenever w is bounded. This can be approximated by projecting each w_i into [-1, 1] after each update.
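In PyTorch-style code (an assumption, as before), this projection amounts to a clamp after each discriminator update; clip_discriminator is a hypothetical helper name.

import torch

def clip_discriminator(psi, bound=1.0):
    # Project every coordinate of w into [-bound, bound] after each update,
    # which keeps the realized psi(.; w) Lipschitz (trick 1).
    with torch.no_grad():
        for p in psi.parameters():
            p.clamp_(-bound, bound)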

[0035] Another trick is weighted sampling of θ. The coordinates of θ are assumed to be i.i.d. However, when training deep networks, different layers may have vastly different numbers of nodes, leading to a disparity in the number of weights; this is especially true for the first and last layers, which usually have small numbers of weights compared to the other layers. This disparity makes quantization difficult for the first and last layers, as layers with significant numbers of weights tend to be sampled more frequently in a stochastic optimization setting, and hence their weights tend to converge to the target distribution faster. In the APR framework, the problem can be easily solved by reweighted sampling: let N_l be the number of weights in the l-th layer. The probability of sampling weights in the l-th layer is scaled by the factor 1/N_l.
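One illustrative way to realize the reweighting, under the same assumed PyTorch conventions and with hypothetical names, is to first pick a layer uniformly at random and then pick a weight uniformly within it, so a weight in layer l is chosen with probability proportional to 1/N_l.

import torch

def sample_coordinates(layers, batch):
    # layers: list of weight tensors, one per layer.
    picks = torch.randint(len(layers), (batch,))   # uniform over layers
    out = []
    for l in picks:
        w = layers[l].reshape(-1)
        out.append(w[torch.randint(w.numel(), (1,))])
    return torch.cat(out)

Feeding sample_coordinates([p for p in model.parameters()], 256) to the discriminator then equalizes the contributions of small and large layers.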

[0036] The third trick is homotopy continuation on ℒt. For a discrete target distribution ℒt, the ideal discriminator ψ(·; w) will be discretely supported, which may cost a neural network substantial time to learn to approximate. A homotopy continuation technique may be used that moves the distribution gradually toward the target distribution ℒt from a "nice" auxiliary distribution ℒa:

ℒ(τ) = (1 - τ/T)·ℒa + (τ/T)·ℒt.

Here τ is the time factor, and T is the total number of training epochs. ℒa can be conveniently chosen as the continuous uniform distribution that covers the range of ℒt. This can be considered a crude graduated smoothing process for discrete distributions, controlled via inputting mixture samples, a distinctive feature of our method. This can be contrasted with the delicate analytic smoothing or reparameterization techniques for discrete distributions. This homotopy continuation empirically improves the convergence speed but is not necessary for convergence.
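One way to implement the continuation, sketched under the same assumptions (the helper names are ours), is to draw each regularizer sample from the auxiliary uniform distribution with probability 1 - τ/T and from ℒt otherwise.

import torch

def sample_homotopy(n, epoch, total_epochs, sample_target):
    # Mixture sampler: auxiliary U[-1, 1] early in training, gradually
    # replaced by the discrete target Lt as epoch/total_epochs -> 1.
    frac = min(epoch / total_epochs, 1.0)          # tau / T
    from_target = torch.rand(n) < frac
    aux = torch.empty(n, 1).uniform_(-1.0, 1.0)    # La covering range of Lt
    return torch.where(from_target.unsqueeze(1), sample_target(n), aux)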

[0037] The present disclosure is focused on solving problems of form (1), particularly in the context of learning quantized and sparse neural networks where ℒt is a discrete distribution. Prior approaches either solve the resulting mixed continuous-discrete optimization problem by the projected gradient heuristic (i.e., gradient descent mixed with quantization and/or sparsification), or embed the problem into a Bayesian framework, deploying which necessarily entails resolving analytic and computational issues around the discrete distribution. In contrast, this disclosure proposes an adversarial probabilistic regularization (APR) framework for the problem, with the following characteristics:

(1) The regularizer, which is implemented based on a deep network, is (almost everywhere, a.e.) differentiable. So if f(θ) is a.e. differentiable, which is true particularly when it is also based on a deep network, the combined minimax objective in (5) is amenable to gradient-based optimization methods. The Lipschitz constraint in (5) can be implemented as a convex constraint on w. So the resulting optimization problem tends to be nicer, from an optimization viewpoint, than that derived from the mixed continuous-discrete approach.

(2) The regularization needs only samples from ℒt but not ℒt itself. This allows considerable generality in selecting ℒt so long as samples can be easily obtained; when ℒt is a discrete distribution, sampling is particularly straightforward. This avoids the many analytic and computational hurdles around the Bayesian approach.

[0038] The simple method proposed herein compares favorably to state-of-the-art methods for network quantization and sparsification. For the method proposed herein, the coordinates of θ are assumed to be i.i.d., which might be restrictive for certain applications. The Bayesian framework is not subject to this restriction in theory, but analytic and computational tractability might be an issue, as discussed above. When θ is sufficiently long, say for deep networks, it is possible to generalize the present framework to encode distributional priors on short segments of θ.

[0039] For network quantization and sparsification, methods that perform immediate quantization and sparsification at each optimization iteration tend to save substantial amounts of forward- and backward-propagation computation. The present method can be easily modified to perform these immediate operations, although, as remarked above, this is less principled from the optimization viewpoint.

[0040] Several methods, including the present method, have reported performances of quantized networks that are comparable to those of real-valued networks. In theory, the capacity of quantized networks is still not well understood. For example, whether there is a universal approximation theorem for quantized networks is not yet clear.

[0041] Experiments were conducted on the tasks of sparse recovery and image classification to study the behavior and verify the effectiveness of APR. The image classification was evaluated on two datasets, namely MNIST and CIFAR-10. Comparison methods include generative momentum matching (GMM), binary connect (BC), trained ternary quantization (TTQ), variational network quantization (VNQ), and full-precision training.

[0042] Of the comparison methods, GMM is the most closely related to the GAN-based approach. To the best of our knowledge, GMM has not been developed or employed for regularization purposes. Nevertheless, we exploit GMM for probabilistic regularization purposes and compare it with APR. More specifically, given a set of samples v = {v_j} from the regularization distribution p_r and a set of weights {θ_i}, the distribution distance between the two sets of samples is measured by the maximum mean discrepancy (MMD):

MMD²({θ_i}, {v_j}) = (1/n²) Σ_{i,i'} k(θ_i, θ_{i'}) - (2/(nm)) Σ_{i,j} k(θ_i, v_j) + (1/m²) Σ_{j,j'} k(v_j, v_{j'}), (8)

where k is a Gaussian kernel with bandwidth σ, chosen so as to match high-order moments. To train a deep network with weights constrained to an arbitrary prior p_r using GMM, we minimize the empirical loss function (2), where the regularizer φ is defined by (8). To achieve better performance, the usual heuristics for (8) are followed: the square root of the MMD is used as the regularizer, and a mixture of Gaussians, k = Σ_σ k_σ, is adopted as the kernel function.
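For reference, a direct PyTorch evaluation of the estimator in (8) with a mixture-of-Gaussians kernel might read as follows; the code is our illustration rather than the authors' implementation, and the bandwidth list mirrors the values reported in the experiments below.

import torch

def mmd2(theta, v, bandwidths=(0.001, 0.005, 0.01, 0.05, 0.1)):
    # Biased MMD^2 estimate between weight samples theta (n,) and prior
    # samples v (m,) using a mixture of Gaussian kernels.
    def k(a, b):
        d2 = (a.reshape(-1, 1) - b.reshape(1, -1)) ** 2
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in bandwidths)
    n, m = theta.numel(), v.numel()
    return (k(theta, theta).sum() / n ** 2
            - 2 * k(theta, v).sum() / (n * m)
            + k(v, v).sum() / m ** 2)

The GMM regularizer then uses mmd2(...).clamp_min(0).sqrt(); the n-by-m kernel matrices make the quadratic cost discussed below explicit.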

[0043] The present approach is compared with binary connect on a VGG-like deep network for the case of network binarization. The present approach was compared with TTQ as a baseline for network ternarization on residual networks with 20, 32, 44 and 56 layers, which have 0.27M, 0.46M, 0.66M and 0.85M learnable parameters, respectively. The approach was also compared with a recently proposed continuous relaxation-based approach, namely variational network quantization (VNQ), for network ternarization. In conformity with its experimental settings, the approach was compared with VNQ on DenseNet-121.

[0044] Adam was used to train the quantized network, with the default hyper-parameter settings adopted for the primary network. The Adam hyper-parameters for the regularization network are set to β₁ = 0.5, β₂ = 0.9. The baseline models are also trained with Adam for a fair comparison. The sample batch size for the critic is 256. The weight learning rates are scaled by the weight initialization coefficient. Throughout the experiments, we enforce the weights to have binary or ternary values. For the ternary network, we evaluate priors with various sparsity levels. We follow conventional image preprocessing and augmentation for the corresponding datasets. We construct the regularization network as a multilayer perceptron (MLP) with three hidden layers and ReLU as the activation function.
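As one concrete instantiation of the described regularization network, the following PyTorch module has three hidden ReLU layers and a scalar output; the hidden width is our assumption, as it is not specified in the text.

import torch.nn as nn

class RegularizationNet(nn.Module):
    # Discriminator psi(.; w): an MLP with three hidden ReLU layers mapping
    # a scalar weight sample to a scalar score.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)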

[0045] First, network binarization and ternarization were conducted for digit classification on the MNIST dataset. In this experiment, a modified LeNet-5 was adopted which contains four weight layers with 1.26M learnable parameters. The quantized networks are trained from a pretrained full-precision model with a baseline error of 0.76%. The learning rate starts at 0.001 and linearly decays to zero after 200 epochs. The performances of the APR- and GMM-regularized networks were compared in this experiment, with the same learning schedule for both approaches. The bandwidth parameters for the Gaussian mixture kernel k were set to {0.001, 0.005, 0.01, 0.05, 0.1}. The regularization parameter was set to λ = 10⁻³ for GMM and λ = 10⁻⁴ for APR.

[0046] Following is a comparison of the APR- and GMM-regularized networks. Referring to the table depicted in FIG. 3, APR (shown as APR-T in the table, T for ternary weights) achieves a competitive performance of 0.83% error, which outperforms GMM (shown as GMM-T) by 0.6%. Both approaches enforce weight distributions with sharply ternary patterns. However, regularizing deep networks with GMM encounters scalability issues even with small networks such as LeNet-5. In order to estimate the kernels in (8), the computational cost of the GMM regularizer grows quadratically with respect to the number of weights. In the case of LeNet-5, only 1% of the weights are randomly selected and regularized at each step, which still requires 10⁷ kernels to be computed at each step. On the contrary, the computational cost of APR grows linearly with respect to the number of weights given a fixed-size regularization network.

[0047] The first and last layers of deep networks pose more difficulties for quantization, due to the unbalanced sizes of the different layers. The problem with LeNet-5 quantization is especially severe: the four layers of the network contain 500, 0.25M, 1.2M and 5K weights, respectively, leading the empirical distribution of weights to be dominated by the third layer. As proposed above, this problem can be easily solved by employing the weighted sampling trick. The histograms of the weights for each layer of LeNet-5 are illustrated in FIG. 4. Uniform weights and weights that have been reweighted employing the weighted sampling trick described above are shown for each layer. For both cases, the weights of the third layer converge to a ternary pattern where both histograms overlap each other. However, the weights of the first layer fail to fit the regularization prior without weighted sampling. On the contrary, the weights of all four layers exhibit a strong ternary pattern when weighted sampling is employed.

[0048] The classification performance of the APR-regularized network was evaluated on the CIFAR-10 dataset, which consists of 50,000 training and 10,000 testing RGB images of size 32x32. A standard data preparation strategy was used on CIFAR-10: both the training and testing images are preprocessed by per-pixel mean subtraction. The training set is augmented by padding 4 pixels on each side of the image and randomly cropping a 32x32 region. The minibatch size for training the primary network is 128. The approach was evaluated on VGG-9 and ResNet-20, 32 and 44.

[0049] In this experiment, the weights were enforced to have either binary or ternary values. For a fair comparison, the same quantization protocol was followed, i.e., the first convolution layer and the fully connected layer are not quantized, since they contain less than 0.4% of the total number of weights. The deep neural networks are trained for a total of 400 epochs with an initial learning rate of 0.01. The learning rate is decayed by a factor of 10 at the end of epochs 80, 120 and 150. No weight decay is used, since APR is already a strong regularization on the weights. To facilitate the convergence of the network, homotopy continuation was employed by adopting an auxiliary uniform distribution ℒa ~ U[-1, 1]. Since APR alone does not enforce discrete values, rounding noise is added to the weights after 350 epochs.
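In PyTorch terms (still an assumed framework), the described schedule corresponds to a MultiStepLR with milestones at epochs 80, 120 and 150; the form of the late-stage rounding noise is not specified in the text, so the nudge toward the grid below is purely illustrative.

import torch

model = torch.nn.Linear(32 * 32 * 3, 10)   # stand-in for VGG-9 / ResNet
opt = torch.optim.Adam(model.parameters(), lr=0.01)
sched = torch.optim.lr_scheduler.MultiStepLR(
    opt, milestones=[80, 120, 150], gamma=0.1)

for epoch in range(400):
    # ... one epoch of the APR training loop goes here ...
    sched.step()
    if epoch >= 350:
        with torch.no_grad():
            for p in model.parameters():
                # Illustrative "rounding noise": pull weights halfway
                # toward the nearest point of {-1, 0, 1}.
                p.add_(0.5 * (p.round().clamp(-1, 1) - p))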

[0050] The evolution of the weight distribution at the end of epochs 1, 10, 50, 100 and 400 for training ResNet-44 on CIFAR-10 is shown in FIG. 5. The upper row shows binary weights, and the lower row shows ternarized weights. The solid line corresponds to the evaluation of the regularization function ψ(θ), scaled to [0, 1] for display purposes. The dotted line shows the regularization distribution p_r; the discrete distribution was smoothed for display purposes. The shaded area shows the empirical distribution p(θ) of the weights. As can be seen, the empirical distribution of the weights, as tracked by the regularization function ψ(θ) (solid line), approaches the discrete prior p_r.

[0051] The learning curve for training ResNet-20 with ternary weights is shown in FIG. 7, where the first 200 epochs are demonstrated. Given a strong regularization (λ = 10⁻⁵), training of the primary network stagnates without homotopy continuation (black lines). On the contrary, the network resumes converging while reaching weights with ternarized patterns when homotopy continuation is employed (red lines). By choosing a small value of λ = 10⁻⁵, the loss f also drops quickly, as the regularization network implicitly relaxes the discrete prior p_r.

[0052] FIG. 6 shows a table of the classification error of binary and ternary networks. The present approach is compared with a full-precision baseline model, binary connect (BC) and trained ternary quantization (TTQ). Although the present approach is able to train a discrete network from scratch, the network was trained from a pretrained full-precision model for fair comparison. APR-B refers to APR regularization with binary weights, and APR-T refers to APR regularization with ternary weights. Models that are finetuned from a pretrained full-precision network are marked in the table. The present approach achieves state-of-the-art performance on VGG-9, ResNet-20 and ResNet-32 for network ternarization. Deep networks ternarized with APR introduce a minor performance drop compared to the full-precision counterpart on ResNet-44 and exceed the full-precision network on VGG-9, ResNet-20 and ResNet-32. On VGG-9, APR-B achieves an error of 7.82% and outperforms BC by 2.5%. The ternarized network APR-T further reduces the error to 7.47%.

[0053] FIG. 8 depicts an embodiment of a computer system 100 which may be used to implement the framework described herein. In particular, the computer system includes at least one processor 102, such as a central processing unit (CPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) device, or a micro-controller. The processor 102 is configured to execute programmed instructions that are stored in the memory 104. The memory 104 can be any suitable type of memory, including solid state memory, magnetic memory, or optical memory, just to name a few, and can be implemented in a single device or distributed across multiple devices. The programmed instructions stored in memory 104 include instructions for implementing the functionality described herein, including generating the adversarial probabilistic regularizer and training the supervised neural network. The computing system may include one or more network interface device(s) 106 for transmitting and receiving data and communicating via a network.

[0054] While the disclosure has been illustrated and described in detail in the drawings and foregoing description, the same should be considered as illustrative and not restrictive in character. It is understood that only the preferred embodiments have been presented and that all changes, modifications and further applications that come within the spirit of the disclosure are desired to be protected.