

Title:
SYSTEM, METHOD AND APPARATUS FOR NEURAL NETWORKS
Document Type and Number:
WIPO Patent Application WO/2020/025932
Kind Code:
A1
Abstract:
A system, apparatus and method for utilizing software and hardware portions of a neural network to fix, or hardwire, certain portions while modifying other portions. A first set of weights for layers of a first neural network is established, and selected weights are modified, based on a second dataset, to generate a second set of weights. The second set of weights is then used to train a second neural network.

Inventors:
WHATMOUGH PAUL NICHOLAS (GB)
MATTINA MATTHEW (GB)
BEU JESSE GARRETT (GB)
Application Number:
PCT/GB2019/052080
Publication Date:
February 06, 2020
Filing Date:
July 25, 2019
Assignee:
ADVANCED RISC MACH LTD (GB)
International Classes:
G06F1/3234; G06N3/04; G06N3/063; G06N3/08
Other References:
SYED SHAKIB SARWAR ET AL: "Incremental Learning in Deep Convolutional Neural Networks Using Partial Network Sharing", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 7 December 2017 (2017-12-07), XP080845672
JASON YOSINSKI ET AL: "How transferable are features in deep neural networks?", 6 November 2014 (2014-11-06), pages 1 - 9, XP055277610, Retrieved from the Internet [retrieved on 20160603]
SAM LEROUX ET AL: "Transfer Learning with Binary Neural Networks", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 29 November 2017 (2017-11-29), XP081298113
Attorney, Agent or Firm:
TLIP LTD (GB)
Claims:

1. A method comprising:

accessing a first neural network that has one or more layers;

training the first neural network on a first dataset;

generating a first set of weights associated with one or more layers of the first neural network, the first set of weights based on the training of the first neural network on the first dataset;

accessing a second dataset;

modifying selected ones of the first set of weights to generate a second set of weights, based on the second dataset; and

utilizing the second set of weights to train a second neural network.

2. The method as claimed in claim 1, further comprising identifying similarities of the first set of weights and the second set of weights.

3. The method as claimed in claim 1 or claim 2, further comprising determining that the first dataset is the same domain as the second dataset.

4. The method as claimed in any preceding claim, further comprising identifying one or more programmable layers.

5. The method as claimed in any preceding claim, further comprising:

identifying one or more layers in the first neural network; and utilizing the identified one or more layers in the first neural network in the second neural network.

6. The method as claimed in claim 5, further comprising updating one or more weights associated with the one or more layers of the first neural network.

7. The method as claimed in any preceding claim, further comprising:

identifying selected layers of the first neural network having particular connectivity properties; and

updating the identified selected layers of the first neural network based on the second neural network.

8. An apparatus comprising:

a first neural network that has one or more layers;

a first dataset that is used to train the first neural network;

a first set of weights associated with one or more layers of the first neural network, the first set of weights generated based on the training of the first neural network on the first dataset;

a second dataset;

a second set of weights generated by modifying selected ones of the first set of weights based on the second dataset; and

a second neural network trained based on the second set of weights.

9. The apparatus as claimed in claim 8, where the first set of weights and the second set of weights are similar.

10. The apparatus as claimed in claim 8 or claim 9, where the first dataset is the same domain as the second dataset.

11. The apparatus as claimed in any of claims 8 to 10, where the one or more layers include one or more programmable layers.

12. The apparatus as claimed in any of claims 8 to 11, where the one or more layers include one or more layers in the first neural network that are used in the second neural network.

13. The apparatus as claimed in claim 12, further comprising one or more weights associated with the one or more layers of the first neural network.

14. The apparatus as claimed in any of claims 8 to 13, where selected layers of the first neural network have particular connectivity properties.

15. A system comprising:

a memory; and

a processor, coupled to the memory, that executes instructions stored in the memory, the instructions comprising:

accessing a first neural network that has one or more layers;

training the first neural network on a first dataset;

generating a first set of weights associated with one or more layers of the first neural network, the first set of weights based on the training of the first neural network on the first dataset;

accessing a second dataset;

modifying selected ones of the first set of weights to generate a second set of weights, based on the second dataset; and

utilizing the second set of weights to train a second neural network.

16. The system as claimed in claim 15, where the instructions further comprise identifying similarities of the first set of weights and the second set of weights.

17. The system as claimed in claim 15 or claim 16, where the instructions further comprise determining that the first dataset is the same domain as the second dataset.

18. The system as claimed in any of claims 15 to 17, where the instructions further comprise identifying one or more programmable layers.

19. The system as claimed in any of claims 15 to 18, where the instructions further comprise:

identifying one or more layers in the first neural network; and

utilizing the identified one or more layers in the first neural network in the second neural network.

20. The system as claimed in claim 19, where the instructions further comprise updating one or more weights associated with the one or more layers of the first neural network.

21. A method comprising:

accessing input data;

accessing a neural network;

identifying one or more first layers of the neural network;

parsing the one or more first layers of the neural network into a fixed portion and a programmable portion;

generating one or more first maps as a function of the input data and the fixed portion of the one or more first layers of the neural network;

generating one or more second maps as a function of the input data and the programmable portion of the one or more first layers of the neural network; and

utilizing the one or more first maps and the one or more second maps with subsequent layers of the neural network.

Description:
SYSTEM, METHOD AND APPARATUS FOR NEURAL NETWORKS

[001] Machine learning technology is continually evolving and has come to support many aspects of modern society, from web searches, content filtering, automated recommendations on merchant websites, and automated game playing, to object detection, image classification, speech recognition, machine translation, and drug discovery and genomics.

[002] The first, and most important, stage in machine learning is training. For example, a machine learning system for the classification of images typically includes a large dataset of images (e.g., people, pets, cars, and houses) that has been collected and labeled with a corresponding category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. The objective is for the correct category to have the highest score of all categories.
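
The paragraph above describes the classification objective informally. A minimal, purely illustrative Python sketch of that objective is shown below; the category names and score values are assumptions used only to make the idea concrete, not data from the application.

```python
import numpy as np

# Toy illustration of the training objective: the network emits one score per
# category, and training should drive the correct category's score to be the
# highest. Category names and scores here are placeholders.
categories = ["person", "pet", "car", "house"]
scores = np.array([1.2, 0.4, 3.1, -0.7])          # output vector, one score per category
probs = np.exp(scores) / np.exp(scores).sum()     # softmax turns scores into probabilities
predicted = categories[int(np.argmax(scores))]    # the highest-scoring category wins
print(predicted, probs.round(3))
```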

[003] The accompanying drawings provide visual representations, which will be used to more fully describe various representative embodiments and can be used by those skilled in the art to better understand the representative embodiments disclosed and their inherent advantages. In these drawings, like reference numerals identify corresponding elements.

[004] FIG. 1 illustrates a neural network.

[005] FIG. 2 illustrates first layer weights in a visual image classification convolutional neural network.

[006] FIG. 3 illustrates a hardware accelerator for neural networks.

[007] FIG. 4 illustrates a hardware accelerator with a programmable module.

[008] FIG. 5 illustrates a flowchart according to an embodiment of the disclosure.

[009] FIG. 6 illustrates a flowchart according to another embodiment of the disclosure.

[0010] While this disclosure is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail specific embodiments, with the understanding that the present disclosure is to be considered as an example of the principles described and not intended to limit the disclosure to the specific embodiments shown and described. In the description below, like reference numerals are used to describe the same, similar or corresponding parts in the several views of the drawings.

[0011] In this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprise”, "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises ... a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.

[0012] Reference throughout this document to "one embodiment", “certain embodiments”, "an embodiment" or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation.

[0013] The term “or” as used herein is to be interpreted as an inclusive or, meaning any one or any combination. Therefore, “A, B or C” means “any of the following: A; B; C; A and B; A and C; B and C; A, B and C”. An exception to this definition will occur only when a combination of elements, functions, operations or acts are in some way inherently mutually exclusive.

[0014] For simplicity and clarity of illustration, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. Numerous details are set forth to provide an understanding of the embodiments described herein. The embodiments may be practiced without these details. In other instances, well-known methods, procedures, and components have not been described in detail to avoid obscuring the embodiments described. The description is not to be considered as limited to the scope of the embodiments described herein.

[0015] A “module” as used herein describes a component or part of a program or device that can contain hardware or software, or a combination of hardware and software. In a module that includes software, the software may contain one or more routines, or subroutines. One or more modules can make up a program and/or device.

[0016] Neural networks (NNs) are of interest in many fields, including science, commerce, medicine and industry, since the networks can be given datasets where it is not known what relationships are inherent within the data, and the NN can learn how to classify the data successfully. A NN is a combination of neurons that are connected together in varying configurations to form a network. The basic structure of a 2-input NN includes two input neurons disposed in a first, or input, layer. Each input neuron presents its output to three neurons disposed in a hidden layer. The hidden layer neurons each present their output to a single neuron disposed in an output layer.
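
The 2-3-1 structure described above can be written out in a few lines. The following NumPy sketch is illustrative only; the activation function and weight values are assumptions, not part of the application.

```python
import numpy as np

def two_input_nn(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a 2-input, 3-hidden-neuron, 1-output network."""
    h = np.tanh(w_hidden @ x + b_hidden)   # hidden layer: each of 3 neurons sees both inputs
    return w_out @ h + b_out               # output layer: one neuron sees all 3 hidden outputs

# Illustrative weights whose shapes follow the 2-3-1 structure described above.
rng = np.random.default_rng(0)
w_hidden, b_hidden = rng.normal(size=(3, 2)), np.zeros(3)
w_out, b_out = rng.normal(size=(1, 3)), np.zeros(1)

print(two_input_nn(np.array([0.5, -1.0]), w_hidden, b_hidden, w_out, b_out))
```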

[0017] Neural networks find relationships within data, allowing the data to be classified, and then successfully classify input vectors, or patterns, that the NN was not exposed to during training. This powerful property is often referred to as the NNs' ability to “generalize”. The input vectors that the NN was not exposed to during training are commonly referred to as unseen patterns or unseen input vectors. For NNs to be able to generalize they require training.

[0018] Thus, neural networks may comprise a set of layers, the first layer being an input layer configured to receive an input. The input layer includes neurons that are connected to neurons in a second layer, which may be referred to as a hidden layer. Neurons of the hidden layer may be connected to a further hidden layer, or an output layer.

[0019] In some neural networks, each neuron of a layer has a connection to each neuron in a following layer. Such neural networks are known as fully connected (FC) networks. The training data is used to let each connection assume a weight that characterizes a strength of the connection. Some neural networks comprise both fully connected layers and layers that are not fully connected. Fully connected layers in a convolutional neural network may be referred to as densely connected layers.

[0020] Convolutional neural networks (CNNs) are feed-forward neural networks that comprise layers that are not fully connected. In CNNs, neurons in a convolutional layer are connected to neurons in a subset, or neighborhood, of an earlier layer. This enables, in at least some CNNs, retaining spatial features in the input.

[0021] In some cases the data may not have been subject to any prior classification, and in these circumstances it is common to use unsupervised training, such as self-organizing maps, to classify the data. In other cases the data may have been previously broken into data samples that have been classified, and in these circumstances it is common to train a NN to be able to classify the additional unclassified data. In the latter case, a supervised learning algorithm is traditionally used. Classified input data examples have an associated output and during training, the NN learns to reproduce the desired output associated with the input vector.

[0022] During training a NN learns salient features in the data set it is trained with and can then “predict” the output of unseen input vectors. What the NN can classify depends on what the NN has been trained with.

[0023] As discussed earlier, training a NN to learn a data set may require a long time, and it is possible that the NN may never learn the data set. It is accepted that the time it takes to train a fixed-size NN may be exponential. For this reason, how long it takes to train a NN has become a standard of comparison between alternative training algorithms. An ideal training algorithm would require minimal exposure to training input vectors. The minimum possible exposure to training input vectors in the optimal situation would be to expose the NN to each input vector only once to be fully trained.

[0024] Most NN training algorithms find single numeric values that attempt to satisfy the training conditions, and learn by iteratively modifying weight values based on the error between the desired output of the NN and the actual output.
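
As a hedged sketch of the error-driven weight update described above, the following Python fragment applies gradient descent to a single linear neuron; the learning rate, input and target are illustrative assumptions.

```python
import numpy as np

def train_step(w, x, y_true, lr=0.1):
    """One iteration of error-driven weight modification for a linear neuron."""
    y_pred = w @ x             # actual output of the neuron
    error = y_pred - y_true    # difference from the desired output
    grad = error * x           # gradient of 0.5 * error**2 with respect to w
    return w - lr * grad       # iteratively modified weight values

w = np.zeros(2)
for _ in range(100):
    w = train_step(w, np.array([1.0, 2.0]), y_true=3.0)
print(w, w @ np.array([1.0, 2.0]))   # the output approaches the desired value 3.0
```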

[0025] Neural networks and related systems can be represented as distributed processing elements that implement summation, multiplication, exponentiation or other functions on the elements' incoming messages/signals. Such networks can be enabled and implemented through a variety of implementations. For example, a system may be implemented as a network of electronically coupled functional node components. The functional node components can be logical gates arranged or configured in a processor to perform a specified function. As a second example, the system may be implemented as a network model programmed or configured to be operative on a processor. The network model is preferably electronically stored software that encodes the operation and communication between nodes of the network. Neural networks and related systems may be used in a wide variety of applications and can use a wide variety of data types as input such as images, video, audio, natural language text, analytics data, widely distributed sensor data, or other suitable forms of data.

[0026] In particular, convolutional neural networks (CNNs) may be useful for performing inference on data for which feature recognition is independent of one or more dimensions of the data; for example, when detecting shapes in an image, the detected shapes are not dependent on their position in the image: the same features used to detect a square in one part of the image may be used to detect a square in another part of the image as well. These dimensions may be spatial (as in the 2D image example), but may additionally or alternatively be temporal or any suitable dimensions (e.g., a frequency dimension for audio or multispectral light data).
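
The position independence described above can be demonstrated with a tiny correlation routine; the kernel and image contents below are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one kernel over the whole image (no padding, stride 1)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# The same small edge-detecting kernel is applied everywhere, so an identical
# square placed at two different positions produces the same response pattern.
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])
image = np.zeros((8, 8))
image[1:3, 1:3] = 1.0   # square in the top-left
image[5:7, 5:7] = 1.0   # identical square in the bottom-right
response = conv2d_valid(image, kernel)
print(response[1, 0], response[5, 4])   # equal responses at the two locations
```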

[0027] Neural networks are used to map or classify a set of input patterns to a set of output patterns. Systems based on neural networks have evolved into a popular machine learning basis, and have been successfully employed in a wide variety of domains for practical applications. As would be understood, to classify input patterns with adequate correctness, a neural network must first undergo a learning exercise, which is called the training phase. During the training phase, paired training samples, for example depicted as (x, y), of an input x and a corresponding output or label y, are provided to the neural network, which then learns or establishes how to associate or map the given input x with the correct output y.

[0028] FIG. 1 shows a neural network 100. The neural network 100 includes a plurality of layers connecting an input 102/104 to an output 130. While any number of component networks of any complexity can be employed, for the purposes of the present disclosure a relatively simple neural network 100 is shown. In the case of neural network 100, there is an input image 104 and two convolution layers 106 (3X3 kernels) and 108 (7X7 kernels) that are applied before a fully connected layer 109 that provides a final output. The connected layer 109 includes interconnected nodes 110, 112, 114, 116 that are connected to nodes 118 and 120.

[0029] Nodes 118 and 120 are produced by contracting the vertices to provide a simpler, optimized graph with fewer nodes; all nodes with the same identifier label are merged and any parallel edges are removed.

[0030] Also shown in FIG. 1, input 122 is coupled to identifier labels A, B . . . C that have been applied to nodes 124, 126, 128, respectively, having coordinates (3C,1), (7C,2) . . . (4F,3), respectively.

[0031] Each node A (124), B (126), C (128) is provided with a structure label based on the properties of the corresponding layer in the original component network.

[0032] In FIG. 1, “C” stands for convolutional and “F” stands for fully connected. Node A 124, with label “3C”, denotes a convolutional layer with kernel size 3X3. The second property of each layer is its distance from the input in the graph. Thus, node A 124 (3C, 1) denotes a convolutional layer with a 3X3 kernel having a distance of 1 from the input node.

[0033] The neural network layers tend to look similar for tasks in related domains, e.g., image classification, especially near the front of the network. It is an embodiment of the disclosure to split the neural network into two parts, and “hard-code” the weights of some parts, typically to lower the power consumption of executing these workloads. Other parts can remain programmable. An optional cheap programmable part in parallel with the fixed component allows for fine-tuning the fixed part to suit new tasks. The hardcoding of a network layer applies to any layer of the network, regardless of whether the layer is a CNN layer, FC layer or other type of layer. Any layer may be fully hardcoded or partially hardcoded.
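
A hedged software analogy of this split is shown below: some layers of a small PyTorch model are treated as fixed (their weights frozen) while the rest remain programmable. The model architecture and the choice of which layer to fix are illustrative assumptions, not the application's own design.

```python
import torch
import torch.nn as nn

# Small illustrative network: two conv layers followed by a classifier head.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # candidate for hard-coding
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # remains programmable
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),                           # programmable classifier head
)

# "Hard-code" the first convolutional layer: its weights are treated as fixed
# constants and excluded from any further training or modification.
for p in model[0].parameters():
    p.requires_grad_(False)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable, "programmable weights remain")
```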

[0034] Neural networks (NNs) are currently an increasingly popular way to solve a range of challenging classification and regression problems. However, NN inference demands large compute and memory resources and therefore is challenging for power-constrained, battery-powered devices. Dedicated hardware accelerators [1-4] can help close the gap, but NNs are still challenging to use within a mobile power budget.

[0035] An interesting aspect of NNs is that the weights learned for different datasets often have many similarities, especially in the same domain. For example, in visual image classification problems, the first convolutional layers typically resemble Gabor filters and color blobs, as shown in FIG. 2, elements 242 (a)...(n), where “n” is any suitable number for image 200 having dimensions 240 and 242. Exploiting this observation, transfer learning relates to the ability to use a set of NN weights trained on a first dataset (i.e., dataset A), for solving a problem on a second dataset (i.e., dataset B). In practice, transfer learning is often exploited in the case where there is not sufficient data to train a new network from scratch. In this case, it is an embodiment of this disclosure to start from a network trained on the first dataset A, and perform a “fine-tuning” operation to update weights in some or all of the layers using a second dataset B. This embodiment is optimized when the two datasets (dataset A and dataset B) are related.

[0036] In transfer learning, the most common approach is actually to reuse the convolutional layers from a network trained on similar data, and update the fully-connected layers using the new dataset. In this case, the weights for the “hard-wired” layers are fixed during the fine-tuning operation. This has an advantage that the hard-wired layers (at least the earlier layers) can be used to provide lower-level features that will typically be useful for other tasks in related domains.
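
A minimal sketch of this fine-tuning flow, under the assumption that the model is a PyTorch nn.Sequential whose final modules form the fully-connected head, is given below. The optimizer choice, learning rate and data loader are placeholders, not values from the application.

```python
import torch
import torch.nn as nn

def fine_tune_head(model: nn.Sequential, head_index: int, loader_b, epochs: int = 1):
    """Keep the earlier ("hard-wired") layers fixed and update only the
    modules from head_index onward, using batches from the new dataset B."""
    for i, module in enumerate(model):
        for p in module.parameters():
            p.requires_grad_(i >= head_index)    # freeze everything before the head

    head_params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.SGD(head_params, lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in loader_b:                    # (input, label) batches from dataset B
            opt.zero_grad()
            loss_fn(model(x), y).backward()      # gradients are computed only for the head
            opt.step()                           # frozen layers stay as trained on dataset A
    return model
```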

[0037] This disclosure describes efficient implementation of domain-specific feature generators in fixed hardware, which can be used by a variety of different NN tasks in the transfer learning style. Since the hardware is fixed, rather than having programmable weights, it has power and performance advantages. Also described are techniques to provide fine-tuning, which increases the usefulness of the fixed feature generators for dissimilar tasks.

[0038] There are a number of ways of fixing parts of an NN in a hardware accelerator for the purpose of increasing power efficiency. These may include fixing, or updating, or modifying: the network architecture (structure); and/or kernels/filters; and/or layers; and/or the whole network in hardware.

[0039] The hardcoding of layers, or partial hardcoding of layers, as described herein, may be applied to any suitable network layer that is amenable to being hardcoded, either fully or partially. These network layers include CNN layers, FC layers, or any other layer in any type of neural network.

[0040] The embodiments described herein apply to hard-coding any layer, or any portion of any layer. Thus, CNN, FC and any other neural layer types could be hard coded, either fully or partially. For example, a pre-trained CNN Alexnet™ uses 3 FC layers at the end of the network. The first two of the three FC layers could be hardcoded and the third (final) layer could be re-trained via transfer learning.

[0041] FIG. 3 shows a system 300 of a neural network. FIG. 3 includes an input image 302, interface connection 350, one or more fixed layers 352, which are shown as one or more CNN layers, function f(x) 354, intermediate F maps 356, a first set of one or more programmable layer(s) 358, which are shown as CNN layers, and a second set of one or more programmable layers 360, which are shown as FC layers. The layers, as shown in FIG. 3, may be any suitable type of network layer(s); the illustration of the CNN and FC layers is only one embodiment, and other types of layer(s) may be used to achieve the result described herein.

[0042] Such NN architectures are commonly encountered in image classification tasks and are described here without loss of generality. FIG. 3 is an example of the concept of fixing layers in hardware. The three main hardware components are: 1.) a set of fixed, or established, CNN layers, shown as element 352; 2.) a set of programmable CNN layers, shown as element 358; and 3.) a set of programmable fully-connected (FC) layers, shown as element 360.

[0043] The fixed, or established, or set, layers 352 have much greater power efficiency and performance than their programmable counterparts. The weights in the fixed part are chosen at design time to suit a certain application domain, such as visual image classification tasks. The more fixed layers 352 provided in the hardware, the higher the energy efficiency, but the generalization performance will eventually degrade due to task specificity. For some ubiquitous applications, it may be desirable to fix all the layers to generate a single-task fixed hardware accelerator.

[0044] There is a significant PPA advantage for fixed NN layers. Since there is no storage requirement for the weights in these layers, there is no associated DRAM, SRAM and register power consumption for weights in these layers. Instead the weights are entirely encoded as fixed scalers in the datapath. These fixed scalers are significantly smaller and lower power than a full programmable multiplier, with the multiplicand supplied from a pipeline register. As an example, when comparing the number of MACs per Watt for programmable (“var 8b”) and fixed (“fix 8b”) datapaths, the input operand width is 8-bit and the accumulator is 32-bit wide. The process technology is TSMC 16nm and the clock frequency is 1GHz. Depending on the width of the datapath, the power efficiency advantage is about 2x-6x for a fixed datapath.
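
The hardware intuition above (a fixed weight collapses a full multiplier into a few wired shift-and-add stages) can be illustrated in software. The sketch below is an analogy only; the weight values are assumptions and no claim is made about the actual datapath design.

```python
def fixed_scaler(x: int, weight: int) -> int:
    """Multiply by a constant weight using only shifts and adds: one
    shift-and-add stage per set bit of the fixed weight, which is why a
    hard-coded weight needs far less hardware than a programmable multiplier."""
    negative = weight < 0
    w = -weight if negative else weight
    acc, bit = 0, 0
    while w:
        if w & 1:
            acc += x << bit   # one hard-wired shift-and-add stage
        w >>= 1
        bit += 1
    return -acc if negative else acc

assert fixed_scaler(7, 37) == 7 * 37
assert fixed_scaler(7, -5) == -35
```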

[0045] Beyond the fundamental hardware savings, there are also a number of other optimizations that are unlocked only for fixed layers. Techniques such as quantization, pruning small weights and sharing common sub-expressions, can all be used to a much greater extent than with programmable weights, since each individual weight can be optimized as required, rather than having to optimize for the worst case requirements of all weights. Fully exploiting these additional optimizations leads to even greater gains.
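
As a hedged illustration of these per-weight optimizations, the NumPy sketch below prunes small weights to exact zero (their multipliers disappear from the fixed datapath) and quantizes the survivors. The threshold and bit width are assumptions chosen for the example, not values from the application.

```python
import numpy as np

def prepare_fixed_weights(w, prune_threshold=0.02, n_bits=8):
    """Per-weight optimization for a layer that will be fixed in hardware:
    prune small weights to zero, then quantize the remaining weights."""
    w = np.where(np.abs(w) < prune_threshold, 0.0, w)   # prune small weights outright
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    if scale == 0:
        scale = 1.0
    q = np.round(w / scale).astype(np.int8)             # each weight quantized independently
    return q * scale                                     # de-quantized view of the fixed weights

rng = np.random.default_rng(1)
w_fixed = prepare_fixed_weights(rng.normal(scale=0.05, size=(16, 16)))
print("fraction of multipliers removed:", float(np.mean(w_fixed == 0)))
```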

[0046] Obviously, fixing, or establishing, or setting, one or more NN layers means that said layer(s) cannot be changed over time. This may become a limitation as NN applications change over time. To maximize the utility of the fixed layers for unknown future NN requirements, it may be desirable to provide a hardware mechanism to implement something similar to the fine-tuning operation that is used in transfer learning.

[0047] FIG. 4 shows the basic arrangement 400, where there is a small programmable segment 462 that operates in parallel with the fixed layer(s) 452. The output of the programmable segment in the fixed layers is concatenated with the fixed outputs (feature maps). Alternatively, the parallel programmable part 462 can be combined using residual connections, where it would be added to the fixed layer outputs. The weights in the parallel programmable part should preferably be very sparse, such that very little computation or storage overhead is encountered.
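
The arrangement of FIG. 4 can be sketched as follows, assuming a PyTorch-style model in which a cheap programmable branch runs in parallel with the frozen feature generator; the layer shapes and the "concat"/"residual" switch are illustrative assumptions (residual combination requires matching channel counts).

```python
import torch
import torch.nn as nn

class FixedPlusProgrammable(nn.Module):
    """A fixed feature generator with a small programmable branch in parallel,
    combined either by concatenation or by residual addition."""
    def __init__(self, fixed: nn.Module, programmable: nn.Module, mode: str = "concat"):
        super().__init__()
        self.fixed, self.programmable, self.mode = fixed, programmable, mode
        for p in self.fixed.parameters():
            p.requires_grad_(False)              # the fixed layers are hard-wired

    def forward(self, x):
        f = self.fixed(x)                        # intermediate feature maps (fixed)
        g = self.programmable(x)                 # cheap parallel branch (fine-tunable)
        if self.mode == "concat":
            return torch.cat([f, g], dim=1)      # concatenate along the channel axis
        return f + g                             # residual-style combination

fixed = nn.Conv2d(3, 16, 3, padding=1)
prog = nn.Conv2d(3, 4, 3, padding=1)             # deliberately small programmable branch
out = FixedPlusProgrammable(fixed, prog)(torch.randn(1, 3, 32, 32))   # -> (1, 20, 32, 32)
```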

[0048] Specifically, FIG. 4 shows system 400 that includes an input image 402 that is provided to one or more fixed layer(s) 452, shown as CNN layer(s), via interface 450. Image input 402 is also provided to programmable module 462.

[0049] Fixed layer(s) 452 generate intermediate F maps 456 as a function of f(x). Programmable module 462 generates concatenate F maps 464. Intermediate F maps 456 are provided to programmable layer(s) 458, shown as CNN layer(s), and then to programmable layer(s) 460, shown as FC layer(s).

[0050] FIG. 5 shows a process, flowchart, or algorithm, 500. Flowchart 500 is a diagram that represents an algorithm, workflow or process that may interact with hardware components and modules. The flowchart 500 shows the operations as boxes of various kinds, and their order.

[0051] The series of operations 500 may be stored on non-volatile memory or other suitable memory or computer-usable medium and in any suitable programming language.

[0052] Any combination of one or more computer-usable or computer-readable medium(s) may be utilized. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CDROM), an optical storage device, a transmission media such as those supporting the Internet or an intranet, or a magnetic storage device.

[0053] The computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if desired, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer-usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wire line, optical fiber cable, RF, etc.

[0054] Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, C# or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

[0055] The present disclosure is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus, systems and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.

[0056] These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus, to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.

[0057] The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operations to be performed on the computer, or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus, provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

[0058] As shown in FIG. 5, the algorithm 500 begins (502). A first neural network is accessed (504) and includes various layers (506). These layers (506) may include one or more of fixed, or established, or set, layer(s) (FIG. 3, element 352), programmable CNN layer(s) (FIG. 3, element 358), and programmable FC layer(s) (FIG. 3, element 360).

[0059] The first neural network is trained on a first dataset (508). The training of the network may be accomplished utilizing a suitable training routine. The first dataset (510) may be accessed from a remote location, or may be local.

[0060] A first set of weights is generated (512). The first set of weights is generated by training the first neural network on the first dataset, and particularly, training various layer(s) of the first neural network on the first dataset. The selection of the layer(s) that are trained is based on the characteristics, for example the domain, of the layer(s).

[0061] A subsequent dataset is accessed (514). The subsequent dataset may be a second dataset, or any one of many datasets. The second, or subsequent, dataset may be accessed (516) from a remote location, a local location, or any suitable location. Additional datasets are used to generate subsequent sets of weights, as described herein.

[0062] The first set of weights is modified (518). This modification typically affects only some of the first set of weights, based on the layer and dataset. While it is envisioned that many of the first weights will be modified, many others of the first set of weights will not be modified. The non-modification of one or more of the first weights is a function of the neural network.

[0063] A subsequent set of weights is generated (520). The second set, or any number of subsequent sets, of weights are based on a second dataset, or any number of subsequent datasets. Line 530 shows that a second, or subsequent, neural network is trained using the second, or subsequent, dataset and a second, or subsequent, set of weights (532).

[0064] Following generation of a subsequent set of weights (520), two other operations, or functions, may occur. These include identifying programmable layer(s) of a neural network (524), and identifying fixed layer(s) (526) and fixing selected weights for one or more fixed layers (528).

[0065] The two operations (524) and (526/528) are used to “freeze” some layer(s) of the neural network, while permitting programming of other layer(s) of the neural network. The proportion of frozen layer(s) to programmable layer(s) depends on the neural network parameters. These alternatives (frozen portion and programmable portion), while increasing the efficiency and speed of the process, are not demanded by the process.

[0066] As stated above, a second, or subsequent, neural network is trained using the subsequent dataset and subsequent weights (532). Thus, the portion of the neural network that is to be fixed, or frozen, is “retrained” with an awareness that the fixed portion will be frozen in hardware (examples include quantization of weights, sparsification to remove weights, directing values towards lower hamming-distances from 0 to yield smaller hardcoded multiplication structures in an ASIC, etc.).
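
The "lower hamming-distance from 0" idea mentioned above can be illustrated with a small brute-force search: retrained weight values are nudged toward nearby integers with few set bits, so that the hard-coded multiplier needs fewer shift-and-add stages. The search range and bit budget below are assumptions for the sketch.

```python
def round_to_few_bits(w: int, max_set_bits: int = 2) -> int:
    """Return the integer nearest to w whose binary magnitude has at most
    max_set_bits ones; fewer set bits mean a smaller hard-coded multiplier."""
    sign = -1 if w < 0 else 1
    target = abs(w)
    best = 0
    for cand in range(0, 2 * target + 2):
        if bin(cand).count("1") <= max_set_bits and abs(cand - target) < abs(best - target):
            best = cand
    return sign * best

print(round_to_few_bits(29))   # 29 (0b11101, four set bits) -> 32 (a single set bit)
```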

[0067] A determination is made whether there are additional networks to be trained (534). If so (536), a subsequent dataset is accessed (514). If not (538), the process ends (540).

[0068] FIG. 6 shows a process, flowchart, or algorithm, 600. Flowchart 600 is a diagram that represents an algorithm, workflow or process that may interact with hardware components and modules. The flowchart 600 shows the operations as boxes of various kinds, and their order.

[0069] As shown in FIG. 6, the algorithm 600 begins (602). Input data is accessed (604). The input data may be image data, voice data or any type of data suitable for processing.

[0070] A first neural network is accessed (606) and includes one or more layer(s) (608). These layers may include one or more of fixed, or established, or set, layer(s) (610) and one or more programmable layer(s) (612).

[0071] The fixed layer(s) may be hard-wired, or frozen, such that these fixed layers are not programmable. The fixed layer(s) of the neural network may be the result of being trained or learning from one or more other neural network(s). The fixed layer(s) generate a first set of maps, such as intermediate F maps (614). The first maps are generated as a result of the operations of the fixed layer(s).

[0072] The programmable layer(s) may be programmed and are not hard-wired as the fixed layer(s) are. The programmable layer(s) generate a second set of maps, such as concatenate F maps (616). The second maps are generated as a result of the operations of the programmable layer(s).

[0073] The first map(s), such as intermediate F maps, and the second map(s), such as concatenate F maps, are provided to subsequent neural network layer(s) (618). These subsequent neural network layers may be programmable layer(s), such as CNN and/or FC layer(s). One embodiment of the disclosure is that the subsequent network layer(s) are programmable layer(s). This permits the fixed, or frozen, portion of the network to be hard-wired in a segment of the network.

[0074] A determination is made whether there is additional input data (620). If so (622), the additional input data is accessed (604). If not (624), the algorithm ends (626).

[0075] The computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if desired, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer-usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wire line, optical fiber cable, RF, etc.

[0076] The present embodiments are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus, systems and computer program products according to embodiments. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.

[0077] Unless stated otherwise, terms such as "first" and "second" are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements.

[0078] As will be appreciated by one skilled in the art, the disclosure may be embodied as a system, method or computer program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, the embodiments may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.

[0079] It can be seen that the system and methodologies presented herein provide an advancement in the state of the art.

[0080] Accordingly, some of the disclosed embodiments are set out in the following.

[0081] One embodiment is directed to a method (“the Method”) comprising: accessing a first neural network that has one or more layers; training the first neural network on a first dataset; generating a first set of weights associated with one or more layers of the first neural network, the first set of weights based on the training of the first neural network on the first dataset; accessing a second dataset; modifying selected ones of the first set of weights to generate a second set of weights, based on the second dataset; and utilizing the second set of weights to train a second neural network.

[0082] Another embodiment is directed to the Method comprising identifying similarities of the first set of weights and the second set of weights.

[0083] Another embodiment is directed to the Method, further comprising determining that the first dataset is the same domain as the second dataset.

[0084] Another embodiment is directed to the Method comprising identifying one or more programmable layers.

[0085] Another embodiment is directed to the Method, further comprising: identifying one or more layers in the first neural network; and utilizing the identified one or more layers in the first neural network in the second neural network.

[0086] Another embodiment is directed to the Method further comprising updating one or more weights associated with the one or more layers of the first neural network.

[0087] Another embodiment is directed to the Method, further comprising: identifying selected layers of the first neural network having particular connectivity properties; and updating the identified selected layers of the first neural network based on the second neural network.

[0088] Another embodiment is directed to an apparatus (“the Apparatus”) comprising: a first neural network that has one or more layers; a first dataset that is used to train the first neural network; a first set of weights associated with one or more layers of the first neural network, the first set of weights generated based on the training of the first neural network on the first dataset; a second dataset; a second set of weights generated by modifying selected ones of the first set of weights based on the second dataset; and a second neural network trained based on the second set of weights.

[0089] Another embodiment is directed to the Apparatus where the first set of weights and the second set of weights are similar.

[0090] Another embodiment is directed to the Apparatus where the first dataset is the same domain as the second dataset.

[0091] Another embodiment is directed to the Apparatus where the one or more layers include one or more programmable layers.

[0092] Another embodiment is directed to the Apparatus where the one or more layers include one or more layers in the first neural network that are used in the second neural network.

[0093] Another embodiment is directed to the Apparatus comprising one or more weights associated with the one or more layers of the first neural network.

[0094] Another embodiment is directed to the Apparatus where selected layers of the first neural network have particular connectivity properties.

[0095] Another embodiment is directed to a system (“the System”) comprising: a memory; and a processor, coupled to the memory, that executes instructions stored in the memory, the instructions comprising: accessing a first neural network that has one or more layers; training the first neural network on a first dataset; generating a first set of weights associated with one or more layers of the first neural network, the first set of weights based on the training of the first neural network on the first dataset; accessing a second dataset; modifying selected ones of the first set of weights to generate a second set of weights, based on the second dataset; and utilizing the second set of weights to train a second neural network.

[0096] Another embodiment is directed to the System where the instructions further comprise identifying similarities of the first set of weights and the second set of weights.

[0097] Another embodiment is directed to the System where the instructions further comprise determining that the first dataset is the same domain as the second dataset.

[0098] Another embodiment is directed to the System where the instructions further comprise identifying one or more programmable layers.

[0099] Another embodiment is directed to the System where the instructions further comprise: identifying one or more layers in the first neural network; and utilizing the identified one or more layers in the first neural network in the second neural network.

[00100] Another embodiment is directed to the System where the instructions further comprise updating one or more weights associated with the one or more layers of the first neural network.

[00101] Another embodiment is directed to a method comprising: accessing input data; accessing a neural network; identifying one or more first layers of the neural network; parsing the one or more first layers of the neural network into a fixed portion and a programmable portion; generating one or more first maps as a function of the input data and the fixed portion of the one or more first layers of the neural network; generating one or more second maps as a function of the input data and the programmable portion of the one or more first layers of the neural network; and utilizing the one or more first maps and the one or more second maps with subsequent layers of the neural network.

[00102] The various representative embodiments, which have been described in detail herein, have been presented by way of example and not by way of limitation. It will be understood by those skilled in the art that various changes may be made in the form and details of the described embodiments resulting in equivalent embodiments that remain within the scope of the appended claims.