Title:
METHOD AND SYSTEM FOR NEURAL NETWORK PROCESSING
Document Type and Number:
WIPO Patent Application WO/2024/054125
Kind Code:
A1
Abstract:
A method and system (100) for neural network processing. The system (100) includes: a parameter generator (110), configured to obtain extension parameters according to problem statement parameters of a model, and obtain a first set of input parameters by applying the extension parameters to collocation points of the model; and a neural network (120), configured to run the first set of input parameters to obtain a solution of a differential equation corresponding to the model. The extension parameters with model characteristics are used to obtain the first set of input parameters when the neural network starts to run, and thus the efficiency of solving the differential equation can be improved.

Inventors:
EGOROVA EKATERINA DMITRIEVNA (RU)
DAVYDOV DANIL VALERIEVICH (RU)
SMORKALOV MIKHAIL EVGENIEVICH (RU)
MALKHANOV ALEXEY OLEGOVICH (RU)
HUO WENBIN (RU)
Application Number:
PCT/RU2022/000270
Publication Date:
March 14, 2024
Filing Date:
September 05, 2022
Assignee:
HUAWEI TECH CO LTD (CN)
EGOROVA EKATERINA DMITRIEVNA (RU)
International Classes:
G06N3/08; G06F17/13
Other References:
DIAB W ABUEIDDA ET AL: "Enhanced physics-informed neural networks for hyperelasticity", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 24 May 2022 (2022-05-24), XP091234000
SIFAN WANG ET AL: "On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 18 December 2020 (2020-12-18), XP081841069
Attorney, Agent or Firm:
LAW FIRM "GORODISSKY & PARTNERS" LTD. (RU)
Claims:
CLAIMS

What is claimed is:

1. A system for neural network processing, comprising: a parameter generator, configured to obtain extension parameters according to problem statement parameters of a model, and obtain a first set of input parameters by applying the extension parameters to collocation points of the model; and a neural network, configured to run the first set of input parameters to obtain a solution of a differential equation corresponding to the model.

2. The system according to claim 1, wherein the parameter generator is specifically configured to: decompose the problem statement parameters using basis functions to obtain the extension parameters.

3. The system according to claim 2, wherein the parameter generator is specifically configured to: select one or more functions from the decomposed problem statement parameters as the extension parameters.

4. The system according to claim 2, wherein the parameter generator is specifically configured to: construct one or more coefficient vectors from the decomposed problem statement parameters as the extension parameters.

5. The system according to any one of claims 2 to 4, wherein the problem statement parameters comprise at least one non-zero parameter of the following parameters related to the model: boundary conditions, initial conditions, a source, or geometry.

6. The system according to any one of claims 1 to 5, wherein after the neural network runs the first set of input parameters, the parameter generator is configured to: adjust the extension parameters, and obtain a second set of input parameters by applying the adjusted extension parameters to the collocation points; and the neural network is configured to: run the second set of input parameters to obtain the solution of the differential equation corresponding to the model.

7. A method for neural network processing, comprising: obtaining extension parameters according to problem statement parameters of a model; obtaining a first set of input parameters by applying the extension parameters to collocation points of the model; and running the first set of input parameters to obtain a solution of a differential equation corresponding to the model.

8. The method according to claim 7, wherein the obtaining the extension parameters comprises: decomposing the problem statement parameters using basis functions to obtain the extension parameters.

9. The method according to claim 8, wherein the obtaining the extension parameters comprises: selecting one or more functions from the decomposed problem statement parameters as the extension parameters.

10. The method according to claim 8, wherein the obtaining the extension parameters comprises: constructing one or more coefficient vectors from the decomposed problem statement parameters as the extension parameters.

11. The method according to any one of claims 8 to 10, wherein the problem statement parameters comprise at least one non-zero parameter of the following parameters related to the model: boundary conditions, initial conditions, a source, or geometry.

12. The method according to any one of claims 7 to 11, further comprising: after running the first set of input parameters, adjusting the extension parameters; obtaining a second set of input parameters by applying the adjusted extension parameters to the collocation points; and running the second set of input parameters to obtain the solution of the differential equation corresponding to the model.

13. A computer readable storage medium having instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 7 to 12.

14. An electronic device, comprising a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to invoke the computer program from the memory and run the computer program, so that a computer on which a chip is disposed performs the method according to any one of claims 7 to 12.

15. A computer program product which, when run on a computer, causes the computer to perform the method according to any one of claims 7 to 12.

Description:
METHOD AND SYSTEM FOR NEURAL NETWORK PROCESSING

TECHNICAL FIELD

The present application relates to the field of data processing, and in particular, to a method and system for neural network processing.

BACKGROUND

Neural network technology is used in multiple fields to solve technical problems, such as solving a partial differential equation (PDE) that characterizes physics, engineering, finance, etc. The physics-informed neural network (PINN) has been proposed for its excellent performance in solving high-dimensional PDEs. The neural network is restricted to satisfy the physics imposed by the PDE and the conditions by specifying a loss function. Moreover, the efficiency of solving a PDE strongly depends on the conditions, which imposes a limitation that prevents the widespread use of PINN.

SUMMARY

Embodiments of the present application provide a method and system for neural network processing, which can improve the efficiency of solving differential equations.

In a first aspect, a system for neural network processing is provided, and the system includes: a parameter generator, configured to obtain extension parameters according to problem statement parameters of a model, and obtain a first set of input parameters by applying the extension parameters to collocation points of the model; and a neural network, configured to run the first set of input parameters to obtain a solution of a differential equation corresponding to the model.

Optionally, the differential equation is a partial differential equation, which can be used to describe the model, such as, but not limited to, a vehicle, a bridge, fluid, or any other object that can be evaluated for various characteristics thereof.

Optionally, the collocation points are sampled inside the model domain and include spatial coordinates and/or temporal coordinates.

The problem statement parameters can be used to represent conditions of the model, which are crucial for solving differential equations: without conditions, the differential equations may have an infinite number of solutions. In other words, the process of solving differential equations can be understood as the process of finding the results (such as the output of the neural network) which satisfy the equations and the conditions by training on collocation points. The problem statement parameters may also be referred to as task parameters or task setting parameters.

According to the system for neural network processing provided in the first aspect, the input of the neural network is not only the collocation points, but a set of input parameters which is obtained by applying extension parameters to the collocation points and has a higher dimension than the collocation points. The extension parameters are related to the problem statement parameters of the model. The extension parameters with model characteristics are used to obtain the set of input parameters when the neural network starts to run, and thus the efficiency of solving differential equations can be improved.

It should be noted that, the collocation points typically do not contain model characteristics. If the collocation points are used as the input of the neural network separately, more computations may be required to satisfy the equations and conditions.
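For illustration only, the following sketch (Python with NumPy; the function and variable names are hypothetical and not taken from the application) shows how extension functions derived from the problem statement parameters might be applied to collocation points to form the higher-dimensional NN input described above:

```python
import numpy as np

def extend_input(points, extension_fns):
    """Apply extension functions to collocation points.

    points: array of shape (n, 2) holding (x, t) collocation points.
    extension_fns: callables g_i(x, t) derived from the problem
        statement parameters (e.g. Fourier features of the conditions).
    Returns an array of shape (n, 2 + N): [x, t, g_1(x, t), ..., g_N(x, t)].
    """
    x, t = points[:, 0], points[:, 1]
    features = [fn(x, t) for fn in extension_fns]
    return np.column_stack([x, t, *features])

# Collocation points sampled uniformly inside the domain, extended
# with two illustrative trigonometric features.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(1000, 2))
fns = [lambda x, t: np.sin(2 * np.pi * x),
       lambda x, t: np.cos(2 * np.pi * t)]
inputs = extend_input(pts, fns)  # shape (1000, 4)
```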

In some implementation manners, the parameter generator is configured to decompose the problem statement parameters related to the model using basis functions to obtain the extension parameters.

It should be noted that, the extension parameters are obtained by decomposing the problem statement parameters, which contain the problem specific features of the model, and thus the efficiency of solving differential equations can be improved.

In some implementation manners, the parameter generator is configured to select one or more functions from the decomposed problem statement parameters as the extension parameters.

Optionally, the parameter generator is configured to select one or more functions according to contribution.

It should be noted that, the accuracy of the solution strongly depends on the formulation of the model. In most cases, the model should be handcrafted on a case-by-case basis to achieve acceptable accuracy, which makes the physics-informed neural network not very scalable. In the present disclosure, the selected one or more functions, as a part of the neural network input, can reduce the need for such handcrafting, and thus the system can achieve acceptable accuracy for different cases.

For example, the selected N functions are the functions with maximal factorization coefficients among the functions from the decomposed problem statement parameters, where N is greater than or equal to 1.

In some implementation manners, the parameter generator is configured to construct one or more coefficient vectors from the decomposed problem statement parameters as the extension parameters.

It can be understood that, the coefficient vectors take the form of amplitudes multiplied by basis functions. Moreover, the coefficient vectors are obtained by decomposing the problem statement parameters, so the coefficient vectors show how closely the resulting approximation matches the original function.

It should be noted that, the neural network usually needs to be re-trained when the model statement changes, and the re-training process may take a lot of time. In the present disclosure, the constructed coefficient vectors, as a part of the neural network input, can avoid the re-training process or reduce the time of the re-training process.

In some implementation manners, the problem statement parameters include at least one nonzero parameter of the following parameters related to the model: boundary conditions, initial conditions, a source, or geometry.

It should be noted that, boundary conditions, initial conditions, a source, or geometry can be used to approximate the solution of the model, and thus the efficiency of solving differential equations can be improved.

It should be noted that, there is no limitation on the basis functions which can be used to expose the problem statement parameters, nor on the manner of decomposition. For example, the basis functions may be trigonometric functions, polynomial functions, or Bessel functions, but are not limited thereto. For example, the trigonometric function decomposition can be done by the Fourier transform, and the polynomial function decomposition can be performed by approximating the problem statement parameters with polynomials of a given degree. Therefore, the dimensionality of the neural network input can be extended, and the efficiency of solving differential equations can be improved.
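As a hedged sketch of these two decompositions (Python with NumPy; the helper names are illustrative assumptions, not the application's implementation), sampled problem statement parameters could be decomposed as follows:

```python
import numpy as np

def fourier_decompose(values, n_terms):
    """Trigonometric decomposition of sampled problem statement
    parameters (e.g. a boundary condition) via the Fourier transform.
    Returns the lowest n_terms frequencies and their complex
    amplitudes (the factorization coefficients)."""
    coeffs = np.fft.rfft(values)
    freqs = np.fft.rfftfreq(len(values))
    return freqs[:n_terms], coeffs[:n_terms]

def polynomial_decompose(xs, values, degree):
    """Polynomial decomposition: approximate the sampled parameters
    with a polynomial of the given degree; the coefficient vector
    [a_0, ..., a_degree] can then serve as extension parameters."""
    return np.polynomial.polynomial.polyfit(xs, values, degree)

xs = np.linspace(0.0, 1.0, 256)
bc = np.sin(2 * np.pi * xs) + 0.3 * np.cos(6 * np.pi * xs)  # sampled BC
freqs, amps = fourier_decompose(bc, n_terms=8)
poly_coeffs = polynomial_decompose(xs, bc, degree=5)
```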

In some implementation manners, after the neural network runs the first set of input parameters, the parameter generator is configured to: adjust the extension parameters, and obtain a second set of input parameters by applying the adjusted extension parameters to the collocation points; and the neural network is configured to: run the second set of input parameters to obtain the solution of the differential equation corresponding to the model.

For example, the parameter generator can fine-tune the extension parameters, e.g. the functions or the coefficient vectors, which are applied to the collocation points and therefore change parameters which extend the dimensionality of the neural network.

It should be noted that, the parameter generator could adjust the extension parameters several times, that is, the neural network could run the adjusted set of input parameters several times during training. There is no limitation on the number of times the parameter generator adjusts the extension parameters.

It should be noted that, the system can adjust the extension parameters during the running process, and the accuracy and efficiency of solving differential equations can be improved.

Hereinafter, a second aspect provides a method for neural network processing corresponding to the system for neural network processing in the first aspect. For the content that is not described in detail, reference may be made to the above system embodiments in the first aspect, which will not be repeated redundantly here.

In the second aspect, a method for neural network processing is provided, and the method includes: obtaining extension parameters according to problem statement parameters of a model; obtaining a first set of input parameters by applying the extension parameters to collocation points of the model; and running the first set of input parameters to obtain a solution of a differential equation corresponding to the model.

The method may be executed by the system for neural network processing provided in the first aspect.

It should be understood that, the input of a neural network includes not only spatial coordinates, but also the extension parameters. Moreover, the extension parameters are related to the problem statement parameters of the model. The extension parameters with model characteristics are used as the input of the neural network when the neural network starts to run, and thus the efficiency of solving differential equations can be improved.


In some implementation manners, the obtaining the extension parameters includes: decomposing the problem statement parameters using basis functions to obtain the extension parameters.

In some implementation manners, the obtaining the extension parameters includes: selecting one or more functions from the decomposed problem statement parameters as the extension parameters.

In some implementation manners, the obtaining the extension parameters includes: constructing one or more coefficient vectors from the decomposed problem statement parameters as the extension parameters.

In some implementation manners, the problem statement parameters include at least one non-zero parameter of the following parameters related to the model: boundary conditions, initial conditions, a source, or geometry.

In some implementation manners, the method further includes: after running the first set of input parameters, adjusting the extension parameters; obtaining a second set of input parameters by applying the adjusted extension parameters to the collocation points; and running the second set of input parameters to obtain the solution of the differential equation corresponding to the model.

For the relevant explanations and beneficial effects of the method for neural network processing provided in the second aspect, corresponding reference may be made to the description in the first aspect, which will not be repeated redundantly here.

Hereinafter, a third aspect provides a device for neural network processing corresponding to the system for neural network processing in the first aspect. For the content that is not described in detail, reference may be made to the above system embodiments in the first aspect, which will not be repeated redundantly here.

In the third aspect, a device for neural network processing is provided, and the device includes: a decomposition unit configured to decompose problem statement parameters related to a model using basis functions; a selection unit configured to select one or more functions from the decomposed problem statement parameters as extension parameters; and a generation unit configured to obtain a first set of input parameters by applying the extension parameters to collocation points of the model.

Hereinafter, a fourth aspect provides a device for neural network processing corresponding to the system for neural network processing in the first aspect. For the content that is not described in detail, reference may be made to the above system embodiments in the first aspect, which will not be repeated redundantly here.

In the fourth aspect, a device for neural network processing is provided, and the device includes: a decomposition unit configured to decompose problem statement parameters related to a model using basis functions; a selection unit configured to select one or more coefficient vectors from the decomposed problem statement parameters as extension parameters; and a generation unit configured to obtain a first set of input parameters by applying the extension parameters to collocation points of the model.

In a fifth aspect, an embodiment of the present application provides an electronic device, and the electronic device has a function of implementing the method in the second aspect. The function may be implemented by hardware, or may be implemented by hardware executing the corresponding software. The hardware or the software includes one or more modules corresponding to the function.

In a sixth aspect, an embodiment of the present application provides a computer readable storage medium having instructions which, when run on a computer, cause the computer to perform the method in the second aspect or any possible implementation manner of the second aspect.

In a seventh aspect, an electronic device is provided, and the electronic device includes a processor and a memory. The processor is connected to the memory. The memory is configured to store instructions, and the processor is configured to execute the instructions. When the processor executes the instructions stored in the memory, the processor is caused to perform the method in the second aspect or any possible implementation manner of the second aspect.

In an eighth aspect, a chip system is provided, and the chip system includes a memory and a processor, where the memory is configured to store a computer program, and the processor is configured to invoke the computer program from the memory and run the computer program, so that a server on which a chip is disposed performs the method in the second aspect or any possible implementation manner of the second aspect.

In a ninth aspect, an embodiment of the present application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the method in the second aspect or any possible implementation manner of the second aspect.

Based on the above description, according to the method, device and system for neural network processing provided in the embodiments of the present disclosure, the efficiency of solving differential equations can be improved.

BRIEF DESCRIPTION OF DRAWINGS

One or more embodiments are exemplarily described by corresponding accompanying drawings, and these exemplary illustrations and accompanying drawings constitute no limitation on the embodiments. Elements with the same reference numerals in the accompanying drawings are illustrated as similar elements, and the drawings are not limited to scale, in which:

FIG. 1 is a schematic block diagram of a system for neural network processing provided in an embodiment of the present disclosure;

FIG. 2 is a schematic block diagram of execution by a parameter generator provided in an embodiment of the present disclosure;

FIG. 3 is a schematic block diagram of a parameter generator provided in an embodiment of the present disclosure;

FIG. 4 is a schematic block diagram of execution in a first way by a parameter generator provided in an embodiment of the present disclosure;

FIG. 5 is a schematic block diagram of execution in a second way by a parameter generator provided in an embodiment of the present disclosure;

FIG. 6 is a schematic flowchart of a method for neural network processing provided in an embodiment of the present disclosure;

FIG. 7 is a schematic flowchart of a method implemented in a first way for neural network processing provided in an embodiment of the present disclosure; and

FIG. 8 is a schematic flowchart of a method implemented in a second way for neural network processing provided in an embodiment of the present disclosure.

DESCRIPTION OF EMBODIMENTS

In order to understand the features and technical contents of the embodiments of the present disclosure in detail, implementations of the embodiments of the present disclosure will be described below with reference to the accompanying drawings; the attached drawings are for reference and illustration purposes only, and are not intended to limit the embodiments of the present disclosure. In the following technical descriptions, for ease of explanation, numerous details are set forth to provide a thorough understanding of the disclosed embodiments. One or more embodiments, however, may be practiced without these details. In other cases, well-known structures and apparatuses may be shown in simplified form in order to simplify the drawings.

1. Differential equations

Multiple technical problems can be modeled and analyzed with differential equations. One example differential equation is a partial differential equation (PDE). A PDE can be used to describe or model a variety of objects, such as, but not limited to, a vehicle, a bridge, fluid, or any other object that can be evaluated for various characteristics thereof. A PDE is a differential equation that contains multivariable functions and partial derivatives thereof.

For example, a PDE for a function u(x), x = (x_1, ..., x_d), and boundary conditions (BCs) can be described as the following functions:

$$f\!\left(x;\ \frac{\partial u}{\partial x_1}, \ldots, \frac{\partial u}{\partial x_d};\ \frac{\partial^2 u}{\partial x_1\,\partial x_1}, \ldots\right) = 0, \qquad x \in \Omega,$$

$$B(u, x) = 0, \qquad x \in \partial\Omega,$$

where ∂Ω is the boundary of the domain Ω. It should be noted that initial conditions (ICs) can be regarded as a special type of boundary conditions.

It is understandable that the problem statement can also include a source, geometry, etc. There is no specific limitation on this in the embodiments of the present disclosure.

2. Physics informed neural network (PINN)

PINN can be applied to different types of PDEs. PINN embeds a PDE into the loss of the neural network (NN) using automatic differentiation. A neural network u(x; θ) with parameters θ can be constructed to perform automatic differentiation. u(x; θ) can be seen as a surrogate of the solution u(x), which takes the input x and outputs a vector with the same dimension as u. Here, θ = {(W^l, b^l)}, 1 ≤ l ≤ L, is the set of all weight matrices and bias vectors in the neural network u. The main task of PINN is to find a neural network satisfying both the PDE and the conditions, which include boundary conditions, initial conditions, a source, geometry, etc. Two training sets T_f and T_b are constructed for the differential operator and the conditions, which leads to the appearance of two terms in the loss function. The differential operator is trained to be close to zero at the collocation points (points from inside the domain), and the neural network approximation of the function should be close to the conditions, where the loss function can be described as:

$$\mathcal{L}(\theta; \mathcal{T}) = w_f\,\mathcal{L}_f(\theta; \mathcal{T}_f) + w_b\,\mathcal{L}_b(\theta; \mathcal{T}_b),$$

$$\mathcal{L}_f(\theta; \mathcal{T}_f) = \frac{1}{|\mathcal{T}_f|} \sum_{x \in \mathcal{T}_f} \left\| f\!\left(x;\ \frac{\partial u}{\partial x_1}, \ldots\right) \right\|_2^2, \qquad \mathcal{L}_b(\theta; \mathcal{T}_b) = \frac{1}{|\mathcal{T}_b|} \sum_{x \in \mathcal{T}_b} \left\| B(u, x) \right\|_2^2,$$

where w_f and w_b are the term weights. The neural network u is restricted to satisfy the physics imposed by the PDE and the conditions by specifying this loss function. Then, a good θ can be found by minimizing the loss function L.
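For context, a minimal two-term PINN loss of this form can be sketched as follows (Python with PyTorch; the network architecture, the illustrative residual u_t + u·u_x, and the unit weights are assumptions for demonstration, not the application's method):

```python
import torch

net = torch.nn.Sequential(            # surrogate u(x, t; theta)
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

def pde_residual(xt):
    """Residual f for an illustrative PDE u_t + u * u_x = 0."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u),
                                create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    return u_t + u * u_x

def pinn_loss(xt_f, xt_b, u_b, w_f=1.0, w_b=1.0):
    """L = w_f * L_f + w_b * L_b: residual term over collocation
    points T_f plus condition term over boundary points T_b."""
    loss_f = pde_residual(xt_f).pow(2).mean()
    loss_b = (net(xt_b) - u_b).pow(2).mean()
    return w_f * loss_f + w_b * loss_b

xt_f = torch.rand(512, 2)                                        # T_f
xt_b = torch.cat([torch.rand(64, 1), torch.zeros(64, 1)], dim=1)  # t = 0
u_b = torch.sin(torch.pi * xt_b[:, 0:1])   # illustrative condition
loss = pinn_loss(xt_f, xt_b, u_b)
loss.backward()                            # gradients for minimizing L
```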

The efficiency of PINN strongly depends on construction of the NN-PDE system. The formulations of the problem (on the ICs, BCs, a source, geometry, etc.) have a strong influence on the accuracy of the solution. Moreover, the training procedure also has a large impact on efficiency.

Therefore, a method and system for neural network processing are disclosed in this application, which can improve the efficiency of solving differential equations.

FIG. 1 illustrates a PINN system 100 for solving a differential equation according to an example embodiment. As shown in FIG. 1, the system 100 includes a parameter generator 110 and a NN 120.

The parameter generator 110 is configured to obtain extension parameters according to problem statement parameters of a model, and apply the extension parameters to collocation points of the model to obtain a first set of input parameters.

The NN 120 is configured to run the first set of input parameters to obtain a solution of a differential equation corresponding to the model.

In general, the PINN system 100 solves the differential equation with the help of neural networks using a physics-informed approach. The dimensionality of the NN input can be automatically extended by adding the extension parameters related to the problem statement parameters of the task which is to be solved. Thereby, the efficiency of solving the differential equation can be improved.

With reference to FIG. 1, optionally, the differential equation is a partial differential equation, which can be used to describe the model, such as, but not limited to, a vehicle, a bridge, fluid, or any other object that can be evaluated for various characteristics thereof. The model can be of various types that can be analyzed with one or more PDEs. The partial differential equation corresponding to the model can represent technical problems about the model. For example, methods of the present disclosure can be evaluated for various characteristics of physical problems such as movement, displacement, variation, or any change in sound, heat, electrostatics, fluid flows, elasticity, quantum mechanics, heat transfer, and other dynamic characteristics of the models, but this is not limited thereto.

The problem statement parameters, such as, but not limited to, boundary conditions, initial conditions, a source, or geometry, can be used to interpret various characteristics of the model. For example, when a rigid body is analyzed, the position of the rigid body is described by six variables, and the dynamics of the rigid body take place in a finite-dimensional configuration space. When fluid is modeled, a configuration of the fluid occurs in an infinite-dimensional configuration space.

The parameter generator 110 can generate collocation points from spatial and time coordinates and obtain extension parameters according to the problem statement parameters.

It should be understood that, the input of the neural network is not only the collocation points, but a set of input parameters which is obtained by applying extension parameters to the collocation points, and has a higher dimension than the collocation points. The extension parameters are related to the problem statement parameters of the model. The extension parameters with model characteristics are used to obtain the set of input parameters when the neural network starts to run, and thus the efficiency of solving differential equations can be improved.

Optionally, after the NN 120 runs the first set of input parameters, the parameter generator 110 is configured to adjust the extension parameters, and apply the adjusted extension parameters to the collocation points to obtain a second set of input parameters; and the NN 120 is configured to run the second set of input parameters to obtain the solution of the differential equation corresponding to the model.

Optionally, after one or more iterations of NN processing, the NN 120 can generate a set of output parameters, and the set of output parameters can be used for updating the input of the NN 120 during the solving process. For example, the parameter generator can fine-tune the extension parameters, e.g. the functions or the coefficient vectors, which are applied to the collocation points, and therefore change the parameters which extend the dimensionality of the neural network input.

It should be noted that, the parameter generator could adjust the extension parameters several times, that is, the neural network could run the adjusted set of input parameters several times during training. There is no limitation on the number of times the parameter generator adjusts the extension parameters.

It should be noted that, the system can adjust the extension parameters during the running process, and the accuracy and efficiency of solving differential equations can be improved.

For example, the parameter generator 110 can correct the first set of input parameters into a new second set of input parameters according to the accuracy of the output parameters. For another example, the parameter generator 110 can add new parameters to the first set of input parameters to generate a new set of input parameters according to the accuracy of the output parameters. Therefore, the system 100 can fine-tune the extension parameters during the running process, and the accuracy and efficiency of solving the differential equation can be improved.
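A minimal sketch of this outer adjustment loop follows (Python; `train_nn` and `make_inputs` are hypothetical callables standing in for the NN 120 and the parameter generator 110, and the accuracy-driven policy shown is only one possible adjustment rule):

```python
def solve_with_adjustment(train_nn, make_inputs, candidate_fns,
                          target_accuracy=0.9, max_rounds=10):
    """Illustrative outer loop: start from an initial set of extension
    parameters, run the NN, and keep adjusting (here: adding one more
    candidate function) until the solution is accurate enough.
    train_nn and make_inputs are assumed to be supplied by the
    surrounding system; they are not defined in the application."""
    selected = [candidate_fns[0]]
    for _ in range(max_rounds):
        inputs = make_inputs(selected)          # apply extensions to points
        accuracy = train_nn(inputs)             # run NN, measure accuracy
        if accuracy >= target_accuracy:
            break
        if len(selected) < len(candidate_fns):  # adjust: extend the input
            selected.append(candidate_fns[len(selected)])
    return selected
```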

Hereinafter, the manner of generating the first set of input parameters by the parameter generator 110 is further disclosed.

FIG. 2 is a schematic block diagram of parameter generator 110 processing provided in an embodiment of the present disclosure.

The parameter generator 110 can extend the dimensionality of the NN 120 input by generating a first set of input parameters, which is obtained by applying the extension parameters to the collocation points rather than using the collocation points alone.

Generally, the spatial coordinates (x, t) can be randomly sampled in a spatial domain according to a uniform distribution. There is no specific limitation on this in an embodiment of the present disclosure.

There are two main ways to obtain the extension parameters, depending on what these parameters represent. In the first way, the extension parameters can be obtained based on the basis functions themselves applied to the collocation points; in the second way, the extension parameters can be obtained based on the coefficients of the decomposition into the basis functions. Acceptable accuracy for different problem settings can be achieved by resolving the problem of automated neural network model construction in the first way. Moreover, the generalization capabilities can be improved by resolving the problem of automated selection of coefficients to parametrize the problem. The two ways can be executed by the parameter generator 110. For example, FIG. 3 is a block diagram of an example of the parameter generator 110 illustrated and described with reference to FIG. 1 and FIG. 2.

As shown in FIG. 3, some examples of the parameter generator 110 include a decomposition unit 111, a selection unit 112, and a generation unit 113.

When the parameter generator 110 is applied in the first way, the decomposition unit 111 can be configured to decompose the problem statement parameters by the basis functions, where the problem statement parameters include at least one non-zero parameter of the following parameters: boundary conditions, initial conditions, a source, or geometry. The problem statement parameters can also include any other problem parameters which describe the specific features of the model. It should be noted that, the problem statement parameters can be used to approximate the solution of the problem, and thus the efficiency of solving differential equations can be improved.

It should be noted that, there is no limitation on the basis functions which can be used to expose the problem statement parameters, nor on the manner of decomposition. For example, the basis functions can be trigonometric functions, polynomial functions, Bessel functions, etc. Optionally, the trigonometric function decomposition can be done by the Fourier transform, and the polynomial function decomposition can be performed by approximating the problem setting parameters with polynomials of a given degree. It should be noted that, trigonometric functions, polynomial functions, and Bessel functions can all be used to expose the problem statement parameters, thus extending the dimensionality of the neural network input, and the efficiency of solving differential equations can be improved.

The selection unit 112 can be configured to select one or more functions from the decomposed problem statement parameters as the extension parameters.

Optionally, the selection unit 112 is configured to select one or more functions according to contribution.

For example, the selection unit 112 selects N functions according to their contribution to the approximation (f_1(x, t), f_2(x, t), ..., f_N(x, t)), where N is a positive integer.

For example, the selected N functions are functions with maximal factorization coefficients among the functions from decomposed problem statement parameters.

Optionally, in the first experiment, N can be equal to 1 or 2; alternatively, the number of functions can be selected so as to approximate the problem statement with the required accuracy. For example, if the accuracy target for the BC approximation is 95% and ten basis functions are enough to reach it, then N will be equal to 10. Thereby, the efficiency of solving differential equations can be improved.

Optionally, after each training, the accuracy of the output is checked. If the accuracy is less than a preset value (e.g. 90%), another M functions are selected according to the factorization coefficients, where M is a positive integer, for example, 1 or 2.

For example, in the first experiment, the selection unit 112 selects N functions; after the approximation, the selection unit 112 selects the functions with maximal factorization coefficients, starts with one function, and checks the accuracy. If the accuracy is less than 90%, one more function is added, the accuracy is checked again, and so on (sketched below).

The generation unit 113 can be configured to extend the dimensionality of the NN input with the selected one or more functions applied to the collocation points.
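The following sketch (Python with NumPy, assuming a Fourier decomposition; the 90% target and the spectral-energy accuracy proxy are illustrative assumptions) shows selecting the functions with maximal factorization coefficients and growing the set one function at a time:

```python
import numpy as np

def select_top_functions(values, n):
    """Select the n basis functions with maximal factorization
    coefficients from a Fourier decomposition of sampled problem
    statement parameters; cosine counterparts could be added
    analogously. Also returns the fraction of spectral energy kept,
    used below as a stand-in for the accuracy check."""
    coeffs = np.fft.rfft(values)
    freqs = np.fft.rfftfreq(len(values))
    order = np.argsort(np.abs(coeffs))[::-1]       # largest amplitude first
    kept = (np.abs(coeffs[order[:n]]) ** 2).sum()
    accuracy = kept / (np.abs(coeffs) ** 2).sum()
    fns = [lambda x, t, f=freqs[k]: np.sin(2 * np.pi * f * x)
           for k in order[:n]]
    return fns, accuracy

# Start with one function and add one at a time until the target is met.
xs = np.linspace(0.0, 1.0, 256)
bc = np.sin(2 * np.pi * xs) ** 2                   # sampled condition
n = 1
fns, acc = select_top_functions(bc, n)
while acc < 0.9 and n < len(bc) // 2:
    n += 1
    fns, acc = select_top_functions(bc, n)
```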

For example, the first set of input parameters includes x′, t′, g_1(x′, t′), g_2(x′, t′), ..., g_N(x′, t′), where the points (x′, t′) are the coordinates, and g_1(x′, t′), g_2(x′, t′), ..., g_N(x′, t′) are the selected N functions.

It should be noted that, the accuracy of the solution strongly depends on the formulation of the model. In most cases, the model should be handcrafted on a case-by-case basis to achieve better accuracy, which makes the physics-informed neural network not very scalable. In the present disclosure, the selected one or more functions, as a part of the neural network input, can reduce the need for such handcrafting, and thus the system can achieve acceptable accuracy for different cases.

When the parameter generator 110 is applied in the second way, the decomposition unit 111 can be configured to decompose the problem statement parameters by the basis functions.

The decomposition process is similar to that of the first way, which will not be repeated redundantly here.

The selection unit 112 can be configured to construct one or more coefficient vectors from the decomposed problem statement parameters as the extension parameters.

It can be understood that, the coefficient vectors consist of the amplitudes obtained by the decomposition into basis functions. Moreover, the coefficient vectors are obtained by decomposing the problem statement parameters, so the coefficient vectors show how closely the resulting approximation matches the original function. For example, if the solution is to be generalized over different problem statement parameters, then several coefficient vectors can be obtained, which look like [(x, a_1), (x, a_2), ..., (x, a_n)], where x = x_1, x_2, ..., x_n are the collocation points and a_n = (a_{n1}, a_{n2}, ..., a_{nm}) are the coefficient vectors, but this is not limited thereto.

For example, the selection unit 112 can construct a set of coefficient vectors (for instance, (P_1, P_2, ..., P_N)) corresponding to the basis functions for different problem statements (which are used to parametrize the model), and construct collocation points (x_1, t_1; x_2, t_2; ...; x_N, t_N) for each vector.

The generation unit 113 is configured to extend the NN input with the one or more coefficient vectors.

It should be noted that, the neural network usually needs to be re-trained when the problem statement changes, and the re-training process may take a lot of time. In the present disclosure, the constructed coefficient vectors, as a part of the neural network input, can avoid the re-training process or reduce the time of the re-training process.
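As an illustration of this second way (Python with NumPy; the shapes and names are hypothetical), a parameterized training set could be assembled by appending one coefficient vector per problem statement to the collocation points:

```python
import numpy as np

def build_parameterized_dataset(points, coeff_vectors):
    """Append a coefficient vector (amplitudes from the basis-function
    decomposition of one problem statement) to every collocation point,
    and stack the results for all statements. Each row has the form
    [x, t, a_0, a_1, ..., a_N]."""
    rows = []
    for a in coeff_vectors:
        tiled = np.tile(a, (len(points), 1))      # same vector per point
        rows.append(np.hstack([points, tiled]))
    return np.vstack(rows)

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(100, 2))        # (x, t) points
coeffs = [np.array([1.0, 0.5, 0.1]),              # statement 1
          np.array([0.8, 0.3, 0.2])]              # statement 2
dataset = build_parameterized_dataset(pts, coeffs)  # shape (200, 5)
```

Because the problem statement enters the input only through the appended coefficients, a new statement can reuse the trained network with a new coefficient vector instead of requiring full re-training.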

Furthermore, it should be noted that basis functions such as, but not limited to, trigonometric functions and Chebyshev polynomials can be used to decompose the problem statement parameters.

For example, the Fourier transform allows one to take essentially arbitrary (complex-valued) functions, which in this case correspond to the problem statement parameters (such as initial conditions, boundary conditions, a source, or geometry) of the model, and decompose functions depending on space or time into functions depending on spatial or temporal frequencies. That helps to approximate the problem statement with a sequence of trigonometric functions with different frequencies and amplitudes. The corresponding interpolation polynomial minimizes the Runge phenomenon and provides the best consistent approximation of the polynomial over continuous functions.
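For illustration, a problem statement parameter b(x) sampled on the boundary can be approximated by a truncated Fourier series of this kind (a standard textbook form, added here only as an example):

$$b(x) \approx \frac{a_0}{2} + \sum_{k=1}^{N} \left[ a_k \cos(2\pi f_k x) + b_k \sin(2\pi f_k x) \right],$$

where the amplitudes a_k, b_k and the frequencies f_k are the factorization coefficients from which the extension parameters are drawn.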

As an example of the first way applied to the decomposition unit 111, FIG. 4 illustrates the parameter generator 110 applied in the first way.

For example, the famous Korteweg–De Vries equation from hydroacoustics is used to verify the system for neural network processing provided in an embodiment of the present disclosure. The Korteweg–De Vries equation for shallow-water waves can be expressed as:

$$\frac{\partial u}{\partial t} + \sqrt{gh}\,\frac{\partial u}{\partial x} + \frac{3}{2}\sqrt{\frac{g}{h}}\,u\,\frac{\partial u}{\partial x} + \frac{h^{2}}{6}\sqrt{gh}\,\frac{\partial^{3} u}{\partial x^{3}} = 0,$$

where the function u is the surface elevation that can be used to represent the cnoidal waves on the water surface, h is the water depth, and g represents the value of Earth's gravity.

The problem statement parameters (initial conditions, boundary conditions, and a source) can be expressed as:

$$u(x, t) = u_0 + H\,\operatorname{cn}^{2}\!\left(\frac{x - ct}{\Delta};\, m\right), \qquad \text{source: } 0,$$

where cn is the Jacobi elliptic function, u_0 is the trough level, Δ is the width parameter, m is the elliptic parameter, c is the phase speed, and H is the wave height.

The problem statement parameters are decomposed by the Fourier transform into: sin(2πf_{x1}x), cos(2πf_{x1}x), ..., sin(2πf_{t1}t), cos(2πf_{t1}t), .... Therefore, the input of the NN includes the extension parameters obtained by the Fourier decomposition.

Calculation proves that the gain in accuracy can be up to 35× in the first way, compared to the approach without extension parameters as the NN input. Moreover, the first way improves the accuracy without increasing the computation time.

As an example of the second way applied to the decomposition unit 111, FIG. 5 illustrates the parameter generator 110 applied in the second way.

For another example, the famous Korteweg–De Vries equation with a soliton solution is used to verify the system for neural network processing provided in an embodiment of the present disclosure. The Korteweg–De Vries equation with a soliton solution can be expressed as:

$$\frac{\partial u}{\partial t} + \frac{\partial^{3} u}{\partial x^{3}} - 6u\frac{\partial u}{\partial x} = 0,$$

where the problem statement parameters (initial conditions, boundary conditions, and a source) can be expressed as:

$$u(x, t) = -\frac{a}{2}\,\operatorname{sech}^{2}\!\left(\frac{\sqrt{a}}{2}\,(x - at - b)\right), \qquad \text{source: } 0,$$

where a and b are independent parameters that affect the appearance of the boundary and initial conditions on which the problem is parameterized.

The set of problem statement parameters (for different a, b) is decomposed by polynomial approximation into: a_0, a_1 x, a_2 x^2, ..., a_N x^N, where the coefficient vectors a_0, a_1, a_2, ..., a_N are the extension parameters. So one input of the NN is [x, t, a_{01}, a_{11}, ..., a_{N1}], and another is [x, t, a_{0m}, a_{1m}, ..., a_{Nm}], where x and t can be the same in all inputs, and the NN is trained on this whole dataset.

Calculation proves that the innovation gives a gain in training speed to the desired accuracy (an improvement of up to 100×) and better accuracy than the original approach (this application gives up to 3× better accuracy for RMSE and 2.7× better accuracy for relative error).

The device embodiments are described above, and the method embodiment will be described below. It should be understood that the description of the method embodiments corresponds to the description of the device embodiments. Therefore, for the content that is not described in detail, reference may be made to the above device embodiments, which will not be repeated redundantly here for brevity.

FIG. 6 illustrates a process flow diagram of a method for neural network processing in accordance with embodiments of the disclosure. The method is applied to the system 100 for neural network processing provided in the above embodiments; the system 100 includes a parameter generator and a NN, and the method includes the following steps:

S610, obtaining extension parameters according to problem statement parameters of a model;

S620, obtaining a first set of input parameters by applying the extension parameters to collocation points of the model; and

S630, running the first set of input parameters to obtain a solution of a differential equation corresponding to the model.

Optionally, the differential equation is a partial differential equation, which can be used to describe the model, such as, but not limited to, a vehicle, a bridge, fluid, or any other object that can be evaluated for various characteristics thereof.

Optionally, the obtaining the extension parameters includes: decomposing the problem statement parameters using basis functions to obtain the extension parameters.

Optionally, the obtaining the extension parameters includes: selecting one or more functions from the decomposed problem statement parameters as the extension parameters.

Optionally, the obtaining the extension parameters includes: constructing one or more coefficient vectors from the decomposed problem statement parameters as the extension parameters.

Optionally, the problem statement parameters include at least one non-zero parameter of the following parameters related to the model: boundary conditions, initial conditions, a source, or geometry.

Optionally, the method further includes: after running the first set of input parameters, adjusting, the extension parameters; obtaining a second set of input parameters by applying the adjusted extension parameters to the collocation points; and running the second set of input parameters to obtain the solution of the differential equation corresponding to the model.

It should be understood that, the input of the neural network is not only the collocation points, but a set of input parameters which is obtained by applying extension parameters to the collocation points, and has a higher dimension than the collocation points. The extension parameters are related to the problem statement parameters of the model. The extension parameters with model characteristics are used to obtain the set of input parameters when the neural network starts to run, and thus the efficiency of solving differential equations can be improved.

In the embodiment shown in FIG. 6, the parameter generator is the same as the parameter generator 110 in the system 100 in the above embodiments, the NN is the same as the NN 120 in the system 100 in the above embodiments, and reference is made to the above description for the relevant contents, which will not be repeated redundantly here.

FIG. 7 illustrates a process flow diagram of a method for neural network processing in accordance with embodiments of the disclosure. The method is applied to the system 100 for neural network processing provided in the above embodiments; the system 100 includes a parameter generator and a NN, and the method includes the following steps.

S710, problem statement parameters related to a model are obtained by a parameter generator.

S720, the problem statement parameters are decomposed using basis functions by the parameter generator.

S730, one or more functions are selected by the parameter generator as extension parameters.

S740, collocation points of the model are obtained by the parameter generator.

S750, a first set of input parameters is obtained by applying the extension parameters to the collocation points of the model.

S760, the first set of input parameters is run by a NN to obtain the solution of a differential equation corresponding to the model.

Optionally, S770, the NN determines whether additional fine-tuning is required according to the solution obtained in S760. For example, if the accuracy of the solution satisfies a preset condition, the additional fine-tuning is not required. If the accuracy of the solution does not satisfy the preset condition, the additional fine-tuning is required. Moreover, if the additional fine-tuning is required, the process returns to S720, that is, transferring the preliminary solution to the parameter generator.

For example, if the NN determines that the additional fine-tuning is required, the parameter generator may regenerate the input parameters, that is, a second set of input parameters is obtained by applying the adjusted extension parameters to the collocation points, and the second set of input parameters is run by the NN to obtain the solution of the differential equation corresponding to the model.

It should be noted that S720 to S770 can be performed multiple times. The parameter generator could adjust the extension parameters several times, that is, the neural network could run the adjusted set of input parameters several times during training. There is no limitation on the number of times the parameter generator adjusts the extension parameters.

It should be noted that, the accuracy of the solution strongly depends on the formulation of the model. In most cases, the model should be handcrafted on a case-by-case basis to achieve acceptable accuracy, which makes the physics informed neural network not very scalable. In the present disclosure, the selected one or more functions, as a part of the neural network input, can reduce the need for handcraft situation, and thus the system can achieve acceptable accuracy for different cases.

In the embodiment shown in FIG. 7, the parameter generator is the same as the parameter generator 110 in the system 100 in the above embodiments, the NN is the same as the NN 120 in the system 100 in the above embodiments, and reference is made to the above description for the relevant contents, which will not be repeated redundantly here.

FIG. 8 illustrates a process flow diagram of a method for neural network processing in accordance with embodiments of the disclosure. The method is applied to the system 100 for neural network processing provided in the above embodiments; the system 100 includes a parameter generator and a NN, and the method includes the following steps.

S810, problem statement parameters related to a model are sent to the parameter generator.

S820, the problem statement parameters are decomposed using basis functions by the parameter generator.

S830, one or more coefficient vectors are constructed by the parameter generator as extension parameters.

S840, collocation points of the model are obtained by the parameter generator.

S850, a first set of input parameters is obtained by applying the extension parameters to the collocation points of the model.

S860, the first set of input parameters is run by a NN to obtain a solution of a differential equation corresponding to the model.

Optionally, S870, the NN determines whether additional fine-tuning is required according to the solution obtained in S860. If the additional fine-tuning is required, the process returns to S820, that is, transferring the preliminary solution to the parameter generator.

Similarly, S820 to S870 can be performed multiple times, which will not be repeated redundantly here.

In the embodiment shown in FIG. 8, the parameter generator is the same as the parameter generator 110 in the system 100 in the above embodiments, the NN is the same as the NN 120 in the system 100 in the above embodiments, and reference is made to the above description for the relevant contents, which will not be repeated redundantly here.

The terms used in the present application are merely used to describe the embodiments and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term “and/or” used in the present application indicates and includes any or all possible combinations of one or more associated listed items. In addition, the terms “comprise” (include) and its variations “comprises” (includes) and/or “comprising” (including), when used in the present application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

The various aspects, embodiments, implementations or features in the described embodiments can be used separately or in any combination. Various aspects in the described embodiments may be implemented by software, hardware, or a combination of software and hardware. The described embodiments may also be embodied by a computer-readable medium having stored thereon computer-readable code including instructions executable by at least one computing apparatus. The computer-readable medium may be associated with any data storage apparatus that can store data which can be read by a computer system. Examples of the computer readable medium may include a read-only memory, a random-access memory, CDROMs, HDDs, DVDs, magnetic tape, and optical data storage apparatuses. The computer-readable medium can also be distributed in network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.

The foregoing technical description may be made with reference to the accompanying drawings, which form a part of the present application, and in which, by way of description, implementations in accordance with the described embodiments are shown. Although these embodiments are described in sufficient detail to enable one skilled in the art to implement these embodiments, these embodiments are not limiting; such that other embodiments may be used, and changes may be made, without departing from the scope of the described embodiments. For example, the order of operations described in the flowcharts is not limiting, and thus the order of two or more operations illustrated in the flowcharts and described in accordance with the flowcharts may vary according to several embodiments. As another example, in several embodiments, one or more operations illustrated in the flowcharts and described in accordance with the flowcharts are optional, or may be deleted. Additionally, certain steps or functionality may be added to the disclosed embodiments, or the order of two or more steps permuted. All such changes are considered to be included in the disclosed embodiments and claims.

Additionally, the terms used in the above technical description are used to provide a thorough understanding of the described embodiments. However, excessive details are not required to implement the described embodiments. Thus, the foregoing description of the embodiments is presented for purposes of illustration and description. The embodiments presented in the foregoing description, and the examples disclosed in accordance with these embodiments, are provided to add context and to facilitate understanding of the described embodiments. The above description is not intended to be exhaustive or to limit the described embodiments to the precise form of the disclosure. Several modifications, options, and variations are possible in light of the above teachings. In some instances, well-known process steps have not been described in detail in order to avoid unnecessarily affecting the described embodiments.