

Title:
METHOD AND APPARATUS FOR SIGNAL PROCESSING WITH NEURAL NETWORKS
Document Type and Number:
WIPO Patent Application WO/2020/069895
Kind Code:
A1
Abstract:
Embodiments relate to an apparatus for processing a received radio signal, the apparatus (1) comprising means (5, 6, P; 41, 42) configured for: - processing (S1) received radio signal data with a first signal processing chain (41), wherein said first processing chain (41) comprises at least one first processing module (44, 45) configured for determining first output data (y1, z1) based on said received radio signal data, - processing (S1) said received radio signal data with a second signal processing chain (42), wherein said second processing chain (42) comprises at least one second processing module (47, 49) configured for determining an estimation (y2, z2) of said first output data (y1, z1) based on said received radio signal data, and at least one neural network (48, 50) configured for determining second output data (y3, z3) based on said estimation (y2, z2), - updating (S2) parameters of said at least one neural network (48, 50) based on a comparison between said first output data (y1, z1) and said second output data (y3, z3), - after said updating (S2), processing (S4) said received radio signal data with said second signal processing chain (42), without applying said at least one first processing module (44, 45).

Inventors:
GOMONY MANIL DEV (BE)
MURUGAPPA VELAYUTHAN PURUSHOTHAM (BE)
Application Number:
PCT/EP2019/075500
Publication Date:
April 09, 2020
Filing Date:
September 23, 2019
Assignee:
NOKIA SOLUTIONS & NETWORKS OY (FI)
International Classes:
H04L25/03; H04L27/26; G06N3/04; H04L25/02; H04L25/06
Other References:
HENGTAO HE ET AL: "Model-Driven Deep Learning for Physical Layer Communications", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 17 September 2018 (2018-09-17), XP080917964
GAO XUANXUAN ET AL: "ComNet: Combination of Deep Learning and Expert Knowledge in OFDM Receivers", IEEE COMMUNICATIONS LETTERS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 22, no. 12, 25 October 2018 (2018-10-25), pages 1 - 11, XP011699245, ISSN: 1089-7798, [retrieved on 20181207], DOI: 10.1109/LCOMM.2018.2877965
Attorney, Agent or Firm:
AARNIO, Ari et al. (FI)
Claims:
CLAIMS

1. Apparatus (1) for processing a received radio signal, the apparatus (1) comprising means (5, 6, P; 41, 42) configured for:

- processing (S1) received radio signal data with a first signal processing chain (41), wherein said first processing chain (41) comprises at least one first processing module (44, 45) configured for determining first output data (y1, z1) based on said received radio signal data,

- processing (S1) said received radio signal data with a second signal processing chain (42), wherein said second processing chain (42) comprises at least one second processing module (47, 49) configured for determining an estimation (y2, z2) of said first output data (y1, z1) based on said received radio signal data, and at least one neural network (48, 50) configured for determining second output data (y3, z3) based on said estimation (y2, z2),

- updating (S2) parameters of said at least one neural network (48, 50) based on a comparison between said first output data (y1, z1) and said second output data (y3, z3),

- after said updating (S2), processing (S4) said received radio signal data with said second signal processing chain (42), without applying said at least one first processing module (44, 45).

2. Apparatus according to claim 1, wherein said at least one first processing module comprises a first channel estimation module (44) and said at least one second processing module comprises a second channel estimation module (47).

3. Apparatus according to claim 2, wherein said means are further configured for updating parameters of said at least one neural network based on a demodulation error of the output of the second channel estimation module (47), during said processing (S4) of said received radio signal data with said second signal processing chain (42) without applying said at least one first processing module (44, 45, 46).

4. Apparatus according to any one of claims 1 to 3, wherein said at least one first processing module comprises a first demodulation module (45) and said at least one second processing module comprises a second demodulation module (49).

5. Apparatus according to claim 4, wherein said means are further configured for updating parameters of said at least one neural network based on a decoding error of the output of the second demodulation module (50), during said processing (S4) of said received radio signal data with said second signal processing chain (42) without applying said at least one first processing module (44, 45, 46).

6. Apparatus according to any one of claims 1 to 5, wherein processing said received radio signal data with the first signal processing chain (41) comprises determining a first codeword (CW1) and processing said received radio signal data with the second signal processing chain (42) comprises determining a second codeword (CW2), wherein said means are further configured for repeating said processing (S1) and said updating (S2) until a test (S3) determines that the first and second codewords are equal or that the bit error rates of decoding the first and second codewords are equal.

7. Apparatus according to any one of claims 1 to 6, wherein said means include at least one processor (5) and at least one memory (6), the at least one memory storing computer program code (P), the at least one memory and the computer program code being configured to, with the at least one processor, cause the apparatus to at least in part perform said processing (S1, S4) and updating (S2).

8. Method for processing signal data, executed by an apparatus, comprising:

- processing (S1) received radio signal data with a first signal processing chain (41), wherein said first processing chain (41) comprises at least one first processing module (44, 45) configured for determining first output data (y1, z1) based on said received radio signal data,

- processing (S1) said received radio signal data with a second signal processing chain (42), wherein said second processing chain (42) comprises at least one second processing module (47, 49) configured for determining an estimation (y2, z2) of said first output data (y1, z1) based on said received radio signal data, and at least one neural network (48, 50) configured for determining second output data (y3, z3) based on said estimation (y2, z2),

- updating (S2) parameters of said at least one neural network (48, 50) based on a comparison between said first output data (y1, z1) and said second output data (y3, z3),

- after said updating (S2), processing (S4) said received radio signal data with said second signal processing chain (42), without applying said at least one first processing module (44, 45).

9. Computer program (P) comprising instructions for performing the method of claim 8 when said instructions are executed by a computer.

Description:
METHOD AND APPARATUS FOR SIGNAL PROCESSING WITH NEURAL NETWORKS

FIELD OF THE INVENTION

The present invention relates to the field of signal processing. In particular, the present invention relates to a method and an apparatus for processing a received radio signal.

BACKGROUND

A wireless baseband processing receiver needs to perform Channel estimation, Demodulation and Turbo/LDPC/Polar decoding efficiently to retrieve the information correctly. Channel estimation is a complex and time-critical operation that involves equalization and noise compensation and relies on efficient parameter estimation algorithms. For instance, a channel estimation algorithm based on the well-known MMSE or zero-forcing criteria uses high-dimensional matrix inversion. These inverse matrices often need to be computed with high precision, requiring closed-loop tracking of varying channel conditions. Similarly, the Demodulation involves computationally intensive operations to compute the log-likelihood ratio (LLR) for the Turbo/LDPC/Polar decoder. Due to the complexity of the Channel estimation and Demodulation algorithms, hardware realization of such functions may result in high power consumption and long latency.

SUMMARY

It is thus an object of embodiments of the present invention to propose a method and an apparatus for signal processing, which do not show the inherent shortcomings of the prior art. Accordingly, embodiments relate to an apparatus for processing a received radio signal, the apparatus comprising means configured for:

- processing received radio signal data with a first signal processing chain, wherein said first processing chain comprises at least one first processing module configured for determining first output data based on said received radio signal data,

- processing said received radio signal data with a second signal processing chain, wherein said second processing chain comprises at least one second processing module configured for determining an estimation of said first output data based on said received radio signal data, and at least one neural network configured for determining second output data based on said estimation,

- updating parameters of said at least one neural network based on a comparison between said first output data and said second output data,

- after said updating, processing said received radio signal data with said second signal processing chain, without applying said at least one first processing module.

In some embodiments, the at least one first processing module comprises a first channel estimation module and said at least one second processing module comprises a second channel estimation module.

Said means may be further configured for updating parameters of said at least one neural network based on a demodulation error of the output of the second channel estimation module, during said processing of said received radio signal data with said second signal processing chain without applying said at least one first processing module.

In some embodiments, the at least one first processing module comprises a first demodulation module and the at least one second processing module comprises a second demodulation module. In some embodiments, said means are further configured for updating parameters of said at least one neural network based on a decoding error of the output of the second demodulation module, during said processing of said received radio signal data with said second signal processing chain without applying said at least one first processing module.

In some embodiments, processing said received radio signal data with the first signal processing chain comprises determining a first codeword and processing said received radio signal data with the second signal processing chain comprises determining a second codeword, wherein said means are further configured for repeating said processing and said updating until a test determines that the first and second codewords are equal or that the bit error rates of decoding the first and second codewords are equal.

In some embodiments, said means include at least one processor and at least one memory, the at least one memory storing computer program code, the at least one memory and the computer program code being configured to, with the at least one processor, cause the apparatus to at least in part perform the functions discussed above.

Embodiments also relate to a method for processing signal data, executed by an apparatus, comprising:

- processing received radio signal data with a first signal processing chain, wherein said first processing chain comprises at least one first processing module configured for determining first output data based on said received radio signal data,

- processing said received radio signal data with a second signal processing chain, wherein said second processing chain comprises at least one second processing module configured for determining an estimation of said first output data based on said received radio signal data, and at least one neural network configured for determining second output data based on said estimation,

- updating parameters of said at least one neural network based on a comparison between said first output data and said second output data,

- after said updating, processing said received radio signal data with said second signal processing chain, without applying said at least one first processing module.

Embodiments also relate to a computer program comprising instructions for performing the method mentioned before when said instructions are executed by a computer. The computer program may be stored on a computer readable medium. The computer readable medium may be a non-transitory computer readable medium.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects and features of the invention will become more apparent and the invention itself will be best understood by referring to the following description of embodiments taken in conjunction with the accompanying drawings wherein:

Figure 1 is a block diagram of a wireless communication device,

Figure 2 is a functional view of the baseband processing of the wireless communication device of Figure 1,

Figure 3 is a flowchart of a method for signal processing, and Figure 4 is a structural view of the wireless communication device.

DESCRIPTION OF EMBODIMENTS

Figure 1 is a block diagram of a wireless communication device 1. The wireless communication device 1 comprises at least one antenna 2, a radio-frequency module 3 and a baseband processor 4. The wireless communication device 1 may be a wireless receiver/transmitter or a wireless receiver.

The radio-frequency module 3 converts, by frequency shifting, filtering and analog-to-digital conversion, a radio signal r(t) captured by the antenna 2 into received radio signal data RS. The received radio signal data RS comprises, for example, successive received samples, which are digital vectors representing a determined part of the radio signal, for example an appropriate part of an OFDM signal.

The baseband processor 4 converts received radio signal data RS into successive detected codewords CW. Detailed functioning of the baseband processor 4 will be described hereafter with reference to Figures 2 and 3.

The wireless communication device 1 may comprise other elements not shown in Figure 1, e.g. a wireless transmitter, a user interface, data storage and processing elements... The wireless communication device 1 may be a user device, e.g. a smartphone, or a network element, e.g. a wireless access point or a cellular network base station... The radio-frequency module 3 and the baseband processor 4 may be located in a common housing, or located in different housings and connected by an appropriate link. The wireless communication device 1 may use a wireless communication standard for communication, e.g. IEEE Wi-Fi, 3GPP 3G, 4G, 5G...

Figure 2 is a functional block diagram of the baseband processor 4. The baseband processor 4 comprises two processing chains: a signal processing chain 41 and a signal processing chain 42.

The signal processing chain 41 converts received radio signal data RS into detected codewords CW1. The signal processing chain 41 comprises a Fast Fourier Transform module 43, a Channel Estimation module 44, a Demodulation module 45 and a decoder module 46.

The Fast Fourier Transform module 43 converts the received radio signal data RS into the frequency domain and outputs successive vectors x1. The channel estimation module 44 performs channel estimation based on the vectors x1, and outputs successive vectors y1 which are corrected versions of the vectors x1 using its estimated channel state information (CSI), e.g. gain, phase, timing information. The Demodulation module 45 performs soft demodulation, e.g. determination of a soft value called Log-Likelihood Ratio (LLR) for each received bit. The Demodulation module 45 outputs successive vectors z1 of LLR values. The decoder module 46 determines successive codewords CW1 based on the vectors z1, by applying a decoding technique such as LDPC/Turbo/Polar decoding.

The signal processing chain 42 converts received radio signal data RS into detected codewords CW2. The signal processing chain 42 comprises the Fast Fourier Transform module 43 (which is common to both processing chains), a Channel Estimation module 47, a first neural network 48, a Demodulation module 49, a second neural network 50 and a decoder module 51.
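To make the roles of the intermediate vectors concrete, the data flow through the two chains during the training phase can be sketched as below. This is an illustrative sketch only, not text from the patent: every callable is a placeholder whose name is chosen here for readability, and concrete examples of the channel estimation, demodulation and neural-network blocks are sketched further below.

```python
# Illustrative sketch (not from the patent) of the data flow through both
# chains during the training phase; every callable is a placeholder.
def run_chains(rs, fft, chan_est_44, demod_45, decode_46,
               chan_est_47, nn_48, demod_49, nn_50, decode_51):
    x1 = fft(rs)              # Fast Fourier Transform module 43 (shared)

    # Chain 41: accurate ("classical") receiver
    y1 = chan_est_44(x1)      # accurate channel estimation
    z1 = demod_45(y1)         # accurate soft demodulation (LLR vectors)
    cw1 = decode_46(z1)       # channel decoding

    # Chain 42: low-complexity modules corrected by neural networks
    y2 = chan_est_47(x1)      # coarse channel estimation
    y3 = nn_48(x1, y2)        # neural network 48 corrects y2 towards y1
    z2 = demod_49(y3)         # approximate soft demodulation
    z3 = nn_50(y3, z2)        # neural network 50 corrects z2 towards z1
    cw2 = decode_51(z3)       # channel decoding

    return (y1, z1, cw1), (y2, y3, z2, z3, cw2)
```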

The channel estimation module 47 performs channel estimation based on the vectors x1, and outputs successive vectors y2 which are corrected versions of the vectors x1 based on its estimated channel state information (CSI), e.g. gain, phase, timing information. When comparing the channel estimation modules 44 and 47, the channel estimation module 44 uses a more accurate estimation algorithm than the channel estimation module 47. In other words, a vector y2 may be seen as an estimation or approximation of the corresponding vector y1. Consequently, the channel estimation module 44 consumes more processing resources than the channel estimation module 47. The neural network 48, once trained, determines a vector y3 based on a vector x1 and the corresponding vector y2. More specifically, the trained neural network 48 aims at correcting the vector y2 to match the corresponding vector y1. The neural network 48 is, for example, a feedforward deep neural network. Training of the neural network 48 is described in more detail hereafter. Accordingly, together, the channel estimation module 47 and the trained neural network 48 are capable of outputting a vector y3, based on a vector x1, which matches the corresponding vector y1, while consuming fewer processing resources; they may also be implemented efficiently on a more general-purpose processor than the channel estimation module 44.
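As an illustration of such a corrective feedforward network, the following sketch shows one possible shape for the neural network 48: it takes the frequency-domain samples x1 and the coarse channel-estimation output y2 (complex vectors fed as concatenated real and imaginary parts) and predicts a corrected vector y3. The use of PyTorch, the layer sizes and the input layout are assumptions made for illustration, not details taken from the patent.

```python
# Minimal sketch of a corrective feedforward network for neural network 48.
# Layer sizes and the real/imaginary input layout are illustrative assumptions.
import torch
import torch.nn as nn

class CorrectionNet(nn.Module):
    def __init__(self, n_subcarriers: int, hidden: int = 128):
        super().__init__()
        # Complex vectors are fed as concatenated real/imaginary parts:
        # input = [Re(x1), Im(x1), Re(y2), Im(y2)], output = [Re(y3), Im(y3)].
        self.net = nn.Sequential(
            nn.Linear(4 * n_subcarriers, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * n_subcarriers),
        )

    def forward(self, x1: torch.Tensor, y2: torch.Tensor) -> torch.Tensor:
        # x1, y2: (batch, 2 * n_subcarriers) real-valued tensors
        return self.net(torch.cat([x1, y2], dim=-1))
```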

The Demodulation module 49 performs soft demodulation, e.g. determination of a soft value called Log-Likelihood Ratio (LLR) for each received bit. The Demodulation module 49 outputs successive vectors z2 of LLR values based on the corresponding vectors y3. When comparing the Demodulation modules 45 and 49, the Demodulation module 45 uses a more accurate estimation algorithm than the Demodulation module 49. In other words, a vector z2 may be seen as an estimation or approximation of the corresponding vector z1. Consequently, the Demodulation module 45 consumes more processing resources than the Demodulation module 49. The neural network 50, once trained, determines a vector z3 based on a vector y3 and the corresponding vector z2. More specifically, the neural network 50 aims at correcting a vector z2 to match the corresponding vector z1. The neural network 50 is, for example, a feedforward deep neural network. Training of the neural network 50 is described in more detail hereafter. Accordingly, together, the Demodulation module 49 and the trained neural network 50 are capable of outputting a vector z3, based on a vector y3, which matches the corresponding vector z1, while consuming fewer processing resources than the Demodulation module 45.

The decoder module 51 determines successive codewords CW2 based on the vectors z3, by applying LDPC, Turbo, Polar or any other channel decoding technique. In some embodiments, the decoder module 51 may apply a neural network.

When comparing the processing chains 41 and 42, the processing chain 41 may be seen as a « classical » receiver, which relies on high-accuracy algorithms for channel estimation and demodulation, but requires a higher level of processing resources. In contrast, the processing chain 42 may be seen as a « neural network-based » receiver, which relies on lower-accuracy algorithms for channel estimation and demodulation, and thus requires a lower level of processing resources, and involves trained neural networks to compensate for the use of the lower-accuracy algorithms. In some embodiments, this concept of using a lower-accuracy algorithm combined with a trained neural network is applied to the channel estimation only, to the demodulation only, or to the channel decoding only.

Figure 3 is a flowchart of a method for signal processing, illustrating the functioning of the baseband processor 4. The functioning involves a training phase (steps S1 to S3) and an inference phase (step S4). The method of Figure 3 is executed by the wireless communication device 1, in particular by the baseband processor 4.

In the training phase, both processing chains 41 and 42 process received radio signal data RS in parallel (step S1). Accordingly, the processing chain 41 determines codewords CW1 based on received radio signal data RS, and this involves determining the intermediate vectors x1, y1 and z1. In parallel, the processing chain 42 determines codewords CW2 based on received radio signal data RS, and this involves determining the intermediate vectors x1, y2, y3, z2 and z3. This is repeated iteratively for successive received samples of the received radio signal data RS and codewords CW1, CW2. At each iteration of the training phase, the baseband processor 4 updates the parameters of the neural network 48 based on a comparison between the vectors y3 and y1, and the parameters of the neural network 50 based on a comparison between the vectors z3 and z1 (step S2). For example, for given values of the vectors x1, y1 and y2, the weights of the neural network 48 are updated by applying the stochastic gradient descent (SGD) algorithm, taking the mean square error between the vectors y3 and y1 as the loss function. Similarly, for given values of the vectors y3, z1 and z2, the weights of the neural network 50 are updated by applying the SGD algorithm, taking the mean square error between the vectors z3 and z1 as the loss function. In embodiments wherein the decoder module 51 uses a neural network, training is also performed for this neural network.
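A single iteration of step S2 for the neural network 48 could then look as follows. This sketch assumes the PyTorch CorrectionNet shown above; the learning rate and network size are illustrative assumptions, and an analogous step would be applied to the neural network 50 with (y3, z1, z2).

```python
# Minimal sketch of one training iteration (step S2) for neural network 48,
# assuming the CorrectionNet class above. Learning rate and sizes are assumed.
import torch

nn_48 = CorrectionNet(n_subcarriers=64)            # size is an assumption
opt_48 = torch.optim.SGD(nn_48.parameters(), lr=1e-3)
mse = torch.nn.MSELoss()

def training_step(x1, y1, y2):
    """x1, y2: inputs to neural network 48; y1: desired output from chain 41."""
    y3 = nn_48(x1, y2)
    loss = mse(y3, y1)                             # compare y3 against y1
    opt_48.zero_grad()
    loss.backward()
    opt_48.step()                                  # SGD update of the weights
    return y3.detach(), loss.item()
```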

Steps S1 and S2 are repeated until a completion test is satisfied (step S3). The completion test may be, for example: both processing chains 41 and 42 give the same output, i.e. CW1 = CW2, or the bit error rates of decoding CW1 and CW2 are equal.
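A possible form of this completion test is sketched below. It assumes the codewords are equal-length bit arrays; the hypothetical reference_bits argument (known or re-encoded bits used only to compute the two bit error rates) is an assumption, since the patent does not specify here how the bit error rates are measured.

```python
# Sketch of the completion test (step S3): stop training when both chains
# produce the same codeword, or when their decoding bit error rates match.
# reference_bits is a hypothetical input used only to compute the error rates.
import numpy as np

def training_complete(cw1, cw2, reference_bits=None):
    if np.array_equal(cw1, cw2):
        return True
    if reference_bits is not None:
        ber1 = np.mean(cw1 != reference_bits)
        ber2 = np.mean(cw2 != reference_bits)
        return ber1 == ber2
    return False
```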

In the inference phase, only the processing chain 42 processes received radio signal data RS (step S4). Accordingly, the processing chain 42 determines codewords CW2 based on received samples, and this involves determining the intermediate vectors x1, y2, y3, z2 and z3. In the inference phase, the processing chain 41 is not active. Accordingly, the Channel Estimation module 44, the Demodulation module 45 and the decoder module 46 do not process signal data, and no codeword CW1 or intermediate vectors y1 and z1 are determined.

In the wireless communication device 1, in the inference phase, codewords CW2 may be determined based on received samples of the received radio signal data RS with the use of simple algorithms for channel estimation and/or demodulation combined with trained neural networks. This requires fewer processing resources (e.g. less energy, time, memory, processor cycles...) than the use of more complex algorithms for channel estimation and/or demodulation. Moreover, in the training phase, the processing chain 41 makes it possible to obtain data for training the neural networks without the need to rely on a separate input of training data or to obtain trained neural networks from an external source.

We now describe examples of channel estimation techniques that may be used by the channel estimation modules 44 and 47, for example in the case of an OFDM receiver.

The channel estimation problem involves estimating the channel distortion based on known pilot symbols introduced at known carriers in the transmitted OFDM symbol. Two well-known techniques are zero forcing and Minimum Mean Square Error (MMSE).

Assuming P pilots are inserted at known locations in the transmitted sequence, with p(k) the transmitted pilot sequence and h(n) the channel, the received pilot symbol x1(k) can be expressed as:

x1(k) = h(k) · p(k) + w(k), k = 0 ... P − 1

where x1(k) is the received pilot sequence, w(k) is the noise, and k = 0 ... P − 1 are the locations of the pilots in the received sequence.

The job of the channel estimation is to find the correct h(n) based on the received pilots.

The zero-forcing solution is given by:

ĥ_ZF = P⁻¹ · x1

where P is the (diagonal) matrix of transmitted pilot symbols. This solution is simple as it only involves the inverse of the pilot symbols, but it is noise prone because in the low-SNR region it tends to amplify the received noise. The MMSE solution is given by:

ĥ_MMSE = (Pᵀ·P + σ²·I)⁻¹ · Pᵀ · x1

where σ² is the noise variance. This solution is better but more complex than the simple zero-forcing equalization, as it involves matrix multiplication and inverse operations.

Accordingly, in some embodiments, the channel estimation module 44 applies the MMSE solution and the channel estimation module 47 applies the zero-forcing solution. The channel state matrix h(n) thus computed is used to determine y2, as y2 = h(n) * x1, where * is the multiplication operation.
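For illustration, the two pilot-based estimators discussed above may be sketched as follows. The diagonal pilot-matrix formulation and the explicit noise-variance regularization term are assumptions made to obtain a runnable example; the patent does not fix a particular matrix structure.

```python
# Sketch (under stated assumptions) of the two pilot-based estimators above:
# zero forcing inverts the known pilots directly, while the MMSE-style estimate
# regularises the least-squares solution with the noise variance.
import numpy as np

def channel_zero_forcing(x1_pilots, pilots):
    # h_ZF = P^-1 · x1 with P diagonal: element-wise division by the pilots
    return x1_pilots / pilots

def channel_mmse(x1_pilots, pilots, noise_var):
    # h_MMSE = (P^T P + sigma^2 I)^-1 P^T x1 (regularised least squares)
    P = np.diag(pilots)
    A = P.conj().T @ P + noise_var * np.eye(len(pilots))
    return np.linalg.solve(A, P.conj().T @ x1_pilots)
```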

For training the neural network 48, as a use case, let us assume a zero-forcing equalizer for the channel estimation module 47 with input x1 and output y2 (as shown in Figure 2). The channel estimation module 44 (MMSE or a better algorithm) provides the desired output y1. During the initial training phase, the neural network 48 is trained to output y3 such that the mean square error (MSE) between the vectors y1 and y3 is zero: MSE(y1, y3) = 0, given x1, y1, y2.

We now describe examples of demodulation techniques applied by the demodulation modules 45 and 49.

To enable optimum performance at the decoding stage, soft demodulation is normally proposed. Soft demodulation involves the computation of a soft value, called Log-Likelihood Ratio (LLR), for each received bit. The LLR (L_m) for the m-th bit b_m of a received symbol ŝ (= y1) can be given as

L_m = log( P(b_m = 1 | ŝ) / P(b_m = 0 | ŝ) )

Assuming a distortion by Gaussian noise, with the noise on the real and imaginary parts being independent and identically distributed Gaussian variables, the above formula becomes:

L_m = log( Σ_{s ∈ S_m(1)} exp(−SNR_i · |ŝ − s|²) / Σ_{s ∈ S_m(0)} exp(−SNR_i · |ŝ − s|²) ) (1)

where SNR_i is the Signal-to-Noise Ratio on symbol i, and S_m(b) denotes the set of constellation points whose m-th bit equals b.

The above formula is computationally complex and expensive to implement in hardware as it involves logarithmic, division, exponent and square operations. Hence, the max-log-MAP approximated version is proposed:

L_m ≈ SNR_i · ( min_{s ∈ S_m(0)} |ŝ − s|² − min_{s ∈ S_m(1)} |ŝ − s|² ) (2)

The above formula needs to calculate the distance between ŝ and all constellation points, and the computation becomes complex for larger constellations, i.e. 64QAM and 256QAM. A less computationally intensive algorithm is proposed, which for 16-QAM expresses each LLR directly as a function of Re(ŝ), Im(ŝ), the SNR and the constellation spacing d (3), where d is the distance between adjacent constellation points.

A much-simplified formula is proposed in the working 5G standards. The simplified formula for 16QAM is given below:

L_0 = SNR · 2d · Re(ŝ)

L_1 = SNR · 2d · Im(ŝ)

L_2 = SNR · 2d · (d − |Re(ŝ)|)

L_3 = SNR · 2d · (d − |Im(ŝ)|) (4)

Using the different simplified formulae above for computing the LLR reduces the hardware complexity, but comes at the cost of performance degradation. The proposed solution in the wireless communication device 1 compensates for this performance degradation by introducing the neural network 50 after the demodulation module 49.

The inputs to the demodulator are the received frequency samples ŝ (= y1 or y3) after the channel estimation stage and the SNR of the channel (which is an external input). In some embodiments, the demodulation module 45 uses the demodulation algorithm (1) listed above and the demodulation module 49 uses one of the approximate demodulation algorithms (2)-(4).
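The following sketch illustrates, for a generic labelled constellation, the exact LLR computation (1), its max-log approximation (2) and the simplified 16-QAM formula (4). The way the constellation and its bit labels are represented (and the bit ordering) is an assumption made for the example; only the formulas themselves follow the text above.

```python
# Sketch of LLR computation per formulas (1), (2) and (4) above.
# constellation: complex points; labels: integer bit labels of those points.
# The representation and bit ordering are assumptions for illustration.
import numpy as np

def llr_exact(s_hat, snr, constellation, labels, n_bits):
    """Formula (1): log-ratio of summed Gaussian likelihoods per bit."""
    d2 = np.abs(s_hat - constellation) ** 2        # distances to all points
    llrs = []
    for m in range(n_bits):
        bit_m = (labels >> m) & 1
        num = np.sum(np.exp(-snr * d2[bit_m == 1]))
        den = np.sum(np.exp(-snr * d2[bit_m == 0]))
        llrs.append(np.log(num / den))
    return np.array(llrs)

def llr_maxlog(s_hat, snr, constellation, labels, n_bits):
    """Formula (2): max-log-MAP approximation."""
    d2 = np.abs(s_hat - constellation) ** 2
    llrs = []
    for m in range(n_bits):
        bit_m = (labels >> m) & 1
        llrs.append(snr * (d2[bit_m == 0].min() - d2[bit_m == 1].min()))
    return np.array(llrs)

def llr_simplified_16qam(s_hat, snr, d):
    """Formula (4): simplified 16-QAM LLRs (d = constellation spacing)."""
    return np.array([
        snr * 2 * d * np.real(s_hat),
        snr * 2 * d * np.imag(s_hat),
        snr * 2 * d * (d - np.abs(np.real(s_hat))),
        snr * 2 * d * (d - np.abs(np.imag(s_hat))),
    ])
```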

During the training phase, the neural network 50 is trained until there is zero difference in the bit error rate between the decoded codewords CW1 and CW2.

In some embodiments, the wireless communication device 1 also comprises a weights update module 52 and/or a weights update module 53. More specifically, the signal processing chain 42 comprises the weights update module 52 and/or the weights update module 53.

During the inference phase, the neural network 48 is fine-tuned on the fly, iteratively, by the weights update module 52 to output y3 such that the demodulation error of y3 is zero, given x1 and y2. Similarly, during the inference phase, the neural network 50 is fine-tuned on the fly, iteratively, by the weights update module 53 to output z3 based on the bit error rate of CW2.
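The patent does not spell out here how the demodulation error of y3 is computed; one common decision-directed reading is that y3 is pulled towards its nearest constellation point. The following sketch of the weights update module 52 relies on that assumption and on the PyTorch CorrectionNet sketched earlier.

```python
# Sketch of an on-the-fly update (weights update module 52), assuming a
# decision-directed interpretation of the "demodulation error": the corrected
# sample y3 is pulled towards its nearest constellation point.
import torch

def online_update(nn_48, optimizer, x1, y2, constellation):
    """constellation: 1-D complex-valued torch tensor of candidate symbols."""
    y3 = nn_48(x1, y2)                      # (batch, 2N): [Re(y3), Im(y3)]
    n = y3.shape[-1] // 2
    y3_c = torch.complex(y3[..., :n], y3[..., n:])
    # Hard decision: nearest constellation point for every corrected sample
    dist = torch.abs(y3_c.unsqueeze(-1) - constellation)
    hard = constellation[dist.argmin(dim=-1)]
    # "Demodulation error": distance of y3 to its hard decision
    loss = torch.mean(torch.abs(y3_c - hard.detach()) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```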

Figure 4 is a structural view of the wireless communication device 1, according to some embodiments. More specifically, in some embodiments, the baseband processor 4 comprises at least one processor 5 and at least one memory 6. The at least one memory 6 stores computer program code P. The at least one memory 6 and the computer program code P are configured to, with the at least one processor 5, cause the wireless communication device 1 to at least in part perform the method of Figure 3.

It should be noted that although examples of methods have been described with a specific order of steps, this does not exclude other implementations. In particular, the described steps may be executed in another order, partially or totally in parallel...

It is to be remarked that the functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared, for example in a cloud computing architecture. Moreover, explicit use of the term "processor" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.

It should be further appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

While the principles of the invention have been described above in connection with specific embodiments, it is to be clearly understood that this description is made only by way of example and not as a limitation on the scope of the invention, as defined in the appended claims.