


Title:
CONTINUOUS-FLOW CONFLICT-FREE MIXED-RADIX FAST FOURIER TRANSFORM IN MULTI-BANK MEMORY
Document Type and Number:
WIPO Patent Application WO/2014/108718
Kind Code:
A1
Abstract:
A method and a processor to perform continuous-flow conflict-free mixed-radix FFT for data in a memory are provided. Multiple butterfly calculations of small radix are launched generally in parallel in mixed-radix FFT using conflict-free address generation with a memory. The multiple butterfly calculations of data entries may be staged in a processor, such that the memory read and write operations may be executed continuously without access conflicts.

Inventors:
SALISHEV SERGEY I (RU)
Application Number:
PCT/IB2013/000446
Publication Date:
July 17, 2014
Filing Date:
January 09, 2013
Assignee:
INTEL CORP (US)
International Classes:
G06F17/14
Foreign References:
EP1269346B12007-10-31
US20070288542A12007-12-13
Claims:
CLAIMS

1. A method of processing data, comprising:

generating, by an address generator, a plurality of addresses and a traversal order corresponding to the data according to a plurality of mixed-radix settings;

reading, by an interface, the data from a memory to the processor according to the plurality of the addresses in the traversal order; and

processing, by a processor, the data of more than one butterfly operations of a Fast Fourier Transform (FFT), prior to the interface writing the processed data to the memory, with a throughput of one butterfly per clock.

2. The method of claim 1, wherein the FFT is radix-r/R, R=r*q, and q is greater than one.

3. The method of claim 2, wherein the memory is a dual-port memory, and the interface reads the data from and writes the processed data to the memory using two different memory ports in a single clock period.

4. The method of claim 2, wherein the memory is a single-port memory, clocked at the same frequency as the processor.

5. The method of claim 2, further comprising performing self-sorting on half of the butterfly operations.

6. The method of claim 2, wherein the processor processes the data of more than one radix-R butterfly operations, prior to the interface writing the processed data to the memory.

7. The method of claim 6, wherein the processor processes the data of R number of radix-R butterfly operations, prior to the interface writing the processed data of R number of radix-R butterfly operations to the memory.

8. The method of claim 6, wherein the processor launches processing of the data of more than one radix-r butterfly operations concurrently instead of one radix-R butterfly operation.

9. The method of claim 6, wherein the processor processes the data of more than one radix-r butterfly operations in parallel.

10. The method of claim 6, wherein the processor processes the data of more than one radix-R butterfly operations in pipeline.

11. A processing device, comprising:

an address generator to generate a plurality of addresses and a traversal order

corresponding to data according to a plurality of mixed-radix settings;

a plurality of interfaces to read the data from a memory to the processor according to the plurality of the addresses in the traversal order; and

a processor to process the data of more than one butterfly operations of a Fast Fourier Transform (FFT), prior to the interfaces writing the processed data to the memory, with a throughput of one butterfly per clock.

12. The processing device of claim 11, wherein the FFT is radix-r/R, R=r*q, and q is greater than one.

13. The processing device of claim 12, wherein the memory is a dual-port memory, and the interface is to read the data from and write the processed data to the memory using two different memory ports in a single clock period.

14. The processing device of claim 12, wherein the memory is a single-port memory, clocked at the same frequency as the processor.

15. The processing device of claim 12, wherein the processing device performs self-sorting on half of the butterfly operations.

16. The processing device of claim 12, wherein the processor is to process the data of more than one radix-R butterfly operations, prior to the interface writing the processed data to the memory.

17. The processing device of claim 16, wherein the processor is to process the data of R number of radix-R butterfly operations, prior to the interface writing the processed data of R number of radix-R butterfly operations to the memory.

18. The processing device of claim 16, wherein the processor is to launch processing of the data of more than one radix-r butterfly operations concurrently instead of one radix-R butterfly operation.

19. The processing device of claim 16, wherein the processor is to process the data of more than one radix-r butterfly operations in parallel.

20. The processing device of claim 16, wherein the processor is to process the data of more than one radix-R butterfly operations in pipeline.

21. A system, comprising:

a memory;

an address generator to generate a plurality of addresses and a traversal order corresponding to data according to a plurality of mixed-radix settings;

a plurality of interfaces to read the data from the memory to the processor according to the plurality of the addresses in the traversal order; and

a processor to process the data of more than one butterfly operations of a Fast Fourier Transform (FFT), prior to the interfaces writing the processed data to the memory, with a throughput of one butterfly per clock.

Description:
CONTINUOUS-FLOW CONFLICT-FREE MIXED-RADIX FAST FOURIER

TRANSFORM IN MULTI-BANK MEMORY

FIELD OF THE INVENTION

[0001] The present disclosure relates to continuous-flow conflict-free mixed-radix fast Fourier transform (FFT) in multi-bank memory, and in particular to methods of performing FFT by launching multiple butterfly stage operations simultaneously using multiple memory banks to maximize use of memory space during mixed-radix FFT, in order to reduce circuit space, clock, and power requirements.

DESCRIPTION OF RELATED ART

[0002] Digital signal processing tasks may be performed by a Digital Signal Processor (DSP) in various types of applications, such as communications, video and audio processing, financial analysis, biological data analysis, and environmental sciences. A DSP may be a specialized microprocessor. FFT operations may be used to process signals in time or frequency domains in such applications. FFT operations may include Decimation in Time (DIT) and Decimation in Frequency (DIF) decomposition operations.

[0003] FFT operations may be performed on data entries stored in a memory. The DSP may perform multiple stages of multiply-accumulate operations and data transposition operations on the data entries. These stages are sometimes called "butterflies". Each butterfly may have a base- size (radix). For example, a FFT using butterflies of base-2 may be a radix-2 FFT. A FFT having butterflies of two different base sizes may be called "mixed radix" FFT.

[0004] FFT operations may be implemented on a software level in the DSP, or using specialized hardware architecture in the DSP. Performance of the DSP in the various applications depends on the performance of the FFT operations, which may depend on various factors. For example, data processed through the FFT operations may typically be stored in memory during processing. Thus, the memory space required and the timing of memory read and write operations may impact the overall performance and cost of the DSP.

[0005] Thus, there is a continual need to perform FFT operations with minimal hardware space, power, and timing requirements, and the fastest data processing speed.

DESCRIPTION OF THE FIGURES

[0006] Embodiments are illustrated by way of example and not limitation in the Figures of the accompanying drawings:

[0007] FIG. 1 illustrates an exemplary method of processing data according to an embodiment of the present disclosure.

[0008] FIG. 2 illustrates an exemplary processing device according to an embodiment of the present disclosure.

[0009] FIG. 3 illustrates an exemplary processing device according to an embodiment of the present disclosure.

[0010] FIG. 4 illustrates an exemplary processing device according to an embodiment of the present disclosure.

[0011] FIG. 5 illustrates an exemplary processing device according to an embodiment of the present disclosure.

[0012] FIG. 6 illustrates an exemplary processing device according to an embodiment of the present disclosure.

DETAILED DESCRIPTION

[0013] According to various embodiments of the present disclosure, a method and a processor to perform continuous-flow conflict-free mixed-radix FFT for data in a memory are provided. Multiple butterfly calculations of small radix are launched generally in parallel in mixed-radix FFT using conflict-free address generation with a memory. The multiple butterfly calculations of data entries may be staged in a processor, such that the memory read and write operations may be executed continuously without access conflicts.

[0014] In this configuration, the mixed-radix FFT operation may be carried out with maximum memory data through-put, minimum wait time, and less costs in memory circuit space and power.

[0015] A dual-port memory architecture may be used. Alternatively, a single-port memory architecture may be provided with an in-place strategy, further reducing port routing circuit space requirement.

[0016] Self-sorting architecture using overlapped operations for data I/O with natural order FFT may increase FFT performance.

[0017] A common approach to FFT processor architecture may be an "in-place" memory-based FFT. Use of this approach guarantees that for each butterfly or group of butterflies both inputs and results are stored in the same memory locations. For example, a FFT of data points sampled at N points may use a memory with N complex words capacity.

[0018] One butterfly calculation may be initiated every clock to maximize memory throughput for a given butterfly size. Each wing of the butterfly may be read and written at different memory banks and addresses, using conflict-free bank assignment.

[0019] For a FFT operation of data sampled at N points, where $N = r \cdot R^{n-1}$, $2r < R$, R is divisible by r, and R, r are radixes of butterflies in the FFT, the FFT calculation may use a radix-R butterfly operation to calculate multiple radix-r butterflies simultaneously.
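
As a minimal sketch of this decomposition, the per-stage radix list for a transform size of the form $N = r \cdot R^{n-1}$ may be derived as follows; the helper name `stage_radixes` is hypothetical and not part of the disclosure:

```python
def stage_radixes(N, r, R):
    """Return the per-stage radix list [r, R, ..., R] for N = r * R**(n-1).

    Hypothetical helper for illustration; assumes 2r < R and R divisible
    by r, as stated in the text, and raises if N does not factor this way.
    """
    if R % r != 0:
        raise ValueError("R must be divisible by r")
    if N % r != 0:
        raise ValueError("N must be divisible by r")
    radixes = [r]
    rest = N // r
    while rest > 1:
        if rest % R != 0:
            raise ValueError("N must equal r * R**(n-1)")
        rest //= R
        radixes.append(R)
    return radixes

print(stage_radixes(512, 2, 16))   # N = 2 * 16**2  ->  [2, 16, 16]
```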

[0020] The FFT operation may result in input data and output data having different digit orders. A digit reverse operation may be performed near the beginning or the end of the FFT operation. Self-sorting may eliminate the need for a separate digit reverse operation in the FFT, and may increase the speed of the FFT.

[0021] FIG. 1 illustrates an exemplary method 100 of processing data according to an embodiment of the present disclosure.

[0022] The method 100 begins at block 110, by generating memory addresses and a traversal order for the data according to mixed-radix settings. The method proceeds to block 120, reading the data from the memory according to the generated memory addresses in the traversal order. The method proceeds to block 130, processing the data of more than one butterfly stage operations of the FFT. The method proceeds to block 140, if self-sorting is needed, performing self-sorting on the butterfly stages that need sorting, and applying any delays needed to avoid memory conflicts. The method proceeds to block 150, writing the processed data of more than one butterfly stage operations into the memory. The method proceeds to block 160, determining if all butterfly stages have completed processing. If yes, the method ends at block 170. If no, the method returns to block 120 to read additional data as needed for additional butterfly stage operations.

[0023] FIG. 2 illustrates an exemplary processing device 200 according to an embodiment of the present disclosure.

[0024] According to an embodiment of the present disclosure, a processing device 200 is described.

[0025] The processing device 200 may be connected to a memory 220 storing N entries of complex data points for processing. The processing device 200 may include an address generator unit (AGU) 210 that generates the memory address assignments and a traverse order for the data according to the mixed-radix settings. The AGU 210 is connected to an interface 280, which reads the data from the memory 220 according to the memory address assignments in the traversal order generated by the AGU 210 and writes the processed data back into the memory 220. The AGU 210 is connected to a processor (PU) 240, which processes the data of more than one butterfly stage operations of the FFT, prior to the interface 280 writing the processed data back to the memory 220.

[0026] For a FFT operation of data sampled at N points, where $N = r_0 \cdot r_1 \cdot r_2 \cdot \ldots \cdot r_{n-1}$, and where the FFT operation is decomposed into radix $r_0, \ldots, r_{n-1}$ stages with $r_i \leq r_{i+1}$, mixed-radix index numbers may be represented as follows:

[0027] $[d]_{i,i+j} = d_i, d_{i+1}, \ldots, d_{i+j}$, and $[d]_{i+j,i} = d_{i+j}, d_{i+j-1}, \ldots, d_i$.

[0028] If $d_0, \ldots, d_s$ are respectively radix-$r_0, \ldots, r_s$ digits, then $[d_s, \ldots, d_0]$ may be a mixed-radix index number derived by concatenating the digits. If any $d_i$ is a radix-1 digit, then $[d_s, \ldots, d_{i+1}, d_i, d_{i-1}, \ldots, d_0] = [d_s, \ldots, d_{i+1}, d_{i-1}, \ldots, d_0]$.

[0029] $[d_s, \ldots, d_0]$ may also be represented as $[d_s, \ldots, d_0] = d_0 + d_1 \cdot r_0 + d_2 \cdot r_0 \cdot r_1 + \ldots + d_s \cdot r_0 \cdot r_1 \cdot r_2 \cdot \ldots \cdot r_{s-1}$.
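
For illustration only, the mixed-radix index number of paragraph [0029] can be computed and inverted with a short sketch; the helper names are hypothetical, and digits and radixes are listed least-significant first:

```python
# Mixed-radix index number of paragraph [0029]:
# index = d0 + d1*r0 + d2*r0*r1 + ...
def mixed_radix_to_index(digits, radixes):
    index, weight = 0, 1
    for d, r in zip(digits, radixes):
        assert 0 <= d < r, "digit out of range for its radix"
        index += d * weight
        weight *= r
    return index

def index_to_mixed_radix(index, radixes):
    digits = []
    for r in radixes:
        digits.append(index % r)
        index //= r
    return digits

radixes = [2, 4, 4]            # r0 = 2, r1 = 4, r2 = 4  (N = 32)
assert mixed_radix_to_index([1, 3, 2], radixes) == 1 + 3*2 + 2*8   # = 23
assert index_to_mixed_radix(23, radixes) == [1, 3, 2]
```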

[0030] A FFT operation may be implemented as two nested loops, with an outer loop iterating over stages c, and an inner loop iterating over butterflies (or over butterfly groups, for stages in which multiple butterflies execute simultaneously).
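
To make the two-loop structure concrete, the following is a plain, runnable radix-2 example of the outer stage loop and inner butterfly loop. It is not the mixed-radix, multi-bank, conflict-free scheme of the disclosure, and the explicit bit-reverse permutation it performs is exactly what the self-sorting variants described later avoid:

```python
import cmath

# A plain radix-2 illustration of the two nested loops: the outer loop
# iterates over stages, the inner loop over butterflies within a stage.
def fft_radix2(x):
    n = len(x)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    # digit (bit) reverse permutation of the input
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    length = 2
    while length <= n:                        # outer loop: stages
        w_len = cmath.exp(-2j * cmath.pi / length)
        for start in range(0, n, length):     # inner loop: butterflies
            w = 1
            for k in range(length // 2):
                a = x[start + k]
                b = x[start + k + length // 2] * w
                x[start + k] = a + b
                x[start + k + length // 2] = a - b
                w *= w_len
        length *= 2
    return x

print(fft_radix2([1, 0, 0, 0]))   # impulse -> flat spectrum [1, 1, 1, 1]
```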

[0031] $FFT(k_{n-1}, \ldots, k_0)$ may represent the result of a FFT operation on input indexed $[k_{n-1}, \ldots, k_0]$, where $k_i < r_i$ and $k_0$ is the least significant digit.

[0032] A radix-r butterfly operation may be represented as

$B_k(x_0, \ldots, x_{r-1}) = \sum_{j=0}^{r-1} x_j \cdot w^{j \cdot k}$,

where $w = e^{2\pi i / r}$ may be a complex root of one.
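
A radix-r butterfly is thus an r-point DFT; a minimal sketch follows. The sign of the exponent is a convention assumed here, since the text only states that w is a complex root of one:

```python
import cmath

# A radix-r butterfly computed directly as an r-point DFT, following the
# formula above; the sign convention of the exponent is an assumption.
def butterfly(x):
    r = len(x)
    w = cmath.exp(2j * cmath.pi / r)
    return [sum(x[j] * w ** (j * k) for j in range(r)) for k in range(r)]

print(butterfly([1, 2]))   # radix-2: sum/difference pair, ~[(3+0j), (-1+0j)]
```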

[0033] $F_{c+1}([d]_{0,n-c-2}, k_c, [k]_{c-1,0})$ may represent the stage-c output for index $[[d]_{0,n-c-2}, k_c, [k]_{c-1,0}]$, where the $k_i$ represent already processed digits and the $d_i$ are digits that are yet to be processed, with $k_i < r_i$ and $d_i < r_i$. $F_0(d_0, \ldots, d_{n-1})$ are the input sampled data points. Then

$FFT(k_{n-1}, \ldots, k_0) = F_n(k_{n-1}, \ldots, k_0)$.

[0034] For DIT decomposition, the FFT stage formula may be represented as:

$F_{c+1}([d]_{0,n-c-2}, k_c, [k]_{c-1,0}) = B_{k_c}(x_0, \ldots, x_{r_c-1})$,

where $x_i = w^{i \cdot [k]_{c-1,0}} \cdot F_c([d]_{0,n-c-2}, i, [k]_{c-1,0})$ and $w$ is a complex root of one of order $r_0 \cdot r_1 \cdot \ldots \cdot r_c$.

[0035] For DIF decomposition, the FFT stage formula may be represented as:

$F_{c+1}([d]_{0,n-c-2}, k_c, [k]_{c-1,0}) = w^{k_c \cdot [k]_{c-1,0}} \cdot B_{k_c}(x_0, \ldots, x_{r_c-1})$,

where $x_i = F_c([d]_{0,n-c-2}, i, [k]_{c-1,0})$ and $w$ is a complex root of one of order $r_0 \cdot r_1 \cdot \ldots \cdot r_c$.

[0036] DIT decomposition may lead to digit reverse order of the input data points, and DIF decomposition may lead to digit reverse order of the output data points.

[0037] DIF and DIT may differ above in whether multiplication by twiddle factors is performed before or after the butterfly operation.

[0038] For the above-mentioned DIF, a radix-$r_c$ butterfly in stage c utilizes inputs with index numbers $[k_{n-1}, \ldots, k_{c+1}, k_c, k_{c-1}, \ldots, k_0]$, where $k_c$ varies from 0 to $r_c - 1$. The radix-$r_c$ butterfly in stage c may then be indexed as $[k_{n-1}, \ldots, k_{c+1}, k_{c-1}, \ldots, k_0]$.

[0039] According to an embodiment of the present disclosure, memory 220 may be a random access memory (RAM).

[0040] According to an embodiment of the present disclosure, memory 220 may be a multi-bank memory with $r_{n-1}$ banks to allow pipelining butterfly execution. A memory having multiple memory banks may have independent I/O ports and buses for each memory bank, such that multiple memory banks may be accessed (for example, in read and write operations) concurrently. Alternatively, each of the multiple memory banks may be a group of memory locations, and memory 220 may allow generally simultaneous access of multiple memory banks, by encoding, aggregating, staggering, or interleaving accesses on shared memory I/O ports and buses.
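
The kind of access conflict the bank assignment is designed to avoid can be illustrated with a toy multi-bank memory model. The class name, port-count parameter, and request format below are assumptions for illustration only, not part of the disclosure:

```python
# Toy model of a multi-bank memory: two accesses to the same bank in the
# same clock exceed the bank's port count and raise a "bank conflict".
class BankedMemory:
    def __init__(self, num_banks, words_per_bank, ports_per_bank=1):
        self.banks = [[0j] * words_per_bank for _ in range(num_banks)]
        self.ports_per_bank = ports_per_bank

    def access(self, requests):
        """requests: list of (op, bank, addr[, value]) issued in one clock."""
        used = {}
        for req in requests:
            op, bank, addr = req[0], req[1], req[2]
            used[bank] = used.get(bank, 0) + 1
            if used[bank] > self.ports_per_bank:
                raise RuntimeError(f"bank conflict on bank {bank}")
            if op == "write":
                self.banks[bank][addr] = req[3]
        return [self.banks[b][a] for op, b, a, *_ in requests if op == "read"]

mem = BankedMemory(num_banks=4, words_per_bank=8)
mem.access([("write", 0, 0, 1 + 0j), ("write", 1, 0, 2 + 0j)])   # fine
print(mem.access([("read", 0, 0), ("read", 1, 0)]))              # fine
# mem.access([("read", 2, 0), ("write", 2, 1, 0j)])  # would raise: same bank
```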

[0041] According to an embodiment of the present disclosure, PU 240 may include a processor with general processing capabilities, or specialized hardware. PU 240 may process the data sequentially, in parallel, staggered, interleaved, or in various processes to prioritize between multiple butterfly stage operations, to maximize data throughput and minimize a waiting period for the memory or the processor, without having to increase overall circuit space, power, or clocking speed.

[0042] The memory banks in the embodiments of the present disclosure may include any of the above and other possible grouping of memory locations. Memory bank assignments in the embodiments of the present disclosure may include any memory group identification, indexes, addresses, or labels, that may be used for controlling access to a group of memory locations.

[0043] Each radix-r butterfly operation may include r memory reads and r memory writes. Memory bank and address assignments may be generated depending on the number of sampled data points, and adjusted at run-time.

[0044] For example, $m(k_{n-1}, \ldots, k_0)$ may represent the bank assignment and $a(k_{n-1}, \ldots, k_0)$ may represent the address assignment within the bank for butterfly index number $[k_{n-1}, \ldots, k_0]$.

[0045] If, for example, $a(k_{n-1}, \ldots, k_0) = [k_{n-2}, \ldots, k_0]$, and $I_c(d_{n-1}, \ldots, d_{c+1}, d_{c-1}, \ldots, d_0, d) = [d_{n-1}, \ldots, d_{c+1}, d, d_{c-1}, \ldots, d_0]$, then $m(k_{n-1}, \ldots, k_1, k_0) = m(I_c([k_{n-1}, \ldots, k_{c+1}, k_{c-1}, \ldots, k_1], k_0))$.

[0046] While there may be a dependency between subsequent stages, butterflies within each stage are independent of each other and therefore may be calculated in arbitrary order.

[0047] Suppose $q_c$ butterflies are run simultaneously in stage c. Stage n-1 has only one butterfly run at a time, because only $r_{n-1}$ memory banks are available. For any stage c that runs $q_c$ butterflies generally simultaneously, the inner loop may iterate over butterfly groups indexed $[k_{n-1}, \ldots, k_{c+2}, \hat{k}_{c+1}, k_{c-1}, \ldots, k_0]$.

[0048] $\check{k}_{c+1}$ may represent the $\check{k}_{c+1}$'th butterfly executed in the $[k_{n-1}, \ldots, k_{c+2}, \hat{k}_{c+1}, k_{c-1}, \ldots, k_0]$'th iteration of the loop iterating over butterfly groups in stage c, where $\check{k}_{c+1} < q_c$.

[0049] $k_{c+1}$ may be represented as being split into $[\hat{k}_{c+1}, \check{k}_{c+1}]$, where $\hat{k}_{c+1}$ is used as a part of the butterfly group index number, while $\check{k}_{c+1}$ is used to enumerate butterflies within the group.

[0050] The traverse order (the sequence order in which the N data sample entries in the memory are processed) for all stages may be represented as:

$T_c(k_{n-1}, \ldots, k_{c+2}, \hat{k}_{c+1}, \check{k}_{c+1}, k_{c-1}, \ldots, k_0) = [k_{n-1}, \ldots, k_{c+2}, \check{k}_{c+1}, \hat{k}_{c+1}, k_{c-1}, \ldots, k_0]$.

[0051] $m_c([k_{n-1}, \ldots, k_0])$ may represent the memory bank assignment for use in iteration $[k_{n-1}, \ldots, k_0]$ of the butterfly loop in stage c.

[0052] $q_c$ radix-$r_c$ butterflies may run in parallel, using the multiple memory banks.

[0053] A conflict-free bank assignment that allows multiple butterflies of small-radix stages to run generally simultaneously in a mixed-radix FFT operation with the traverse order above may be represented as:

$m(k_{n-1}, \ldots, k_0) = \left( \sum_{i=0}^{n-1} Q_i \cdot k_i \right) \bmod r_{n-1}$,

where the $Q_i$ may represent constants that depend on the radixes chosen for the stages of the FFT operation.
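
The constants $Q_i$ are not specified here, so the following sketch only shows the general weighted-digit-sum form and a brute-force helper that can be used to check a candidate choice of $Q_i$ on a small radix list. The helper names and the all-ones choice of Q in the example are assumptions for illustration, not the disclosed constants, and the check covers only the wings of a single butterfly per stage; checking simultaneously launched small-radix butterflies would extend it along the same lines:

```python
from itertools import product

# Weighted-digit-sum bank assignment of paragraph [0053]:
# m(k) = (sum_i Q_i * k_i) mod r_{n-1}
def bank(digits, Q, num_banks):
    return sum(q * k for q, k in zip(Q, digits)) % num_banks

def wings_conflict_free(radixes, Q, stage):
    """Check that the wings of every butterfly of `stage` land in distinct
    banks (a butterfly = all values of digit `stage`, other digits fixed)."""
    num_banks = radixes[-1]
    others = [range(rad) for i, rad in enumerate(radixes) if i != stage]
    for rest in product(*others):
        banks = set()
        for k_c in range(radixes[stage]):
            digits = list(rest[:stage]) + [k_c] + list(rest[stage:])
            banks.add(bank(digits, Q, num_banks))
        if len(banks) != radixes[stage]:
            return False
    return True

# toy check with an assumed Q (all ones) for radixes [2, 4, 4]
print(all(wings_conflict_free([2, 4, 4], [1, 1, 1], c) for c in range(3)))
```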

[0054] Various modifications to the above presented continuous-flow conflict-free mixed- radix FFT operation in multi-bank memory are possible.

FFT in DSP utilizing a dual-port memory

[0055] FIG. 3 illustrates an exemplary processing device 300 according to an embodiment of the present disclosure.

[0056] According to an embodiment of the present disclosure, a processing device 300 that performs a mixed-radix FFT using a dual-port multi-bank memory is described.

[0057] The processing device 300 may be connected to a memory 320 with (R number of) multiple memory banks 320.0 to 320.R-1, containing memory capacity for storing N entries of complex data points for processing. The processing device 300 may include an address generator (AGU) 310 that generates the memory address assignments and a traverse order for the data according to the mixed-radix settings. The AGU 310 is connected to an interface 380, which reads the data from the memory 320 according to the memory address assignments in the traversal order generated by the AGU 310 and writes the processed data back into the memory 320. The AGU 310 is connected to a processor (PU) 340, which processes the data of more than one butterfly stage operations of the FFT, prior to the interface 380 writing the processed data back to the memory 320.

[0058] Memory 320 here may be a dual-port memory, with one set of input (write) ports and another set of output (read) ports, which may allow memory 320 to perform one read operation and one write operation concurrently or generally simultaneously (for example, in a single clock period).

[0059] For example, consider a FFT operation of data sampled at N points, where $N = r \cdot R^{n-1}$, $2r < R$, R is divisible by r, and R, r are radixes of butterflies in the FFT. Furthermore, $N = r_0 \cdot r_1 \cdot r_2 \cdot \ldots \cdot r_{n-1}$ and $R = r \cdot q$; then $r_0 = r$ and $r_1 = \ldots = r_{n-1} = R$.

[0060] The FFT operation may be performed using the processing device 300, by implementing an addressing strategy that allows execution of q butterflies simultaneously in the radix-r stage. Executing multiple butterflies simultaneously allows the FFT operation to access multiple memory banks generally simultaneously, and stage the parallel calculations in PU 340 generally simultaneously, to reduce waiting time associated with sequential processing of butterflies in the FFT. This makes the radix-r calculation q times faster in speed performance.

[0061] The AGU 310 may use a traverse order $T_c$, which may be represented as

$T_c(k_{n-1}, \ldots, k_{c+2}, \hat{k}_{c+1}, \check{k}_{c+1}, k_{c-1}, \ldots, k_0) = [k_{n-1}, \ldots, k_{c+2}, \check{k}_{c+1}, \hat{k}_{c+1}, k_{c-1}, \ldots, k_0]$,

together with a bank assignment m, to provide conflict-free memory access.

[0062] For a radix-R stage indexed c, conflicts may only occur between wings of one butterfly.

[0063] For example, if a conflict could occur on wings $k_c$, $\tilde{k}_c$, i.e. their bank assignments coincide, then $k_c \equiv \tilde{k}_c \pmod{R}$.

[0064] Because $k_c, \tilde{k}_c < R$, then $k_c = \tilde{k}_c$. Thus, conflicts in radix-R stages may be prevented.

[0065] In a similar manner, conflicts within one butterfly in the radix-r stage may be prevented.

[0066] For another example, if two butterflies in the same butterfly group in the radix-r stage could have a conflict, i.e.

$m(k_{n-1}, \ldots, k_2, \hat{k}_1 \cdot q + \check{k}_1, k_0) = m(k_{n-1}, \ldots, k_2, \hat{k}_1 \cdot q + \tilde{\check{k}}_1, \tilde{k}_0)$,

where $\check{k}_1, \tilde{\check{k}}_1 < q$, then $\check{k}_1 + k_0 \cdot q \equiv \tilde{\check{k}}_1 + \tilde{k}_0 \cdot q \pmod{R}$.

[0067] Because $k_0, \tilde{k}_0 < r$, then $\check{k}_1 = \tilde{\check{k}}_1$ and $k_0 = \tilde{k}_0$. Thus, conflicts may be prevented.
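
The bank-assignment formula itself is not reproduced in this text. The brute-force sketch below assumes a bank assignment of the form $m(k) = (k_{n-1} + \ldots + k_1 + q \cdot k_0) \bmod R$, which is consistent with the two conflict derivations above but is a reconstruction rather than a quotation of the disclosure; it checks both cases for a small example:

```python
from itertools import product

# Brute-force check of the two conflict cases above, assuming a bank
# assignment m(k) = (k_{n-1} + ... + k_1 + q*k_0) mod R (a reconstruction
# consistent with the derivations in the text, not a verbatim formula).
r, q = 2, 4
R = r * q
n = 3                              # radixes: r_0 = r, r_1 = r_2 = R

def bank(digits):                  # digits listed least-significant first
    return (sum(digits[1:]) + q * digits[0]) % R

# Case 1: the R wings of any radix-R butterfly (stages 1 and 2) hit R
# distinct banks.
for c in (1, 2):
    for rest in product(range(r), range(R)):
        banks = set()
        for k_c in range(R):
            digits = list(rest[:c]) + [k_c] + list(rest[c:])
            banks.add(bank(digits))
        assert len(banks) == R

# Case 2: the q*r wings of the q radix-r butterflies sharing one group
# (fixed hat_k1, check_k1 = 0..q-1) hit q*r distinct banks.
for k2 in range(R):
    for hat_k1 in range(R // q):
        banks = {bank([k0, hat_k1 * q + chk, k2])
                 for chk in range(q) for k0 in range(r)}
        assert len(banks) == q * r

print("no conflicts found for the reconstructed bank assignment")
```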

[0068] According to an embodiment of the present disclosure, the values of n and r above may be adjusted at run-time to use one FFT processing device to calculate transforms (and reverse transforms) of different sizes. For example, depending on the size of the data sample N, available memory banks, available memory I/O ports or I/O bandwidth, processor speed, or other factors, the values of n and r may be adjusted at run-time to maximize the data throughput in the processing device, and to minimize a waiting period for the memory or the processor, without having to increase overall circuit space, power, or clocking speed.

[0069] According to an embodiment of the present disclosure, PU 340 may include a processor with general processing capabilities, or specialized hardware. PU 340 may process the data sequentially, in parallel, staggered, interleaved, or in various processes to prioritize between multiple butterfly stage operations, to maximize data throughput and minimize the waiting period for the memory or the processor, without having to increase overall circuit space, power, or clocking speed.

[0070] Table 1 below illustrates simulated performance gain in FFT operation using the above method and processing device.

Table 1. Estimated clock count performance

FFT processor utilizing self-sorting addressing

[0071] FIG. 4 illustrates an exemplary processing device 400 according to an embodiment of the present disclosure.

[0072] According to an embodiment of the present disclosure, a processing device 400 that performs a mixed-radix FFT with self-sorting using a dual-port multi-bank memory is described.

[0073] The processing device 400 may be connected to a memory 420 with (R number of) multiple memory banks 420.0 to 420.R-1, containing memory capacity for storing N entries of complex data points for processing. The processing device 400 may include an address generator (AGU) 410 that generates the memory address assignments and a traverse order for the data according to the mixed-radix settings. The AGU 410 is connected to an interface 480, which reads the data from the memory 420 according to the memory address assignments in the traversal order generated by the AGU 410 and writes the processed data back into the memory 420. The AGU 410 is connected to a processor (PU) 440, which processes the data of more than one butterfly stage operations of the FFT, prior to the interface 480 writing the processed data back to the memory 420. Additionally, a pipeline 450 connects the input interface 480 to the PU 440.

[0074] Memory 420 here may be a dual-port memory, with one set of input (write) ports and another set of output (read) ports, which may allow memory 420 to perform one read operation and one write operation concurrently or generally simultaneously (for example, in a single clock period).

[0075] For example, consider a FFT operation of data sampled at N points, where $N = r \cdot R^{n-1}$, $2r < R$, R is divisible by r, and R, r are radixes of butterflies in the FFT. Furthermore, $N = r_0 \cdot r_1 \cdot r_2 \cdot \ldots \cdot r_{n-1}$ and $R = r \cdot q$; then $r_0 = r$ and $r_1 = \ldots = r_{n-1} = R$.

[0076] The FFT operation may be performed using the processing device 400, by implementing an addressing strategy that allows execution of q butterflies simultaneously in the radix-r stage. Executing multiple butterflies simultaneously allows the FFT operation to access multiple memory banks generally simultaneously, and stage the parallel calculations in PU 440 generally simultaneously, to reduce waiting time associated with sequential processing of butterflies in the FFT. This makes the radix-r calculation q times faster in speed performance.

[0077] DIT and DIF may lead to input or output data having reversed digit order. In order to obtain the proper result, a digit reverse operation may need to be performed.

[0078] According to an embodiment of the present disclosure, a digit reverse operation may be incorporated into the operation of the processing device such that a separate digit reverse operation may not be required. At the same time, the processing device of the embodiment may launch multiple butterflies in the radix-r stage generally simultaneously.

[0079] The AGU 410 may use a bank assignment that provides conflict-free memory access.

[0080] The first (n+1)/2 stages may use a traverse order $T_c$.

[0081] However, for radix-R stages, the outputs of butterflies may be transposed. The output of data indexed $[w, u]$ is written to the memory location of data indexed $[u, w]$, where $w \in 0..r-1$.

[0082] Starting from stage (n+1)/2, for each stage c, where $c \neq n-1$, a butterfly with input of data indexed $[k_{n-1}, \ldots, k_{n-c-1}, \ldots, k_c, \ldots, k_0]$ may have outputs stored in memory addresses calculated for data indexed $[k_{n-1}, \ldots, k_c, \ldots, k_{n-c-1}, \ldots, k_0]$, with digits $k_c$ and $k_{n-c-1}$ exchanged.

[0083] Alternatively, the second (n+1)/2 stages may use a traverse order $T_c$, which may be represented as

$T_c(k_{n-1}, \ldots, k_{c+2}, \hat{k}_{c+1}, \check{k}_{c+1}, k_{c-1}, \ldots, k_0) = [k_{n-1}, \ldots, k_{c+2}, \check{k}_{c+1}, \hat{k}_{c+1}, k_{c-1}, \ldots, k_0]$.

[0084] And in the first (n+1)/2 stages, the outputs of butterflies may be transposed.
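
Reading the transposition above as an exchange of digits c and n-c-1 of the mixed-radix index (an interpretation of the reconstructed text, not a verbatim formula from the disclosure), a destination index may be sketched as:

```python
# Self-sorting output transposition, read as exchanging digits c and
# n-1-c of the index (digits listed least-significant first).
def transpose_destination(digits, c):
    n = len(digits)
    out = list(digits)
    out[c], out[n - 1 - c] = out[n - 1 - c], out[c]
    return out

print(transpose_destination([0, 1, 2, 3, 4], 1))   # -> [0, 3, 2, 1, 4]
```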

[0085] The output transposition may be accomplished by delaying the write operations in the butterfly stages that perform the digit reverse operations above.

[0086] According to the embodiment of the present disclosure, stages performing digit reverse operations are not in-place. Thus, it may need to be ensured that during the various stage computations, a memory location is written only after it is read by a butterfly.

[0087] For each stage c performing digit reverse operations, the correct order of read and write operations may be guaranteed by reordering butterflies within the stage, so that all butterflies with overlapping data index values of $k_{n-1}, \ldots, k_{c+1}, k_{c-1}, \ldots, k_{n-c}, k_{n-c-2}, \ldots, k_0$ are executed sequentially in one batch, and by adding pipeline 450 with delays to postpone write operations for $R - v$ clocks, where $v$ is the pipeline delay length. Since write operations of butterflies from one batch can only change data values already read in the same batch and the butterfly loop is pipelined, the correct read and write order may be ensured.
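
Postponing writes by a fixed number of clocks can be sketched with a simple FIFO; the class name, record format, and delay value below are illustrative assumptions rather than the disclosed implementation of pipeline 450:

```python
from collections import deque

# Toy sketch of postponing writes by a fixed number of clocks with a FIFO.
class DelayedWriter:
    def __init__(self, memory, delay_clocks):
        self.memory = memory
        self.fifo = deque([None] * delay_clocks)

    def clock(self, write=None):
        """Advance one clock: enqueue the new write, issue the oldest one."""
        self.fifo.append(write)
        due = self.fifo.popleft()
        if due is not None:
            bank, addr, value = due
            self.memory[bank][addr] = value

memory = [[0] * 4 for _ in range(4)]
writer = DelayedWriter(memory, delay_clocks=3)
writer.clock((0, 0, 42))        # issued now, lands 3 clocks later
for _ in range(3):
    writer.clock()
print(memory[0][0])             # -> 42
```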

[0088] While some of the butterfly stage operations may need to have write operations delayed, parallel execution of multiple butterfly stage operations may increase the overall FFT operation speed.

[0089] According to an embodiment of the present disclosure, pipeline 450 may include any hardware and/or software component to postpone write operations for a predetermined number of clock periods. For example, pipeline 450 may include software loop delays, or hardware components, such as flip-flops, buffers, etc., capable of postponing transfer of data. Pipeline 450 may also be located anywhere along the read or write paths between memory 420, interface 480, and PU 440.

[0090] According to an embodiment of the present disclosure, the values of n and r above may be adjusted at run-time to use one FFT processing device to calculate transforms (and reverse transforms) of different sizes. For example, depending on the size of the data sample N, available memory banks, available memory I/O ports or I/O bandwidth, processor speed, or other factors, the values of n and r may be adjusted at run-time to maximize the data throughput in the processing device, and to minimize the waiting period for the memory or the processor, without having to increase overall circuit space, power, or clocking speed.

[0091] According to an embodiment of the present disclosure, PU 440 may include a processor with general processing capabilities, or specialized hardware. PU 440 may process the data sequentially, in parallel, staggered, interleaved, or in various processes to prioritize between multiple butterfly stage operations, to maximize data throughput and minimize waiting period for the memory or the processor, without having to increase overall circuit or power or clocking speed.

FFT processor utilizing single-port memories

[0092] FIG. 5 illustrates an exemplary processing device 500 according to an embodiment of the present disclosure.

[0093] According to an embodiment of the present disclosure, a processing device 500 that performs a mixed-radix FFT using a single-port multi-bank memory is described.

[0094] The processing device 500 may be connected to a memory 520 with (2R number of) multiple memory banks 520.0 to 520.2R-1, containing memory capacity for storing N entries of complex data points for processing. The processing device 500 may include an address generator unit (AGU) 510 that generates the memory address assignments and a traverse order for the data according to the mixed-radix settings. The AGU 510 is connected to an interface 580, which reads the data from the memory 520 according to the memory address assignments in the traversal order generated by the AGU 510 and writes the processed data back into the memory 520. The AGU 510 is connected to a processor (PU) 540, which processes the data of more than one butterfly stage operations of the FFT, prior to the interface 580 writing the processed data back to the memory 520.

[0095] Memory 520 here may be a single-port memory, with one set of ports for both input (write) and output (read) operations. Single-port memory may require less circuitry space.

[0096] For example, consider a FFT operation of data sampled at N points, where $N = r \cdot R^{n-1}$, $2r < R$, R is divisible by r, and R, r are radixes of butterflies in the FFT. Furthermore, $N = r_0 \cdot r_1 \cdot r_2 \cdot \ldots \cdot r_{n-1}$, $R = r \cdot q$, and r is even; then $r_0 = r$ and $r_1 = \ldots = r_{n-1} = R$.

[0097] The FFT operation may be performed using the processing device 500, by implementing an addressing strategy that allows execution of multiple butterflies simultaneously in the radix-r stage. Executing multiple butterflies simultaneously allows the FFT operation to access multiple memory banks generally simultaneously, and stage the parallel calculations in PU 540 generally simultaneously, to reduce waiting time associated with sequential processing of butterflies in the FFT.

[0098] The AGU 510 may be modified in order to allow use of 2R single-port memory banks without increasing the overall memory word count.

[0099] The AGU 510 may generate memory bank assignments, a traverse order for stage 0, and a traverse order for the other stages; the latter may be represented as

$T_c(k_{n-1}, \ldots, k_{c+2}, \hat{k}_{c+1}, \check{k}_{c+1}, k_{c-1}, \ldots, k_0) = [k_{n-1}, \ldots, k_{c+2}, \check{k}_{c+1}, \hat{k}_{c+1}, k_{c-1}, \ldots, k_0]$.

[00100] If the total read and write path length between memory 520, interface 580, and PU 540 is odd (as measured in clock periods), the bank assignment m above, used with the traversal orders above, may ensure no memory access conflicts for FFT operations in the above configuration using a single-port memory.

[00101] Because every butterfly operation of the radix-r stage may utilize all values of $d_0$, and the absence of conflicts in radix-R stages may be ensured by interleaving $d_0 \bmod 2$ values for subsequent butterflies, the processing device may need to wait for the radix-r stage operations to complete before launching the first radix-R stage.

[00102] For example, in a radix-R stage indexed c, memory conflicts might occur between read operations of different wings of one butterfly, between write operations of different wings of one butterfly, or between the write operation of a butterfly and the read operation of some subsequent butterfly. Read/write conflicts within one butterfly on wings $k_c$, $\tilde{k}_c$ may be represented as

$2 \sum_{i=0, i \neq c}^{n-1} k_i + 2 k_c - (k_0 \bmod 2) \equiv 2 \sum_{i=0, i \neq c}^{n-1} k_i + 2 \tilde{k}_c - (k_0 \bmod 2) \pmod{2R}$,

and $k_c \equiv \tilde{k}_c \pmod{R}$, i.e. $k_c = \tilde{k}_c$. Thus, conflicts within one radix-R butterfly are prevented.

[00103] Since the traverse order $T_c$ is used in radix-R stages and r is even, values of $k_0 \bmod 2$ interleave for subsequent butterfly operations. With the total read and write path having odd length (as measured in clock periods), the processing device may ensure that any two butterflies that have read and write operations within the same clock would have different parity of $k_0$, and therefore use banks with different parity. Therefore, conflicts between wings of different butterflies in radix-R stages may be prevented. Similarly, conflicts between the butterflies of different radix-R stages may be prevented.
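
The parity argument can be checked in miniature: with an odd read-to-write latency and bank parity alternating from one butterfly to the next, the read and the write issued in the same clock always land on banks of opposite parity. The framing below (butterfly t reads in clock t and writes in clock t + L) is an illustrative assumption:

```python
# Butterfly t reads in clock t and writes back in clock t + L.  If L is odd
# and bank parity alternates with t, the read and the write issued in the
# same clock always target opposite-parity banks, so they cannot hit the
# same single-port bank.
def same_clock_parities(latency, num_clocks=32):
    pairs = []
    for t in range(latency, num_clocks):
        read_parity = t % 2                 # butterfly t reads now
        write_parity = (t - latency) % 2    # butterfly t - L writes back now
        pairs.append((read_parity, write_parity))
    return pairs

assert all(r != w for r, w in same_clock_parities(latency=5))   # odd: safe
assert any(r == w for r, w in same_clock_parities(latency=4))   # even: clash
```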

[00104] For the radix-r stage, the memory bank assignment for an arbitrary wing of an arbitrary butterfly may be represented as:

$m(T_0(k_{n-1}, \ldots, k_2, \hat{k}_1, \check{k}_1), k_0) = \left( 2 \sum_{i=2}^{n-1} k_i + 2 \hat{k}_1 + 2 \check{k}_1 \cdot r + 2 k_0 - (k_0 \bmod 2) \right) \bmod 2R$.

[00105] Data points in butterfly operations from one group may have overlapping index values of $k_{n-1}, \ldots, k_2, \hat{k}_1$, and may differ in $\check{k}_1$, $k_0$.

[00106] Because $\check{k}_1 < q$ and $k_0 < r$, then $2 \check{k}_1 \cdot r + 2 k_0 - (k_0 \bmod 2) < 2R$.

[00107] Because of this, bank assignment values may overlap only for overlapping index values of $\check{k}_1$, $k_0$. Thus, conflicts within one butterfly group may be prevented.

[00108] Index values of $\hat{k}_1$ interleave for subsequent butterfly groups. With a pipeline having odd length, it is guaranteed that any two butterfly groups that have read and write operations within the same clock have different parity of $\hat{k}_1$, and therefore use banks with a different second bit in the radix-2 representation of the bank's number. Hence there are no conflicts on wings of butterflies from different butterfly groups in the radix-r stage.

[00109] According to an embodiment of the present disclosure, the values of n and r above may be adjusted at run-time to use one FFT processing device to calculate transforms (and reverse transforms) of different sizes. For example, depending on the size of the data sample N, available memory banks, available memory I/O ports or I/O bandwidth, processor speed, or other factors, the values of n and r may be adjusted at run-time to maximize the data throughput in the processing device, and to minimize the waiting period for the memory or the processor, without having to increase overall circuit space, power, or clocking speed.

[00110] According to an embodiment of the present disclosure, PU 540 may include a processor with general processing capabilities, or specialized hardware. PU 540 may process the data sequentially, in parallel, staggered, interleaved, or in various processes to prioritize between multiple butterfly stage operations, to maximize data throughput and minimize the waiting period for the memory or the processor, without having to increase overall circuit space, power, or clocking speed.

Self-sorting FFT processor with single-port memories

[00111] FIG. 6 illustrates an exemplary processing device 600 according to an embodiment of the present disclosure.

[00112] According to an embodiment of the present disclosure, a processing device 600 that performs a mixed-radix FFT with self-sorting using a single-port multi-bank memory is described.

[00113] The processing device 600 may be connected to a memory 620 with (2R number of) multiple memory banks 620.0 to 620.2R-1, containing memory capacity for storing N entries of complex data points for processing. The processing device 600 may include an address generator (AGU) 610 that generates the memory address assignments and a traverse order for the data according to the mixed-radix settings. The AGU 610 is connected to an interface 680, which reads the data from the memory 620 according to the memory address assignments in the traversal order generated by the AGU 610 and writes the processed data back into the memory 620. The AGU 610 is connected to a processor (PU) 640, which processes the data of more than one butterfly stage operations of the FFT, prior to the interface 680 writing the processed data back to the memory 620. Additionally, a pipeline 650 connects the input interface 680 to the PU 640.

[00114] Memory 620 here may be a single-port memory, with one set of ports for both input (write) and output (read) operations. Single-port memory may require less circuitry space.

[00115] For example, consider a FFT operation of data sampled at N points, where $N = r \cdot R^{n-1}$, $2r < R$, R is divisible by r, and R, r are radixes of butterflies in the FFT. Furthermore, $N = r_0 \cdot r_1 \cdot r_2 \cdot \ldots \cdot r_{n-1}$, $R = r \cdot q$, $n > 3$, and r is even; then $r_0 = r$ and $r_1 = \ldots = r_{n-1} = R$.

[00116] The FFT operation may be performed using the processing device 600, by implementing an addressing strategy that allows execution of multiple butterflies simultaneously in the radix-r stage. Executing multiple butterflies simultaneously allows the FFT operation to access multiple memory banks generally simultaneously, and stage the parallel calculations in PU 640 generally simultaneously, to reduce waiting time associated with sequential processing of butterflies in the FFT.

[00117] DIT and DIF may lead to input or output data having reversed digit order. In order to obtain the proper result, a digit reverse operation may need to be performed.

[00118] According to an embodiment of the present disclosure, a digit reverse operation may be incorporated into the operation of the processing device such that a separate digit reverse operation may not be required. At the same time, the processing device of the embodiment may launch multiple butterflies in the radix-r stage generally simultaneously.

[00119] In the last stage, for radix r, the bank assignment may need to be invariant with respect to switching of the last digit $k_{n-1}$ and the first digit $k_0$. The AGU 610 may generate such a bank assignment.

[00120] The traverse orders generated by the AGU 610 may take one form for stages $c < (n+1)/2$ and another form for the remaining stages.

[00121] For radix-R stages, the outputs of butterflies may be transposed. The output of data indexed $[u, w]$ is written to the memory location of data indexed $[w, u]$.

[00122] Starting from stage (n+1)/2, for each stage c, where $c \neq n-1$, a butterfly with input of data indexed $[k_{n-1}, \ldots, k_{n-c-1}, \ldots, k_c, \ldots, k_0]$ may have outputs stored in memory addresses calculated for data indexed $[k_{n-1}, \ldots, k_c, \ldots, k_{n-c-1}, \ldots, k_0]$, with digits $k_c$ and $k_{n-c-1}$ exchanged.

[00123] Alternatively, in the first (n+1)/2 stages, the outputs of butterflies may be transposed.

[00124] The output transposition may be accomplished by delaying the write operations in the butterfly stages that perform the digit reverse operations above.

[00125] According to the embodiment of the present disclosure, stages performing digit reverse operations are not in-place. Thus, it may need to be ensured that during the various stage computations, a memory location is written only after it is read by a butterfly.

[00126] Furthermore, butterfly stage operations may be grouped into batches of size 2R. Read/write conflicts may be prevented by interleaving some index values. For example, in stage c, one size-2R batch may be formed from two size-R batches covering all index values of $k_c$, $k_{n-c-1}$, such that the index values of $k_1$ interleave between the two size-R batches.

[00127] For example, batch 1 having R butterflies (Butterfly 1.0, Butterfly 1.1, ..., Butterfly 1.R-2, Butterfly 1.R-1) and batch 2 having R butterflies (Butterfly 2.0, Butterfly 2.1, ..., Butterfly 2.R-2, Butterfly 2.R-1) can be interleaved to form a size-2R batch (Butterfly 1.0, Butterfly 2.0, Butterfly 1.1, Butterfly 2.1, ..., Butterfly 1.R-2, Butterfly 2.R-2, Butterfly 1.R-1, Butterfly 2.R-1).
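
The interleaving of paragraph [00127] amounts to alternating the elements of the two size-R batches; a one-function sketch (hypothetical helper name):

```python
# Alternate the elements of two equal-length batches into one batch of 2R.
def interleave(batch1, batch2):
    return [b for pair in zip(batch1, batch2) for b in pair]

R = 4
batch1 = [f"Butterfly 1.{i}" for i in range(R)]
batch2 = [f"Butterfly 2.{i}" for i in range(R)]
print(interleave(batch1, batch2))
# ['Butterfly 1.0', 'Butterfly 2.0', 'Butterfly 1.1', 'Butterfly 2.1', ...]
```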

[00128] Similar to the processing device 400 in FIG. 4, the use of a pipeline delay of length $2R - 1 - v$ may prevent read/write conflicts in self-sorting.

[00129] While some of the butterfly stage operations may need to have write operations delayed, parallel execution of multiple butterfly stage operations may increase the overall FFT operation speed.

[00130] According to an embodiment of the present disclosure, pipeline 650 may include any hardware and/or software component to postpone write operations for a predetermined number of clock periods. For example, pipeline 650 may include software loop delays, or hardware components, such as flip-flops, buffers, etc., capable of postponing transfer of data. Pipeline 650 may also be located anywhere along the read or write paths between memory 620, interface 680, and PU 640.

[00131] According to an embodiment of the present disclosure, the values of n and r above may be adjusted at run-time to use one FFT processing device to calculate transforms (and reverse transforms) of different sizes. For example, depending on the size of the data sample N, available memory banks, available memory I/O ports or I/O bandwidth, processor speed, or other factors, the values of n and r may be adjusted at run-time to maximize the data throughput in the processing device, and to minimize the waiting period for the memory or the processor, without having to increase overall circuit space, power, or clocking speed.

[00132] According to an embodiment of the present disclosure, PU 640 may include a processor with general processing capabilities, or specialized hardware. PU 640 may process the data sequentially, in parallel, staggered, interleaved, or in various processes to prioritize between multiple butterfly stage operations, to maximize data throughput and minimize the waiting period for the memory or the processor, without having to increase overall circuit space, power, or clocking speed.

[00133] While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims.