Title:
APPARATUS, SYSTEM, AND METHOD OF COMPILING CODE FOR A PROCESSOR
Document Type and Number:
WIPO Patent Application WO/2024/079691
Kind Code:
A1
Abstract:
For example, a compiler may be configured to identify a loop nest based on a source code to be compiled into a target code to be executed by a target processor, the loop nest including a plurality of loops including at least a first loop and a second loop nested in the first loop, the first loop including at least one first-loop instruction outside the second loop; and to generate Address Generation Unit (AGU) configuration code to configure an AGU of the target processor based on the first-loop instruction, wherein the AGU configuration code is to configure a first dimension of the AGU based on the first loop and a second dimension of the AGU based on the second loop to configure a memory-access operation based on the first-loop instruction, to be performed at a start of the second loop or at an end of the second loop.
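For orientation, the kind of imperfect loop nest the abstract describes can be sketched in C. This is a hypothetical example, not taken from the patent: the initialization of `s` (a pre-header instruction) and the store to `acc[i]` (a latch instruction) are first-loop instructions that sit outside the inner (second) loop, and are the sort of operations the described AGU configuration would map onto an extra AGU dimension.

```c
/* Hypothetical imperfect loop nest (illustrative names only). */
int sum_rows(const int *a, int rows, int cols, int *acc) {
    int total = 0;
    for (int i = 0; i < rows; i++) {      /* first (outer) loop */
        int s = 0;                        /* pre-header instruction */
        for (int j = 0; j < cols; j++) {  /* second (inner) loop */
            s += a[i * cols + j];
        }
        acc[i] = s;                       /* latch store, outside the inner loop */
        total += s;
    }
    return total;
}
```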

Inventors:
ZUCKERMAN MICHAEL (IL)
ZAKS AYAL (IL)
Application Number:
PCT/IB2023/060307
Publication Date:
April 18, 2024
Filing Date:
October 12, 2023
Assignee:
MOBILEYE VISION TECHNOLOGIES LTD (IL)
International Classes:
G06F8/41
Other References:
SIM HYEONUK ET AL: "Mapping Imperfect Loops to Coarse-Grained Reconfigurable Architectures", IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, IEEE, USA, vol. 35, no. 7, 1 July 2016 (2016-07-01), pages 1092 - 1104, XP011614368, ISSN: 0278-0070, [retrieved on 20160616], DOI: 10.1109/TCAD.2015.2504918
HAAB MARTIN ET AL: "Automatic custom instruction identification in memory streaming algorithms", 2014 INTERNATIONAL CONFERENCE ON COMPILERS, ARCHITECTURE AND SYNTHESIS FOR EMBEDDED SYSTEMS (CASES), ACM, 12 October 2014 (2014-10-12), pages 1 - 9, XP032715684, DOI: 10.1145/2656106.2656114
Attorney, Agent or Firm:
SHICHRUR, Naim Avraham (IL)
Claims:
CLAIMS

What is claimed is:

1. A product comprising one or more tangible computer-readable non-transitory storage media comprising computer-executable instructions operable to, when executed by at least one processor, enable the at least one processor to cause a compiler to: identify a loop nest based on a source code to be compiled into a target code to be executed by a target processor, the loop nest comprising a plurality of loops, the plurality of loops comprising at least a first loop and a second loop nested in the first loop, wherein the first loop comprises at least one first-loop instruction, which is outside the second loop; generate Address Generation Unit (AGU) configuration code to configure an AGU of the target processor based on the first-loop instruction, wherein the AGU configuration code is to configure a first dimension of the AGU based on the first loop, and to configure a second dimension of the AGU based on the second loop, wherein the AGU configuration code is to configure the second dimension of the AGU to configure a memory-access operation to be performed at a start of the second loop or at an end of the second loop, wherein the memory-access operation is based on the first-loop instruction; and generate the target code based on compilation of the source code, wherein the target code is based on the AGU configuration code.

2. The product of claim 1, wherein the plurality of loops comprises a third loop nested in the first loop, the second loop is nested in the third loop, the first-loop instruction is outside the third loop, wherein the AGU configuration code is to configure a third dimension of the AGU based on the third loop, wherein the AGU configuration code is to configure the third dimension to configure the memory-access operation to be performed at the start of the second loop or at the end of the second loop.

3. The product of claim 2, wherein the third loop comprises a third-loop instruction, which is outside the second loop, wherein the AGU configuration code is to configure an other AGU of the target processor based on the third-loop instruction, wherein the AGU configuration code is to configure a first dimension of the other AGU based on the third loop, and to configure a second dimension of the other AGU based on the second loop, wherein the AGU configuration code is to configure the second dimension of the other AGU to configure an other memory-access operation to be performed at the start of the second loop or at the end of the second loop, wherein the other memory-access operation is based on the third-loop instruction.

4. The product of claim 3, wherein the instructions, when executed, cause the compiler to transform the loop nest into a transformed loop comprising the memory-access operation and the other memory-access operation, wherein the target code is based on the transformed loop.

5. The product of claim 2, wherein the AGU configuration code is to set a Maximum (Max) parameter of the second dimension of the AGU and a Max parameter of the third dimension of the AGU based on an entry size corresponding to the first-loop instruction.

6. The product of claim 1, wherein the AGU configuration code is to set a base parameter of the AGU based on a memory pointer of the first-loop instruction, and to set a Maximum (Max) parameter of the second dimension of the AGU based on an entry size corresponding to the first-loop instruction.

7. The product of claim 1, wherein the at least one first-loop instruction comprises a pre-header instruction to be performed before a first iteration of the second loop, wherein the AGU configuration code is to configure the second dimension of the AGU to configure the memory-access operation to be performed only at the start of the second loop.

8. The product of claim 7, wherein the AGU configuration code is to set a Minimum (Min) parameter of the second dimension of the AGU to zero.

9. The product of claim 7, wherein the instructions, when executed, cause the compiler to, based on a determination that the pre-header instruction comprises a load operation, configure the AGU configuration code to set a step parameter of the second dimension of the AGU to zero.

10. The product of claim 7, wherein the instructions, when executed, cause the compiler to, based on a determination that the pre-header instruction comprises a store operation, configure the AGU configuration code to set a step parameter of the second dimension of the AGU based on an entry size corresponding to the pre-header instruction.

11. The product of claim 7, wherein the AGU configuration code is to set a base parameter of the AGU to a memory pointer of the pre-header instruction.
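The pre-header settings described in claims 6-11 above can be sketched as a small configuration routine. This is a hedged illustration only; the struct and field names are hypothetical and do not come from the patent. Per the claims: the base is set to the memory pointer of the pre-header instruction, Min is set to zero, the step is zero for a load and the entry size for a store, and Max is based on the entry size.

```c
#include <stdbool.h>

/* Hypothetical AGU dimension descriptor (names are illustrative only). */
typedef struct {
    long min;   /* Minimum (Min) parameter */
    long max;   /* Maximum (Max) parameter */
    long step;  /* step parameter */
} agu_dim;

typedef struct {
    long base;      /* base parameter of the AGU */
    agu_dim dim2;   /* second dimension, tied to the inner (second) loop */
} agu_config;

/* Sketch of the pre-header configuration per claims 6-11. */
agu_config configure_preheader(long mem_ptr, long entry_size, bool is_store) {
    agu_config cfg;
    cfg.base = mem_ptr;                          /* claim 11 */
    cfg.dim2.min = 0;                            /* claim 8 */
    cfg.dim2.step = is_store ? entry_size : 0;   /* claims 9 and 10 */
    cfg.dim2.max = entry_size;                   /* claim 6 */
    return cfg;
}
```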

12. The product of any one of claims 1-11, wherein the at least one first-loop instruction comprises a latch instruction to be performed after a last iteration of the second loop, wherein the AGU configuration code is to configure the second dimension of the AGU to configure the memory-access operation to be performed only at the end of the second loop.

13. The product of claim 12, wherein the latch instruction comprises a load operation.

14. The product of claim 13, wherein the AGU configuration code is to set a base parameter of the AGU to a memory pointer of the latch instruction, to set a Minimum (Min) parameter of the second dimension of the AGU to zero, to set a Maximum (Max) parameter of the second dimension of the AGU to an entry size corresponding to the latch instruction, and to set a step parameter of the second dimension of the AGU to zero.

15. The product of claim 12, wherein the latch instruction comprises a store operation.

16. The product of claim 15, wherein the AGU configuration code is to set a base parameter of the AGU based on a first parameter value, a second parameter value and a third parameter value, wherein the first parameter value comprises an entry size corresponding to the latch instruction, the second parameter value comprises a total count of iterations over one or more loops, which are in the first loop and include the second loop, the third parameter value comprising a count of dimensions of the AGU corresponding to the one or more loops.

17. The product of claim 16, wherein the AGU configuration code is to set the base parameter, denoted Base, of the AGU as follows: Base = OrigBase + EntrySize * ([Σ TripCount(L)] - #InnerDims), wherein OrigBase denotes a memory pointer of the latch instruction, EntrySize denotes the entry size, [Σ TripCount(L)] denotes the total count of iterations over the one or more loops, and #InnerDims denotes the count of dimensions of the AGU corresponding to the one or more loops.
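The Base formula of claim 17 can be checked with illustrative numbers. The function and all values below are hypothetical; for example, with one inner loop of 8 iterations, 4-byte entries, and an original base of 0x1000, the adjusted base comes out to 0x1000 + 4 * (8 - 1) = 0x101C.

```c
/* Base = OrigBase + EntrySize * ([Σ TripCount(L)] - #InnerDims)
 * (claim 17), evaluated with made-up values for illustration. */
long latch_store_base(long orig_base, long entry_size,
                      const long *trip_counts, int num_inner_dims) {
    long total = 0;  /* Σ TripCount(L) over the inner loops */
    for (int k = 0; k < num_inner_dims; k++) {
        total += trip_counts[k];
    }
    return orig_base + entry_size * (total - num_inner_dims);
}
```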

18. The product of claim 15, wherein the AGU configuration code is to set a step parameter of the second dimension of the AGU based on an entry size corresponding to the latch instruction; to set a Minimum (Min) parameter of the second dimension of the AGU based on the entry size and a count of iterations in the second loop; and to set a Maximum (Max) parameter of the second dimension of the AGU based on the Min parameter of the second dimension of the AGU and the entry size.

19. The product of claim 18, wherein the AGU configuration code is to set the step parameter of the second dimension of the AGU based on an additive inverse of the entry size.

20. The product of claim 18, wherein the AGU configuration code is to set the Min parameter of the second dimension of the AGU based on a product of an additive inverse of the entry size and a subtraction result of subtracting one from the count of iterations in the second loop.

21. The product of claim 18, wherein the AGU configuration code is to set the Max parameter of the second dimension of the AGU based on a sum of the Min parameter of the second dimension of the AGU and the entry size.
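Claims 18-21 together can be read as the following parameter arithmetic for the latch-store case. This is a hedged sketch with hypothetical names: the step is the additive inverse of the entry size (claim 19), Min is the product of that inverse and the trip count minus one (claim 20), and Max is Min plus the entry size (claim 21). For instance, with a 4-byte entry and 8 inner-loop iterations: step = -4, Min = -28, Max = -24.

```c
/* Latch-store settings for the second AGU dimension (claims 18-21). */
void latch_store_dim(long entry_size, long trip_count,
                     long *step, long *min, long *max) {
    *step = -entry_size;                    /* claim 19 */
    *min  = -entry_size * (trip_count - 1); /* claim 20 */
    *max  = *min + entry_size;              /* claim 21 */
}
```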

22. The product of any one of claims 1-11, wherein the instructions, when executed, cause the compiler to transform the loop nest into a transformed loop comprising the memory-access operation, wherein the target code is based on the transformed loop.

23. The product of claim 22, wherein the transformed loop comprises a perfect flat loop, in which all compute operations of the loop nest are implemented in the transformed loop.

24. The product of claim 22, wherein the transformed loop comprises a fully collapsed loop comprising only a single-basic-block loop based on the plurality of loops.

25. The product of any one of claims 1-11, wherein the memory-access operation comprises a load operation or a store operation.

26. The product of any one of claims 1-11, wherein the source code comprises Open Computing Language (OpenCL) code.

27. The product of any one of claims 1-11, wherein the computer-executable instructions, when executed, cause the compiler to compile the source code into the target code according to a Low Level Virtual Machine (LLVM) based (LLVM-based) compilation scheme.

28. The product of any one of claims 1-11, wherein the target code is configured for execution by a Very Long Instruction Word (VLIW) Single Instruction/Multiple Data (SIMD) target processor.

29. The product of any one of claims 1-11, wherein the target code is configured for execution by a target vector processor.

30. A computing system comprising: at least one memory to store instructions; and at least one processor to retrieve the instructions from the memory and to execute the instructions to cause the computing system to: identify a loop nest based on a source code to be compiled into a target code to be executed by a target processor, the loop nest comprising a plurality of loops, the plurality of loops comprising at least a first loop and a second loop nested in the first loop, wherein the first loop comprises at least one first-loop instruction, which is outside the second loop; generate Address Generation Unit (AGU) configuration code to configure an AGU of the target processor based on the first-loop instruction, wherein the AGU configuration code is to configure a first dimension of the AGU based on the first loop, and to configure a second dimension of the AGU based on the second loop, wherein the AGU configuration code is to configure the second dimension of the AGU to configure a memory-access operation to be performed at a start of the second loop or at an end of the second loop, wherein the memory-access operation is based on the first-loop instruction; and generate the target code based on compilation of the source code, wherein the target code is based on the AGU configuration code.

31. The computing system of claim 30 comprising the target processor to execute the target code.

32. A method comprising: identifying a loop nest based on a source code to be compiled into a target code to be executed by a target processor, the loop nest comprising a plurality of loops, the plurality of loops comprising at least a first loop and a second loop nested in the first loop, wherein the first loop comprises at least one first-loop instruction, which is outside the second loop; generating Address Generation Unit (AGU) configuration code to configure an AGU of the target processor based on the first-loop instruction, wherein the AGU configuration code is to configure a first dimension of the AGU based on the first loop, and to configure a second dimension of the AGU based on the second loop, wherein the AGU configuration code is to configure the second dimension of the AGU to configure a memory-access operation to be performed at a start of the second loop or at an end of the second loop, wherein the memory-access operation is based on the first-loop instruction; and generating the target code based on compilation of the source code, wherein the target code is based on the AGU configuration code.

33. The method of claim 32, wherein the plurality of loops comprises a third loop nested in the first loop, the second loop is nested in the third loop, the first-loop instruction is outside the third loop, wherein the AGU configuration code is to configure a third dimension of the AGU based on the third loop, wherein the AGU configuration code is to configure the third dimension to configure the memory-access operation to be performed at the start of the second loop or at the end of the second loop.

Description:
APPARATUS, SYSTEM, AND METHOD OF COMPILING CODE FOR A PROCESSOR

CROSS REFERENCE

[0001] This Application claims the benefit of and priority from US Provisional Patent Application No. 63/415,308 entitled “APPARATUS, SYSTEM, AND METHOD OF VECTOR PROCESSING”, filed October 12, 2022, the entire disclosure of which is incorporated herein by reference.

BACKGROUND

[0002] A compiler may be configured to compile source code into target code configured for execution by a processor.

[0003] There is a need to provide a technical solution to support efficient processing functionalities.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] For simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity of presentation. Furthermore, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. The figures are listed below.

[0005] Fig. 1 is a schematic block diagram illustration of a system, in accordance with some demonstrative aspects.

[0006] Fig. 2 is a schematic illustration of a compiler, in accordance with some demonstrative aspects.

[0007] Fig. 3 is a schematic illustration of a vector processor, in accordance with some demonstrative aspects.

[0008] Fig. 4 is a schematic illustration of an execution scheme to execute a latch store operation in a loop nest, in accordance with some demonstrative aspects.

[0009] Fig. 5 is a schematic illustration of an execution scheme to execute a preheader load or store operation in a loop nest, in accordance with some demonstrative aspects.

[00010] Fig. 6 is a schematic flow-chart illustration of a method of compiling code for a processor, in accordance with some demonstrative aspects.

[00011] Fig. 7 is a schematic illustration of a product, in accordance with some demonstrative aspects.

DETAILED DESCRIPTION

[00012] In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of some aspects. However, it will be understood by persons of ordinary skill in the art that some aspects may be practiced without these specific details. In other instances, well-known methods, procedures, components, units and/or circuits have not been described in detail so as not to obscure the discussion.

[00013] Some portions of the following detailed description are presented in terms of algorithms and symbolic representations of operations on data bits or binary digital signals within a computer memory. These algorithmic descriptions and representations may be the techniques used by those skilled in the data processing arts to convey the substance of their work to others skilled in the art.

[00014] An algorithm is here, and generally, considered to be a self-consistent sequence of acts or operations leading to a desired result. These include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. It should be understood, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.

[00015] Discussions herein utilizing terms such as, for example, “processing”, “computing”, “calculating”, “determining”, “establishing”, “analyzing”, “checking”, or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulate and/or transform data represented as physical (e.g., electronic) quantities within the computer’s registers and/or memories into other data similarly represented as physical quantities within the computer’s registers and/or memories or other information storage medium that may store instructions to perform operations and/or processes.

[00016] The terms “plurality” and “a plurality”, as used herein, include, for example, “multiple” or “two or more”. For example, “a plurality of items” includes two or more items.

[00017] References to “one aspect”, “an aspect”, “demonstrative aspect”, “various aspects” etc., indicate that the aspect(s) so described may include a particular feature, structure, or characteristic, but not every aspect necessarily includes the particular feature, structure, or characteristic. Further, repeated use of the phrase “in one aspect” does not necessarily refer to the same aspect, although it may.

[00018] As used herein, unless otherwise specified the use of the ordinal adjectives “first”, “second”, “third” etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.

[00019] Some aspects, for example, may take the form of an entirely hardware aspect, an entirely software aspect, or an aspect including both hardware and software elements. Some aspects may be implemented in software, which includes but is not limited to firmware, resident software, microcode, or the like.

[00020] Furthermore, some aspects may take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For example, a computer-usable or computer-readable medium may be or may include any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.

[00021] In some demonstrative aspects, the medium may be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.

[00022] In some demonstrative aspects, a data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements, for example, through a system bus. The memory elements may include, for example, local memory employed during actual execution of the program code, bulk storage, and cache memories which may provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.

[00023] In some demonstrative aspects, input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) may be coupled to the system either directly or through intervening I/O controllers. In some demonstrative aspects, network adapters may be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices, for example, through intervening private or public networks. In some demonstrative aspects, modems, cable modems and Ethernet cards are demonstrative examples of types of network adapters. Other suitable components may be used.

[00024] Some aspects may be used in conjunction with various devices and systems, for example, a computing device, a computer, a mobile computer, a non-mobile computer, a server computer, or the like.

[00025] As used herein, the term "circuitry" may refer to, be part of, or include, an Application Specific Integrated Circuit (ASIC), an integrated circuit, an electronic circuit, a processor (shared, dedicated, or group), and/or memory (shared, dedicated, or group), that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality. In some aspects, some functions associated with the circuitry may be implemented by one or more software or firmware modules. In some aspects, circuitry may include logic, at least partially operable in hardware.

[00026] The term “logic” may refer, for example, to computing logic embedded in circuitry of a computing apparatus and/or computing logic stored in a memory of a computing apparatus. For example, the logic may be accessible by a processor of the computing apparatus to execute the computing logic to perform computing functions and/or operations. In one example, logic may be embedded in various types of memory and/or firmware, e.g., silicon blocks of various chips and/or processors. Logic may be included in, and/or implemented as part of, various circuitry, e.g., processor circuitry, control circuitry, and/or the like. In one example, logic may be embedded in volatile memory and/or non-volatile memory, including random access memory, read only memory, programmable memory, magnetic memory, flash memory, persistent memory, and the like. Logic may be executed by one or more processors using memory, e.g., registers, stack, buffers, and/or the like, coupled to the one or more processors, e.g., as necessary to execute the logic.

[00027] Reference is now made to Fig. 1, which schematically illustrates a block diagram of a system 100, in accordance with some demonstrative aspects.

[00028] As shown in Fig. 1, in some demonstrative aspects system 100 may include a computing device 102.

[00029] In some demonstrative aspects, device 102 may be implemented using suitable hardware components and/or software components, for example, processors, controllers, memory units, storage units, input units, output units, communication units, operating systems, applications, or the like.

[00030] In some demonstrative aspects, device 102 may include, for example, a computer, a mobile computing device, a non-mobile computing device, a laptop computer, a notebook computer, a tablet computer, a handheld computer, a Personal Computer (PC), or the like.

[00031] In some demonstrative aspects, device 102 may include, for example, one or more of a processor 191, an input unit 192, an output unit 193, a memory unit 194, and/or a storage unit 195. Device 102 may optionally include other suitable hardware components and/or software components. In some demonstrative aspects, some or all of the components of one or more of device 102 may be enclosed in a common housing or packaging, and may be interconnected or operably associated using one or more wired or wireless links. In other aspects, components of one or more of device 102 may be distributed among multiple or separate devices.

[00032] In some demonstrative aspects, processor 191 may include, for example, a Central Processing Unit (CPU), a Digital Signal Processor (DSP), one or more processor cores, a single-core processor, a dual-core processor, a multiple-core processor, a microprocessor, a host processor, a controller, a plurality of processors or controllers, a chip, a microchip, one or more circuits, circuitry, a logic unit, an Integrated Circuit (IC), an Application-Specific IC (ASIC), or any other suitable multipurpose or specific processor or controller. Processor 191 may execute instructions, for example, of an Operating System (OS) of device 102 and/or of one or more suitable applications.

[00033] In some demonstrative aspects, input unit 192 may include, for example, a keyboard, a keypad, a mouse, a touch-screen, a touch-pad, a track-ball, a stylus, a microphone, or other suitable pointing device or input device. Output unit 193 may include, for example, a monitor, a screen, a touch-screen, a flat panel display, a Light Emitting Diode (LED) display unit, a Liquid Crystal Display (LCD) display unit, a plasma display unit, one or more audio speakers or earphones, or other suitable output devices.

[00034] In some demonstrative aspects, memory unit 194 includes, for example, a Random Access Memory (RAM), a Read Only Memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units. Storage unit 195 may include, for example, a hard disk drive, a Solid State Drive (SSD), or other suitable removable or non-removable storage units. Memory unit 194 and/or storage unit 195, for example, may store data processed by device 102.

[00035] In some demonstrative aspects, device 102 may be configured to communicate with one or more other devices via at least one network 103, e.g., a wireless and/or wired network.

[00036] In some demonstrative aspects, network 103 may include a wired network, a local area network (LAN), a wireless network, a wireless LAN (WLAN) network, a radio network, a cellular network, a WiFi network, an IR network, a Bluetooth (BT) network, and the like.

[00037] In some demonstrative aspects, device 102 may be configured to perform and/or to execute one or more operations, modules, processes, procedures and/or the like, e.g., as described herein.

[00038] In some demonstrative aspects, device 102 may include a compiler 160, which may be configured to generate a target code 115, for example, based on a source code 112, e.g., as described below.

[00039] In some demonstrative aspects, compiler 160 may be configured to translate the source code 112 into the target code 115, e.g., as described below.

[00040] In some demonstrative aspects, compiler 160 may include, or may be implemented as, software, a software module, an application, a program, a subroutine, instructions, an instruction set, computing code, words, values, symbols, and/or the like.

[00041] In some demonstrative aspects, the source code 112 may include computer code written in a source language.

[00042] In some demonstrative aspects, the source language may include a programming language. For example, the source language may include a high-level programming language, for example, such as, C language, C++ language, and/or the like.

[00043] In some demonstrative aspects, the target code 115 may include computer code written in a target language.

[00044] In some demonstrative aspects, the target language may include a low-level language, for example, such as, assembly language, object code, machine code, or the like.

[00045] In some demonstrative aspects, the target code 115 may include one or more object files, e.g., which may create and/or form an executable program.

[00046] In some demonstrative aspects, the executable program may be configured to be executed on a target computer. For example, the target computer may include a specific computer hardware, a specific machine, and/or a specific operating system.

[00047] In some demonstrative aspects, the executable program may be configured to be executed on a processor 180, e.g., as described below.

[00048] In some demonstrative aspects, processor 180 may include a vector processor 180, e.g., as described below. In other aspects, processor 180 may include any other type of processor.

[00049] Some demonstrative aspects are described herein with respect to a compiler, e.g., compiler 160, configured to compile source code 112 into target code 115 configured to be executed by a vector processor 180, e.g., as described below. In other aspects, a compiler, e.g., compiler 160, may be configured to compile source code 112 into target code 115 configured to be executed by any other type of processor 180.

[00050] In some demonstrative aspects, processor 180 may be implemented as part of device 102.

[00051] In other aspects, processor 180 may be implemented as part of any other device, e.g., separate from device 102.

[00052] In some demonstrative aspects, vector processor 180 (also referred to as an “array processor”) may include a processor, which may be configured to process an entire vector in one instruction, e.g., as described below.

[00053] In other aspects, the executable program may be configured to be executed on any other additional or alternative type of processor.

[00054] In some demonstrative aspects, the vector processor 180 may be designed to support high-performance image and/or vector processing. For example, the vector processor 180 may be configured to process 1/2/3/4D arrays of fixed point data and/or floating point arrays, e.g., very quickly and/or efficiently.

[00055] In some demonstrative aspects, the vector processor 180 may be configured to process arbitrary data, e.g., structures with pointers to structures. For example, the vector processor 180 may include a scalar processor to compute the non-vector data, for example, assuming the non-vector data is minimal.

[00056] In some demonstrative aspects, compiler 160 may be implemented as a local application to be executed by device 102. For example, memory unit 194 and/or storage unit 195 may store instructions resulting in compiler 160, and/or processor 191 may be configured to execute the instructions resulting in compiler 160 and/or to perform one or more calculations and/or processes of compiler 160, e.g., as described below.

[00057] In other aspects, compiler 160 may include a remote application to be executed by any suitable computing system, e.g., a server 170.

[00058] In some demonstrative aspects, server 170 may include at least a remote server, a web-based server, a cloud server, and/or any other server.

[00059] In some demonstrative aspects, the server 170 may include a suitable memory and/or storage unit 174 having stored thereon instructions resulting in compiler 160, and a suitable processor 171 to execute the instructions, e.g., as described below.

[00060] In some demonstrative aspects, compiler 160 may include a combination of a remote application and a local application.

[00061] In one example, compiler 160 may be downloaded and/or received by the user of device 102 from another computing system, e.g., server 170, such that compiler 160 may be executed locally by users of device 102. For example, the instructions may be received and stored, e.g., temporarily, in a memory or any suitable short-term memory or buffer of device 102, e.g., prior to being executed by processor 191 of device 102.

[00062] In another example, compiler 160 may include a client-module to be executed locally by device 102, and a server module to be executed by server 170. For example, the client-module may include and/or may be implemented as a local application, a web application, a web site, a web client, e.g., a Hypertext Markup Language (HTML) web application, or the like.

[00063] For example, one or more first operations of compiler 160 may be performed locally, for example, by device 102, and/or one or more second operations of compiler 160 may be performed remotely, for example, by server 170.

[00064] In other aspects, compiler 160 may include, or may be implemented by, any other suitable computing arrangement and/or scheme.

[00065] In some demonstrative aspects, system 100 may include an interface 110, e.g., a user interface, to interface between a user of device 102 and one or more elements of system 100, e.g., compiler 160.

[00066] In some demonstrative aspects, interface 110 may be implemented using any suitable hardware components and/or software components, for example, processors, controllers, memory units, storage units, input units, output units, communication units, operating systems, and/or applications.

[00067] In some aspects, interface 110 may be implemented as part of any suitable module, system, device, or component of system 100.

[00068] In other aspects, interface 110 may be implemented as a separate element of system 100.

[00069] In some demonstrative aspects, interface 110 may be implemented as part of device 102. For example, interface 110 may be associated with and/or included as part of device 102.

[00070] In one example, interface 110 may be implemented, for example, as middleware, and/or as part of any suitable application of device 102. For example, interface 110 may be implemented as part of compiler 160 and/or as part of an OS of device 102.

[00071] In some demonstrative aspects, interface 110 may be implemented as part of server 170. For example, interface 110 may be associated with and/or included as part of server 170.

[00072] In one example, interface 110 may include, or may be part of a Web-based application, a web-site, a web-page, a plug-in, an ActiveX control, a rich content component, e.g., a Flash or Shockwave component, or the like.

[00073] In some demonstrative aspects, interface 110 may be associated with and/or may include, for example, a gateway (GW) 113 and/or an Application Programming Interface (API) 114, for example, to communicate information and/or communications between elements of system 100 and/or to one or more other, e.g., internal or external, parties, users, applications and/or systems.

[00074] In some aspects, interface 110 may include any suitable Graphic-User- Interface (GUI) 116 and/or any other suitable interface.

[00075] In some demonstrative aspects, interface 110 may be configured to receive the source code 112, for example, from a user of device 102, e.g., via GUI 116, and/or API 114.

[00076] In some demonstrative aspects, interface 110 may be configured to transfer the source code 112, for example, to compiler 160, for example, to generate the target code 115, e.g., as described below.

[00077] Reference is made to Fig. 2, which schematically illustrates a compiler 200, in accordance with some demonstrative aspects. For example, compiler 160 (Fig. 1) may implement one or more elements of compiler 200, and/or may perform one or more operations and/or functionalities of compiler 200.

[00078] In some demonstrative aspects, as shown in Fig. 2, compiler 200 may be configured to generate a target code 233, for example, by compiling a source code 212 in a source language.

[00079] In some demonstrative aspects, as shown in Fig. 2, compiler 200 may include a front-end 210 configured to receive and analyze the source code 212 in the source language.

[00080] In some demonstrative aspects, front-end 210 may be configured to generate an intermediate code 213, for example, based on the source code 212.

[00081] In some demonstrative aspects, intermediate code 213 may include a lower level representation of the source code 212.

[00082] In some demonstrative aspects, front-end 210 may be configured to perform, for example, lexical analysis, syntax analysis, semantic analysis, and/or any other additional or alternative type of analysis, of the source code 212.

[00083] In some demonstrative aspects, front-end 210 may be configured to identify errors and/or problems with an outcome of the analysis of the source code 212. For example, front-end 210 may be configured to generate error information, e.g., including error and/or warning messages, for example, which may identify a location in the source code 212, for example, where an error or a problem is detected.

[00084] In some demonstrative aspects, as shown in Fig. 2, compiler 200 may include a middle-end 220 configured to receive and process the intermediate code 213, and to generate an adjusted, e.g., optimized, intermediate code 223.

[00085] In some demonstrative aspects, middle-end 220 may be configured to perform one or more adjustments, e.g., optimizations, to the intermediate code 213, for example, to generate the adjusted intermediate code 223.

[00086] In some demonstrative aspects, middle-end 220 may be configured to perform the one or more optimizations on the intermediate code 213, for example, independent of a type of the target computer to execute the target code 233.

[00087] In some demonstrative aspects, middle-end 220 may be implemented to support use of the optimized intermediate code 223, for example, for different machine types.

[00088] In some demonstrative aspects, middle-end 220 may be configured to optimize the intermediate representation of the intermediate code 223, for example, to improve performance and/or quality of the produced target code 233.

[00089] In some demonstrative aspects, the one or more optimizations of the intermediate code 213, may include, for example, inline expansion, dead-code elimination, constant propagation, loop transformation, parallelization, and/or the like.

[00090] In some demonstrative aspects, as shown in Fig. 2, compiler 200 may include a back-end 230 configured to receive and process the adjusted intermediate code 223, and to generate the target code 233 based on the adjusted intermediate code 223.

[00091] In some demonstrative aspects, back-end 230 may be configured to perform one or more operations and/or processes, which may be specific for the target computer to execute the target code 233. For example, back-end 230 may be configured to process the optimized intermediate code 223 by applying to the adjusted intermediate code 223 analysis, transformation, and/or optimization operations, which may be configured, for example, based on the target computer to execute the target code 233.

[00092] In some demonstrative aspects, the one or more analysis, transformation, and/or optimization operations applied to the adjusted intermediate code 223 may include, for example, resource and storage decisions, e.g., register allocation, instruction scheduling, and/or the like.

[00093] In some demonstrative aspects, the target code 233 may include target-dependent assembly code, which may be specific to the target computer and/or a target operating system of the target computer, which is to execute the target code 233.

[00094] In some demonstrative aspects, the target code 233 may include target-dependent assembly code for a processor, e.g., vector processor 180 (Fig. 1).

[00095] In some demonstrative aspects, compiler 200 may include a Vector Micro-Code Processor (VMP) Open Computing Language (OpenCL) compiler, e.g., as described below. In other aspects, compiler 200 may include, or may be implemented as part of, any other type of vector processor compiler.

[00096] In some demonstrative aspects, the VMP OpenCL compiler may include a Low Level Virtual Machine (LLVM) based (LLVM-based) compiler, which may be configured according to an LLVM-based compilation scheme, for example, to lower OpenCL C-code to VMP accelerator assembly code, e.g., suitable for execution by vector processor 180 (Fig. 1).

[00097] In some demonstrative aspects, compiler 200 may include one or more technologies, which may be required to compile code to a format suitable for a VMP architecture, e.g., in addition to open-sourced LLVM compiler passes.

[00098] In some demonstrative aspects, FE 210 may be configured to parse the OpenCL C-code and to translate it, e.g., through an Abstract Syntax Tree (AST), for example, into an LLVM Intermediate Representation (IR).

[00099] In some demonstrative aspects, compiler 200 may include a dedicated API, for example, to detect a correct pattern for compiler pattern matching, for example, suitable for the VMP. For example, the VMP may be configured as a Complex Instruction Set Computer (CISC) machine implementing a very complex Instruction Set Architecture (ISA), which may be hard to target from standard C code. Accordingly, compiler pattern matching may not be able to easily detect the correct pattern, and for this case the compiler may require a dedicated API.

[000100] In some demonstrative aspects, FE 210 may implement one or more vendor extension built-ins, which may target VMP-specific ISA, for example, in addition to standard OpenCL built-ins, which may be optimized to a VMP machine.

[000101] In some demonstrative aspects, FE 210 may be configured to implement OpenCL structures and/or work item functions.

[000102] In some demonstrative aspects, ME 220 may be configured to process LLVM IR code, which may be general and target-independent, for example, although it may include one or more hooks for specific target architectures.

[000103] In some demonstrative aspects, ME 220 may perform one or more custom passes, for example, to support the VMP architecture, e.g., as described below.

[000104] In some demonstrative aspects, ME 220 may be configured to perform one or more operations of a Control Flow Graph (CFG) Linearization analysis, e.g., as described below.

[000105] In some demonstrative aspects, the CFG Linearization analysis may be configured to linearize the code, for example, by converting if-statements to select patterns, for example, in case VMP vector code does not support standard control flow.

[000106] In one example, ME 220 may receive a given code, e.g., as follows:

    If (x > 0) {
        A = A + 5;
    } else {
        B = B * 2;
    }

According to this example, ME 220 may be configured to apply the CFG Linearization analysis to the given code, e.g., as follows:

    tmpA = A + 5;
    tmpB = B * 2;
    mask = x > 0;
    A = Select mask, tmpA, A
    B = Select not mask, tmpB, B

Example (1)
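In one example, the equivalence underlying Example (1) may be illustrated with the following C sketch; the function names branchy and linearized are illustrative only and are not part of any VMP toolchain:

```c
/* Original control flow: exactly one of the two branches executes. */
static void branchy(int x, int *A, int *B) {
    if (x > 0) {
        *A = *A + 5;
    } else {
        *B = *B * 2;
    }
}

/* Linearized form: both sides are computed unconditionally,
 * then a mask selects which result is kept. */
static void linearized(int x, int *A, int *B) {
    int tmpA = *A + 5;
    int tmpB = *B * 2;
    int mask = (x > 0);
    *A = mask ? tmpA : *A;   /* A = Select mask, tmpA, A     */
    *B = !mask ? tmpB : *B;  /* B = Select not mask, tmpB, B */
}
```

For any inputs, both forms produce the same results, which is why the select form may replace the branch on a vector unit that does not support standard control flow.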

[000107] In some demonstrative aspects, ME 220 may be configured to perform one or more operations of an auto-vectorization analysis, e.g., as described below.

[000108] In some demonstrative aspects, the auto-vectorization analysis may be configured to vectorize, e.g., auto-vectorize, a given code, e.g., to utilize vector capabilities of the VMP.

[000109] In some demonstrative aspects, ME 220 may be configured to perform the auto-vectorization analysis, for example, to vectorize code in a scalar form. For example, some or all operations of the auto-vectorization analysis may not be performed, for example, in case the code is already provided in a vectorized form.

[000110] In some demonstrative aspects, for example, in some use cases and/or scenarios, a compiler may not always be able to auto-vectorize a code, for example, due to data dependencies between loop iterations.
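In one example, a data dependency of this kind may be illustrated with the following C sketch, in which each iteration reads a value written by the previous iteration; the function name prefix_sum is illustrative only:

```c
/* Each iteration reads the value written by the previous one
 * (a loop-carried dependency), so the iterations cannot safely be
 * distributed across vector lanes without further transformation. */
static void prefix_sum(int *a, int n) {
    for (int i = 1; i < n; i++) {
        a[i] = a[i] + a[i - 1];  /* depends on a[i-1] from iteration i-1 */
    }
}
```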

[000111] In one example, ME 220 may receive a given code, e.g., as follows:

    char* a,b,c;
    for (int i=0; i < 2048; i++) {
        a[i]=b[i]+c[i];
    }

According to this example, ME 220 may be configured to perform the auto-vectorization analysis by applying a first conversion, e.g., as follows:

    char* a,b,c;
    for (int i=0; i < 2048; i+=32) {
        a[i..i+31]=b[i..i+31]+c[i..i+31];
    }

Example (2a)

For example, ME 220 may be configured to perform the auto-vectorization analysis by applying a second conversion, for example, following the first conversion, e.g., as follows:

    char32* a,b,c;
    for (int i=0; i < 64; i++) {
        a[i]=b[i]+c[i];
    }

Example (2b)

[000112] In some demonstrative aspects, ME 220 may be configured to perform one or more operations of a Scratch Pad Memory Loop Access Analysis (SPMLAA), e.g., as described below.

[000113] In some demonstrative aspects, the SPMLAA may define Processing Blocks (PB), e.g., that should be outlined and compiled for VMP later.

[000114] In some demonstrative aspects, the processing blocks may include accelerated loops, which may be executed by the vector unit of the VMP.

[000115] In some demonstrative aspects, a PB, e.g., each PB, may include memory references. For example, some or all memory accesses may refer to local memory banks.

[000116] In some demonstrative aspects, the VMP may enable access to memory banks through AGUs, e.g., AGUs 320 as described below with reference to Fig. 3, and Scatter Gather units (SG).

[000117] In some demonstrative aspects, the AGUs may be pre-configured, e.g., before loop execution. For example, a loop trip count may be calculated, e.g., ahead of running a processing block.

[000118] In some demonstrative aspects, image references, e.g., some or all image references, may be created at this stage, and may be followed by calculation of strides and offsets, e.g., per dimension for each reference.

[000119] In some demonstrative aspects, ME 220 may be configured to perform one or more operations of an AGU planner analysis, e.g., as described below.

[000120] In some demonstrative aspects, the AGU Planner analysis may include iterator assignment, which may cover image references, e.g., all image references, from the entire Processing Block.

[000121] In some demonstrative aspects, an iterator may cover a single reference or a group of references.

[000122] In some demonstrative aspects, one or more memory references may be coalesced and/or may reuse a same access, e.g., through shuffle instructions and/or by saving values read from previous iterations.

[000123] In some demonstrative aspects, other memory references, e.g., which have no linear access pattern, may be handled using a Scatter-Gather (SG) unit, which may have a performance penalty, e.g., as it may require maintaining indices and/or masks.

[000124] In some demonstrative aspects, a plan may be configured as an arrangement of iterators in a processing block. For example, a processing block may have multiple plans, e.g., theoretically.

[000125] In some demonstrative aspects, the AGU Planner analysis may be configured to build all possible plans for all PBs, and to select a combination, e.g., a best combination, e.g., from all valid combinations.

[000126] In some demonstrative aspects, a total number of iterators in a valid combination may be limited, e.g., not to exceed a number of available AGUs on a VMP.

[000127] In some demonstrative aspects, one or more parameters, e.g., including stride, width and/or base, may be defined for an iterator, e.g., for each iterator, for example, as part of the AGU Planner analysis. For example, min-max ranges for the iterators may be defined in a dimension, e.g., in each dimension, for example, as part of the AGU Planner analysis.
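In one example, these iterator parameters may be modeled with the following hypothetical C sketch of a two-dimensional iterator; the type and field names (agu_iter_t, base, stride, count) are illustrative only and do not reflect the actual AGU configuration format:

```c
/* Hypothetical model of a 2D AGU iterator: each dimension has a trip
 * count and a byte stride; addresses are generated as
 * base + i*stride[0] + j*stride[1]. */
typedef struct {
    int base;       /* base byte address of the reference */
    int stride[2];  /* byte step per dimension            */
    int count[2];   /* iterations per dimension           */
} agu_iter_t;

/* Emit the full address sequence into out; returns the number of
 * addresses generated. */
static int agu_addresses(const agu_iter_t *it, int *out) {
    int n = 0;
    for (int i = 0; i < it->count[0]; i++)
        for (int j = 0; j < it->count[1]; j++)
            out[n++] = it->base + i * it->stride[0] + j * it->stride[1];
    return n;
}
```

For example, an iterator with base 0, strides {64, 4} and counts {2, 3} generates the address sequence 0, 4, 8, 64, 68, 72, i.e., two rows of three 4-byte accesses, the rows 64 bytes apart.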

[000128] In some demonstrative aspects, the AGU Planner analysis may be configured to track and evaluate a memory reference, e.g., each memory reference, to an image, e.g., to understand its access pattern.

[000129] In one example, according to Examples 2a/2b, the image 'a', which is the base address, may be accessed with steps of 32 bytes for 64 iterations.

[000130] In some demonstrative aspects, the LLVM may include a scalar evolution analysis (SCEV), which may compute an access pattern, e.g., to understand every image reference.
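In one example, the kind of (base, stride) result such an analysis may compute can be illustrated with the following C sketch; the function affine_pattern is illustrative only and is not part of LLVM's scalar evolution implementation:

```c
/* Given the byte offsets touched by successive iterations, recover the
 * (base, stride) of an affine access pattern, or report failure (-1)
 * if the accesses are not equally spaced. Illustrative only. */
static int affine_pattern(const int *offs, int n, int *base, int *stride) {
    if (n < 2) return -1;
    *base = offs[0];
    *stride = offs[1] - offs[0];
    for (int i = 2; i < n; i++)
        if (offs[i] - offs[i - 1] != *stride) return -1;
    return 0;
}
```

For the access of Examples 2a/2b, the byte offsets 0, 32, 64, ... yield base 0 and stride 32.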

[000131] In some demonstrative aspects, ME 220 may utilize masking capabilities of the AGUs, for example, to avoid maintaining an induction variable, which may have a performance penalty.

[000132] In some demonstrative aspects, ME 220 may be configured to perform one or more operations of a rewrite analysis, e.g., as described below.

[000133] In some demonstrative aspects, the rewrite analysis may be configured to transform the code of a processing block, for example, while setting iterators and/or modifying memory access instructions.

[000134] In some demonstrative aspects, setting of the iterators, e.g., of all iterators, may be implemented in IR in target-specific intrinsics. For example, the setting of the iterators may reside in a pre-header of an outermost loop.

[000135] In some demonstrative aspects, the rewrite analysis may include loop-perfectization analysis, e.g., as described below.

[000136] In some demonstrative aspects, the code may be compiled with a target that substantially all calculations should be executed inside the innermost loop.

[000137] For example, the loop-perfectization analysis may hoist instructions, e.g., to move into a loop an operation performed after a last iteration of the loop.

[000138] For example, the loop-perfectization analysis may sink instructions, e.g., to move into a loop an operation performed before a first iteration of the loop.

[000139] For example, the loop-perfectization analysis may hoist instructions and/or sink instructions, for example, such that substantially all instructions are moved from outer loops to the innermost loops.

[000140] For example, the loop-perfectization analysis may be configured to provide a technical solution to support VMP iterators, e.g., to work on perfectly nested loops only.

[000141] For example, the loop-perfectization analysis may result in a situation where there are no instructions between the “for” statements that compose the loop, e.g., to support VMP iterators, which cannot emulate such cases.

[000142] In some demonstrative aspects, the loop-perfectization analysis may be configured to collapse a nested loop into a single collapsed loop.

[000143] In one example, ME 220 may receive a given code, e.g., as follows:

    for (int i = 0; i < N; i++) {
        int sum = 0;
        for (int j = 0; j < M; j++) {
            sum += a[j + stride * i];
        }
        res[i] = sum;
    }

According to this example, ME 220 may be configured to perform the loop-perfectization analysis to collapse the nested loop in the code to a single collapsed loop, e.g., as follows:

    for (int k = 0; k < N * M; k++) {
        sum = (k % M == 0 ? 0 : sum);
        sum += a[k % M + stride * (k / M)];
        res[k/M] = sum;
    }

Example (3)
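In one example, the equivalence of the nested and collapsed forms of Example (3) may be checked with the following C sketch; the function names nested and collapsed are illustrative only:

```c
/* Nested form: N rows of M elements are reduced row-wise. */
static void nested(const int *a, int *res, int N, int M, int stride) {
    for (int i = 0; i < N; i++) {
        int sum = 0;
        for (int j = 0; j < M; j++)
            sum += a[j + stride * i];
        res[i] = sum;
    }
}

/* Collapsed form: a single loop of N*M iterations, as in Example (3);
 * the row index is k / M and the column index is k % M. */
static void collapsed(const int *a, int *res, int N, int M, int stride) {
    int sum = 0;
    for (int k = 0; k < N * M; k++) {
        sum = (k % M == 0 ? 0 : sum);
        sum += a[k % M + stride * (k / M)];
        res[k / M] = sum;
    }
}
```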

[000144] In some demonstrative aspects, ME 220 may be configured to perform one or more operations of a Vector Loop Outlining analysis, e.g., as described below.

[000145] In some demonstrative aspects, the Vector Loop Outlining analysis may be configured to divide a code between a scalar subsystem and a vector subsystem, e.g., vector processing block 310 (Fig. 3) and scalar processor 330 (Fig. 3) as described below with reference to Fig. 3.

[000146] In some demonstrative aspects, the VMP accelerator may include the scalar and/or vector subsystems, e.g., as described below. For example, each of the subsystems may have different compute units/processors. Accordingly, a scalar code may be compiled on a scalar compiler, e.g., an SSC compiler, and/or an accelerated vector code may run on the VMP vector processor.

[000147] In some demonstrative aspects, the Vector Loop Outlining analysis may be configured to create a separate function for a loop body of the accelerated vector code. For example, these functions may be marked for the VMP and/or may continue to the VMP backend, for example, while the rest of the code may be compiled by the SSC compiler.

[000148] In some demonstrative aspects, one or more parts of a vector loop, e.g., configuration of the vector unit and/or initialization of vector registers, may be performed by a scalar unit. However, these parts may be performed in a later stage, for example, by performing backpatching into the scalar code, e.g., as the scalar code may still be in LLVM IR before processing by the SSC compiler.

[000149] In some demonstrative aspects, BE 230 may be configured to translate the LLVM IR into machine instructions. For example, the BE 230 may not be target agnostic and may be familiar with target-specific architecture and optimizations, e.g., compared to ME 220, which may be agnostic to a target-specific architecture.

[000150] In some demonstrative aspects, BE 230 may be configured to perform one or more analyses, which may be specific to a target machine, e.g., a VMP machine, to which the code is lowered, e.g., although BE 230 may use common LLVM.

[000151] In some demonstrative aspects, BE 230 may be configured to perform one or more operations of an instruction lowering analysis, e.g., as described below.

[000152] In some demonstrative aspects, the instruction lowering analysis may be configured to translate LLVM IR into target-specific Machine IR (MIR) instructions, for example, by translating the LLVM IR into a Directed Acyclic Graph (DAG).

[000153] In some demonstrative aspects, the DAG may go through a legalization process of instructions, for example, based on the data types and/or VMP instructions, which may be supported by a VMP HW.

[000154] In some demonstrative aspects, the instruction lowering analysis may be configured to perform a process of pattern-matching, e.g., after the legalization process of instructions, for example, to lower a node, e.g., each node, in the DAG, for example, into a VMP-specific machine instruction.

[000155] In some demonstrative aspects, the instruction lowering analysis may be configured to generate the MIR, for example, after the process of pattern-matching.

[000156] In some demonstrative aspects, the instruction lowering analysis may be configured to lower the instruction according to machine Application Binary Interface (ABI) and/or calling conventions.

[000157] In some demonstrative aspects, BE 230 may be configured to perform one or more operations of a unit balancing analysis, e.g., as described below.

[000158] In some demonstrative aspects, the unit balancing analysis may be configured to balance instructions between VMP compute units, e.g., data processing units 316 (Fig. 3) as described below with reference to Fig. 3.

[000159] In some demonstrative aspects, the unit balancing analysis may be familiar with some or all available arithmetic transformations, and/or may perform transformations according to an optimal algorithm.

[000160] In some demonstrative aspects, BE 230 may be configured to perform one or more operations of a modulo scheduler (pipeliner) analysis, e.g., as described below.

[000161] In some demonstrative aspects, the pipeliner may be configured to schedule the instructions according to one or more constraints, e.g., data dependency, resource bottlenecks and/or any other constraints, for example, using Swing Modulo Scheduling (SMS) heuristics and/or any other additional and/or alternative heuristic.

[000162] In some demonstrative aspects, the pipeliner may be configured to schedule a set, e.g., an Initiation Interval (II), of Very Long Instruction Word (VLIW) instructions that the program will iterate on, e.g., during a steady state.

[000163] In some demonstrative aspects, a performance metric, which may be based on a number of cycles a typical loop may execute, may be measured, e.g., as follows:

(Size of Input data in bytes) * II / (Bytes consumed/produced every iteration)
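In one example, the metric may be computed as in the following C sketch; the function name loop_cycles is illustrative only:

```c
/* Cycle estimate from the metric above:
 * cycles = input_bytes * II / bytes_per_iteration. */
static long loop_cycles(long input_bytes, long ii, long bytes_per_iter) {
    return input_bytes * ii / bytes_per_iter;
}
```

For example, for a 2048-byte input processed 32 bytes per iteration, as in Example (2a), with II = 2, the estimate is 2048 * 2 / 32 = 128 cycles.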

[000164] In some demonstrative aspects, the pipeliner may try to minimize the II, e.g., as much as possible, for example, to improve performance.

[000165] In some demonstrative aspects, the pipeliner may be configured to calculate a minimum II, and to schedule accordingly. For example, if the pipeliner fails the scheduling, the pipeliner may try to increase the II and retry scheduling, e.g., until a predefined II threshold is violated.

[000166] In some demonstrative aspects, BE 230 may be configured to perform one or more operations of a register allocation analysis, e.g., as described below.

[000167] In some demonstrative aspects, the register allocation analysis may be configured to attempt to assign a register in an efficient, e.g., optimal, way.

[000168] In some demonstrative aspects, the register allocation analysis may assign values to bypass vector registers, general purpose vector registers, and/or scalar registers.

[000169] In some demonstrative aspects, the values may include private variables, constants, and/or values that are rotated across iterations.

[000170] In some demonstrative aspects, the register allocation analysis may implement an optimal heuristic that suits one or more VMP register file (regfile) constraints. For example, in some use cases, the register allocation analysis may not use a standard LLVM register allocation.

[000171] In some demonstrative aspects, in some cases, the register allocation analysis may fail, which may mean that the loop cannot be compiled. Accordingly, the register allocation analysis may implement a retry mechanism, which may go back to the modulo scheduler and may attempt to reschedule the loop, e.g., with an increased initiation interval. For example, increasing the initiation interval may reduce register pressure, and/or may support compilation of the vector loop, e.g., in many cases.

[000172] In some demonstrative aspects, BE 230 may be configured to perform one or more operations of an SSC configuration analysis, e.g., as described below.

[000173] In some demonstrative aspects, the SSC configuration analysis may be configured to set a configuration to execute the kernel, e.g., the AGU configuration.

[000174] In some demonstrative aspects, the SSC configuration analysis may be performed at a late stage, for example, due to configurations calculated after legalization, the register allocation analysis, and/or the modulo scheduling analysis.

[000175] In some demonstrative aspects, the SSC configuration analysis may include a Zero Overhead Loop (ZOL) mechanism in the vector loop. For example, the ZOL mechanism may configure a loop trip count based on an access pattern of the memory references in the loop, for example, to avoid running instructions that check the loop exit condition every iteration.
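In one example, deriving the trip count from the access pattern ahead of the loop may be sketched in C as follows; the function zol_trip_count is illustrative only:

```c
/* Hypothetical ZOL setup: the trip count is derived from the extent
 * and step of the memory reference before the loop runs, so no
 * exit-condition instructions need to execute per iteration. */
static int zol_trip_count(int extent_bytes, int step_bytes) {
    /* ceiling division: a partial final step still costs one iteration */
    return (extent_bytes + step_bytes - 1) / step_bytes;
}
```

For the access of Example (2a), an extent of 2048 bytes with a 32-byte step yields 64 iterations, configured once before the loop runs.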

[000176] In some demonstrative aspects, a VMP Compilation Flow may include one or more, e.g., a few, steps, which may be invoked during the compilation flow in a test library (testlib), e.g., a wrapper script for compilation, execution, and/or program testing. For example, these steps may be performed outside of the LLVM Compiler.

[000177] In some demonstrative aspects, a PCB Hardware Description Language (PHDL) simulator may be implemented to perform one or more roles of an assembler, encoder, and/or linker.

[000178] In some demonstrative aspects, compiler 200 may be configured to provide a technical solution to support robustness, which may enable compilation of a vast selection of loops, with HW limitations. For example, compiler 200 may be configured to support a technical solution, which may not create verification errors.

[000179] In some demonstrative aspects, compiler 200 may be configured to provide a technical solution to support programmability, which may provide a user an ability to express code in multiple ways, which may compile correctly to the VMP architecture.

[000180] In some demonstrative aspects, compiler 200 may be configured to provide a technical solution to support an improved user-experience, which may allow the user the capability to debug and/or profile code. For example, the improved user-experience may provide informative error messages, report tools, and/or a profiler.

[000181] In some demonstrative aspects, compiler 200 may be configured to provide a technical solution to support improved performance, for example, to optimize a VMP assembly code and/or iterator accesses, which may lead to a faster execution. For example, improved performance may be achieved through high utilization of the compute units and usage of the complex CISC instructions.

[000182] Reference is made to Fig. 3, which schematically illustrates a vector processor 300, in accordance with some demonstrative aspects. For example, vector processor 180 (Fig. 1) may implement one or more elements of vector processor 300, and/or may perform one or more operations and/or functionalities of vector processor 300.

[000183] In some demonstrative aspects, vector processor 300 may include a Vector Microcode Processor (VMP).

[000184] In some demonstrative aspects, vector processor 300 may include a Wide Vector machine, for example, supporting Very Long Instruction Word (VLIW) architectures, and/or Single Instruction/Multiple Data (SIMD) architectures.

[000185] In some demonstrative aspects, vector processor 300 may be configured to provide a technical solution to support high performance for short integral types, which may be common, for example, in computer-vision and/or deep-learning algorithms.

[000186] In other aspects, vector processor 300 may include any other type of vector processor, and/or may be configured to support any other additional or alternative functionalities.

[000187] In some demonstrative aspects, as shown in Fig. 3, vector processor 300 may include a vector processing block (vector processor) 310, a scalar processor 330, and a Direct Memory Access (DMA) 340, e.g., as described below.

[000188] In some demonstrative aspects, as shown in Fig. 3, vector processing block 310 may be configured to process, e.g., efficiently process, image data and/or vector data. For example, the vector processing block 310 may be configured to use vector computation units, for example, to speed up computations.

[000189] In some demonstrative aspects, scalar processor 330 may be configured to perform scalar computations. For example, the scalar processor 330 may be used as a "glue logic" for programs including vector computations. For example, some, e.g., even most, of the computation of the programs may be performed by the vector processing block 310. However, several tasks, for example, some essential tasks, e.g., scalar computations, may be performed by the scalar processor 330.

[000190] In some demonstrative aspects, the DMA 340 may be configured to interface with one or more memory elements in a chip including vector processor 300.

[000191] In some demonstrative aspects, the DMA 340 may be configured to read inputs from a main memory, and/or write outputs to the main memory.

[000192] In some demonstrative aspects, the scalar processor 330 and the vector processing block 310 may use respective local memories to process data.

[000193] In some demonstrative aspects, as shown in Fig. 3, vector processor 300 may include a fetcher and decoder 350, which may be configured to control the scalar processor 330 and/or the vector processing block 310.

[000194] In some demonstrative aspects, operations of the scalar processor 330 and/or the vector processing block 310 may be triggered by instructions stored in a program memory 352.

[000195] In some demonstrative aspects, the DMA 340 may be configured to transfer data, for example, in parallel with the execution of the program instructions in memory 352.

[000196] In some demonstrative aspects, DMA 340 may be controlled by software, e.g., via configuration registers, for example, rather than instructions, and, accordingly, may be considered as a second "thread" of execution in vector processor 300.

[000197] In some demonstrative aspects, the scalar processor 330, the vector processing block 310, and/or the DMA 340 may include one or more data processing units, for example, a set of data processing units, e.g., as described below.

[000198] In some demonstrative aspects, the data processing units may include hardware configured to perform computations, e.g., an Arithmetic Logic Unit (ALU).

[000199] In one example, a data processing unit may be configured to add numbers, and/or to store the numbers in a memory.

[000200] In some demonstrative aspects, the data processing units may be controlled by commands, e.g., encoded in the program memory 352 and/or in configuration registers. For example, the configuration registers may be memory mapped, and may be written by the memory store commands of the scalar processor 330.

[000201] In some demonstrative aspects, the scalar processor 330, the vector processing block 310, and/or the DMA 340 may include a state configuration including a set of registers and memories, e.g., as described below.

[000202] In some demonstrative aspects, as shown in Fig. 3, vector processor block 310 may include a set of vector memories 312, which may be configured, for example, to store data to be processed by vector processor block 310.

[000203] In some demonstrative aspects, as shown in Fig. 3, vector processor block 310 may include a set of vector registers 314, which may be configured, for example, to be used in data processing by vector processor block 310.

[000204] In some demonstrative aspects, the scalar processor 330, the vector processing block 310, and/or the DMA 340 may be associated with a set of memory maps.

[000205] In some demonstrative aspects, a memory map may include a set of addresses accessible by a data processing unit, which may load and/or store data from/to registers and memories.

[000206] In some demonstrative aspects, as shown in Fig. 3, the vector processing block 310 may include a plurality of Address Generation Units (AGUs) 320, each of which may be associated with a set of addresses accessible to it, e.g., in one or more of memories 312.

[000207] In some demonstrative aspects, as shown in Fig. 3, vector processor block 310 may include a plurality of data processing units 316, e.g., as described below.

[000208] In some demonstrative aspects, data processing units 316 may be configured to process commands, e.g., including several numbers at a time. In one example, a command may include 8 numbers. In another example, a command may include 4 numbers, 16 numbers, or any other count of numbers.

[000209] In some demonstrative aspects, two or more data processing units 316 may be used simultaneously. In one example, data processing units 316 may process and execute a plurality of different commands, e.g., 3 different commands, for example, each including 8 numbers, at a throughput of a single cycle.

[000210] In some demonstrative aspects, data processing units 316 may be asymmetrical. For example, first and second data processing units 316 may support different commands. For example, addition may be performed by a first data processing unit 316, and/or multiplication may be performed by a second data processing unit 316. For example, both operations may be performed by one or more other data processing units 316.

[000211] In some demonstrative aspects, data processing units 316 may be configured to support arithmetic operations for many combinations of input & output data types.

[000212] In some demonstrative aspects, data processing units 316 may be configured to support one or more operations, which may be less common. For example, processing units 316 may support operations working with a Look Up Table (LUT) of vector processor 300, and/or any other operations.

[000213] In some demonstrative aspects, data processing units 316 may be configured to support efficient computation of non-linear functions, histograms, and/or random data access, e.g., which may be useful to implement algorithms like image scaling, Hough transforms, and/or any other algorithms.

[000214] In some demonstrative aspects, vector memories 312 may include, for example, memory banks having a size of 16K or any other size, which may be accessed at a same cycle.

[000215] In one example, a maximal memory access size may be 64 bits. According to this example, a peak throughput may be 256 bits, e.g., 64x4 = 256. For example, high memory bandwidth may be implemented to utilize computation capabilities of the data processing units 316.

[000216] In one example, two data processing units 316 may support 16 8-bit multiply & accumulate operations (MACs) per cycle. According to this example, the two data processing units 316 may not be fully utilized, for example, in case the input numbers are not fetched at this speed, and/or there are not exactly 256 bits of input, e.g., 16x8x2 = 256.

[000217] In some demonstrative aspects, AGUs 320 may be configured to perform memory access operations, e.g., loading and storing data from/to vector memories 312.

[000218] In some demonstrative aspects, AGUs 320 may be configured to compute addresses of input and output data items, for example, to handle I/O in a manner that keeps the data processing units 316 utilized, e.g., in case sheer bandwidth alone is not enough.

[000219] In some demonstrative aspects, AGUs 320 may be configured to compute the addresses of the input and/or output data items, for example, based on configuration registers written by the scalar processor 330, for example, before a block of vector commands, e.g., a loop, is entered.

[000220] For example, an image base pointer, a width, a height and/or a stride may be written to the configuration registers, for example, in order to configure an AGU 320 to iterate over an image.
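As an illustrative sketch of the address computation described above (the `agu_config` structure and all field and function names are assumptions for illustration, not the actual register layout or interface of AGUs 320):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical AGU configuration registers: an image base pointer, a
 * width, a height, and a stride, as may be written by the scalar
 * processor before a vector loop is entered. */
typedef struct {
    size_t base;   /* image base address                  */
    size_t width;  /* elements per row                    */
    size_t height; /* number of rows                      */
    size_t stride; /* address distance between row starts */
} agu_config;

/* Address of element (row, col). In hardware the AGU would advance
 * this address itself each cycle, so the data processing units never
 * increment pointers or check end-of-row conditions. */
static size_t agu_address(const agu_config *cfg, size_t row, size_t col)
{
    return cfg->base + row * cfg->stride + col;
}
```

A stride larger than the width may be used, for example, to skip padding bytes at the end of each image row.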

[000221] In some demonstrative aspects, AGUs 320 may be configured to handle addressing, e.g., all addressing, for example, to provide a technical solution in which data processing units 316 may not have the burden of incrementing pointers or counters in a loop, and/or the burden to check for end-of-row conditions, e.g., to zero a counter in the loop.

[000222] In some demonstrative aspects, as shown in Fig. 3, AGUs 320 may include 4 AGUs, and, accordingly, four memories 312 may be accessed at a same cycle. In other aspects, any other count of AGUs 320 may be implemented.

[000223] In some demonstrative aspects, AGUs 320 may not be "tied" to memory banks 312. For example, an AGU 320, e.g., each AGU 320, may access a memory bank 312, e.g., every memory bank 312, for example, as long as two or more AGUs 320 do not try to access the same memory bank 312 at the same cycle.

[000224] In some demonstrative aspects, vector registers 314 may be configured to support communication between the data processing units 316 and AGUs 320.

[000225] In one example, a total number of vector registers 314 may be 28, which may be divided into several subsets, e.g., based on their function. For example, a first subset of vector registers 314 may be used for inputs/outputs, e.g., of all data processing units 316 and/or AGUs 320; and/or a second subset of vector registers 314 may not be used for outputs of some operations, e.g., most operations, and may be used for one or more other operations, e.g., to store loop-invariant inputs.

[000226] In some demonstrative aspects, a data processing unit 316, e.g., each data processing unit 316, may have one or more registers to host an output of a last executed operation, e.g., which may be fed as inputs to other data processing units 316. For example, these registers may "bypass" the vector registers 314, and may work faster than writing these outputs to the first subset of vector registers 314.

[000227] In some demonstrative aspects, fetcher and decoder 350 may be configured to support low-overhead vector loops, e.g., very low overhead vector loops (also referred to as “zero-overhead vector loops”), for example, where there may be no need to check a termination (exit) condition of a vector loop during an execution of the vector loop.

[000228] For example, a termination (exit) condition may be signaled by an AGU 320, for example, when the AGU 320 finishes iterating over a configured memory region.

[000229] For example, fetcher and decoder 350 may quit the loop, for example, when the AGU 320 signals the termination condition.

[000230] For example, the scalar processor 330 may be utilized to configure the loop parameters, e.g., first & last instructions and/or the exit condition.

[000231] In one example, vector loops may be utilized, for example, together with high memory bandwidth and/or cheap addressing, for example, to solve a control and data flow problem, for example, to provide a technical solution to allow the data processing units 316 to process data, e.g., without substantial additional overhead.
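The zero-overhead loop mechanism described above may be modeled in software, for example, as follows; the `agu_state` type and the function names are hypothetical assumptions for illustration, and the sketch only shows that the loop body itself never tests a counter, with the exit condition coming from the modeled AGU:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of an AGU iterating over a configured memory
 * region; the AGU, not the loop body, raises the termination signal. */
typedef struct {
    size_t next;  /* next address to issue           */
    size_t last;  /* last address of the region      */
    size_t step;  /* address increment per iteration */
} agu_state;

static int agu_done(const agu_state *agu)
{
    return agu->next > agu->last;   /* the AGU's exit signal */
}

static size_t agu_advance(agu_state *agu)
{
    size_t addr = agu->next;
    agu->next += agu->step;
    return addr;
}

/* The "fetcher" repeats the loop body until the AGU signals
 * termination; the body never touches the counter itself. */
static int sum_region(const int *mem, agu_state *agu)
{
    int acc = 0;
    while (!agu_done(agu))
        acc += mem[agu_advance(agu)];
    return acc;
}
```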

[000232] In some demonstrative aspects, scalar processor 330 may be configured to provide one or more functionalities, which may be complementary to those of the vector processing block 310. For example, a large portion, e.g., most, of the work in a vector program may be performed by the data processing units 316. For example, the scalar processor 330 may be utilized, for example, for "gluing" together the various blocks of vector code of the vector program.

[000233] In some demonstrative aspects, scalar processor 330 may be implemented separately from vector processing block 310. In other aspects, scalar processor 330 may be configured to share one or more components and/or functionalities with vector processing block 310.

[000234] In some demonstrative aspects, scalar processor 330 may be configured to perform operations, which may not be suitable for execution on vector processing block 310.

[000235] For example, scalar processor 330 may be utilized to execute 32-bit C programs. For example, scalar processor 330 may be configured to support 1, 2, and/or 4 byte data types of C code, and/or some or all arithmetic operators of C code.

[000236] For example, scalar processor 330 may be configured to provide a technical solution to perform operations that cannot be executed on vector processing block 310, for example, without using a full-blown CPU.

[000237] In some demonstrative aspects, scalar processor 330 may include a scalar data memory 332, e.g., having a size of 16K or any other size, which may be configured to store data, e.g., variables used by the scalar parts of a program.

[000238] For example, scalar processor 330 may store local and/or global variables declared by portable C code, which may be allocated to scalar data memory by a compiler, e.g., compiler 200 (Fig. 2).

[000239] In some demonstrative aspects, as shown in Fig. 3, scalar processor 330 may include, or may be associated with, a set of scalar registers 334, which may be used in data processing performed by the scalar processor 330.

[000240] In some demonstrative aspects, scalar processor 330 may be associated with a scalar memory map, which may support scalar processor 330 in accessing substantially all states of vector processor 300. For example, the scalar processor 330 may configure the vector units and/or the DMA channels via the scalar memory map.

[000241] In some demonstrative aspects, scalar processor 330 may not be allowed to access one or more block control registers, which may be used by external processors to run and debug vector programs.

[000242] In some demonstrative aspects, DMA 340 may be configured to communicate with one or more other components of a chip implementing the vector processor 300, for example, via main memory. For example, DMA 340 may be configured to transfer blocks of data, e.g., large, contiguous, blocks of data, for example, to support the scalar processor 330 and/or the vector processing block, which may manipulate data stored in the local memories. For example, a vector program may be able to read data from the main chip memory using DMA 340.

[000243] In some demonstrative aspects, DMA 340 may be configured to communicate with other elements of the chip, for example, via a plurality of DMA channels, e.g., 8 DMA channels or any other count of DMA channels. For example, a DMA channel, e.g., each DMA channel, may be capable of transferring a rectangular patch from the local memories to the main chip memory, or vice versa. In other aspects, the DMA channel may transfer any other type of data block between the local memories and the main chip memory.

[000244] In some demonstrative aspects, a rectangular patch may be defined by a base pointer, a width, a height, and a stride.

[000245] For example, at peak throughput, 8 bytes per cycle may be transferred; however, there may be overheads for each patch and/or for each row in a patch.
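For example, the transfer of a rectangular patch may be modeled as follows; the `dma_patch` descriptor and function names are illustrative assumptions, not the actual DMA 340 interface, and the per-row copy mirrors the per-row overhead noted above:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical descriptor of a rectangular DMA patch: a base pointer,
 * a width (bytes per row), a height (rows), and a stride (bytes
 * between consecutive row starts in the containing image). */
typedef struct {
    const uint8_t *base;
    size_t width;
    size_t height;
    size_t stride;
} dma_patch;

/* Software model of one DMA channel transferring a patch into a dense
 * destination buffer, one row at a time. */
static void dma_copy_patch(const dma_patch *src, uint8_t *dst)
{
    for (size_t row = 0; row < src->height; row++)
        memcpy(dst + row * src->width,
               src->base + row * src->stride,
               src->width);
}
```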

[000246] In some demonstrative aspects, DMA 340 may be configured to transfer data, for example, in parallel with computations, e.g., via the plurality of DMA channels, for example, as long as executed commands do not access a local memory involved in the transfer.

[000247] In one example, as all channels may access the same memory bus, using several channels to implement a transfer may not save I/O cycles, e.g., compared to the case when a single channel is used. However, the plurality of DMA channels may be utilized to schedule several transfers and execute them in parallel with computations. This may be advantageous, for example, compared to a single channel, which may not allow scheduling a second transfer before completion of the first transfer.

[000248] In some demonstrative aspects, DMA 340 may be associated with a memory map, which may support the DMA channels in accessing vector memories and/or the scalar data. For example, access to the vector memories may be performed in parallel with computations. For example, access to the scalar data may usually not be allowed in parallel, e.g., as the scalar processor 330 may be involved in almost any sensible program, and may likely access its local variables while the transfer is performed, which may lead to a memory contention with the active DMA channel.

[000249] In some demonstrative aspects, DMA 340 may be configured to provide a technical solution to support parallelization of I/O and computations. For example, a program performing computations may not have to wait for I/O, for example, in case these computations may run fast by vector processing block 310.

[000250] In some demonstrative aspects, an external processor, e.g., a CPU, may be configured to initiate execution of a program on vector processor 300. For example, vector processor 300 may remain idle, e.g., as long as program execution is not initiated.

[000251] In some demonstrative aspects, the external processor may be configured to debug the program, e.g., execute a single step at a time, halt when the program reaches breakpoints, and/or inspect contents of registers and memories storing the program variables.

[000252] In some demonstrative aspects, an external memory map may be implemented to support the external processor in controlling the vector processor 300 and/or debugging the program, for example, by writing to control registers of the vector processor 300.

[000253] In some demonstrative aspects, the external memory map may be implemented by a superset of the scalar memory map. For example, this implementation may make all registers and memories defined by the architecture of the vector processor 300 accessible to a debugger back-end running on the external processor.

[000254] In some demonstrative aspects, the vector processor 300 may raise an interrupt signal, for example, when the vector processor 300 terminates a program.

[000255] In some demonstrative aspects, the interrupt signal may be used, for example, to implement a driver to maintain a queue of programs scheduled for execution by the vector processor 300, and/or to launch a new program, e.g., by the external processor, for example, upon the completion of a previously executed program.

[000256] Referring back to Fig. 1, in some demonstrative aspects, compiler 160 may be configured to generate the target code 115 based on one or more loops, which may be based, for example, on source code 112, e.g., as described below.

[000257] In some demonstrative aspects, compiler 160 may be configured to generate the target code 115 based on one or more loops, which may be configured, for example, according to a loop-execution scheme, e.g., as described below.

[000258] In some demonstrative aspects, the loop-execution scheme may be configured to provide a technical solution to support one or more vector processing architectures, for example, VLIW architectures and/or any other architectures, e.g., as described below.

[000259] In some demonstrative aspects, the loop-execution scheme may be configured to provide a technical solution to support improvement of one or more types of loop nests, for example, imperfect-loop-nests, e.g., as described below.

[000260] In some demonstrative aspects, a loop nest may include at least an outer loop and an inner loop, e.g., as described below.

[000261] In some demonstrative aspects, the loop nest may include an outer loop, e.g., a most outer loop, an inner loop, e.g., a most inner loop, and one or more nested loops (also referred to as "intermediate nested loops" or "intermediate loops"), which may be nested between the outer loop and the inner loop, e.g., as described below.

[000262] In some demonstrative aspects, the loop nest may include a plurality of loops nested in a plurality of nest levels, e.g., as described below.

[000263] In one example, the plurality of loops may include a first loop, e.g., an outer loop, for example, in a first nest level, and a second loop, e.g., an inner loop, for example, in a second nest level.

[000264] In one example, the plurality of loops may include one or more intermediate loops, for example, in one or more intermediate nest levels, e.g., between the first nest level and the second nest level.

[000265] In one example, the plurality of loops may include three loops in three nest levels. For example, the three loops may include a first loop, e.g., an outermost loop, at a first nest level; a second loop, e.g., an intermediate loop, at a second nest level; and a third loop, e.g., an innermost loop, at a third nest level. For example, the second loop may be nested in the first loop, and the third loop may be nested in the second loop.

[000266] In some demonstrative aspects, there may be a need to provide a technical solution to efficiently transform imperfect-loop-nests into perfect loop nests, for example, to improve performance of an executable program, for example, when executed by a processor, for example, a vector processor or any other target processor, e.g., as described below.

[000267] In some demonstrative aspects, a perfect loop-nest may be configured to include a loop nest, in which all compute operations of the loop-nest reside in an innermost loop of the perfect loop-nest.

[000268] For example, outer-loops of the perfect loop-nest may not include any compute instructions.

[000269] For example, all the compute instructions of the perfect loop-nest may be in the most inner loop of the perfect loop-nest.
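For example, the distinction may be illustrated with a hypothetical row-sum kernel (the kernel and all names are assumptions for illustration, not taken from the application): the imperfect form keeps instructions in the outer loop, while the perfect form moves them into the innermost loop, guarded by predicates on the inner counter:

```c
#include <assert.h>

#define H 3
#define W 4

/* An imperfect loop nest: the initialization of `sum` and the store
 * to `rows` sit in the outer loop, outside the innermost loop. */
static void row_sums_imperfect(int img[H][W], int rows[H])
{
    for (int i = 0; i < H; i++) {
        int sum = 0;                 /* pre-header instruction */
        for (int j = 0; j < W; j++)
            sum += img[i][j];
        rows[i] = sum;               /* latch instruction */
    }
}

/* The same nest perfectized: every compute instruction resides in
 * the innermost loop, under predicates on the inner counter j. */
static void row_sums_perfect(int img[H][W], int rows[H])
{
    int sum = 0;
    for (int i = 0; i < H; i++)
        for (int j = 0; j < W; j++) {
            if (j == 0)
                sum = 0;             /* sunk, loop-start predicate  */
            sum += img[i][j];
            if (j == W - 1)
                rows[i] = sum;       /* hoisted, loop-end predicate */
        }
}
```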

[000270] In one example, one or more processor architectures may require using perfect loop nests in a program and/or may benefit from using the perfect loop nests in the program.

[000271] In another example, the perfect loop-nests may be amenable to one or more, e.g., many, loop-optimizations.

[000272] In another example, one or more scheduling schemes, e.g., modulo-scheduling, which may be a critical optimization for VLIW targets, may not be able to optimize code across loop-levels/basic-blocks, for example, when perfect loop-nests are not used.

[000273] In another example, one or more processor architectures may support only the perfect loop nests. For example, these architectures may rely on being able to perfectize loop-nests into perfect loops.

[000274] In some demonstrative aspects, compiler 160 may be configured to generate the target code 115 based on one or more loops, which may be configured, for example, according to a loop-execution scheme, which may be configured to provide a technical solution to improve performance of a program executed by a target processor, for example, a vector processor, for example, by transforming imperfect-loop-nests into perfect loop nests, e.g., as described below.

[000275] In some demonstrative aspects, the loop-execution scheme may be configured to provide a technical solution to improve performance of a program executed by a target processor, for example, a vector processor, for example, by efficiently transforming imperfect-loop-nests into collapsed loops, e.g., as described below.

[000276] In some demonstrative aspects, a collapsed loop of a perfect loop nest may be configured to include a single-basic-block loop including all nested loops of the perfect loop nest, e.g., as described below.

[000277] In some demonstrative aspects, execution of the single-basic-block loop may be configured in advance, for example, along different dimensions, e.g., which may correspond to the original loops in the original loop nest.

[000278] For example, execution of the collapsed loop may be configured in advance, e.g., by control hardware ("HW controlled"), e.g., as described below.
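For example, a perfect two-level nest, e.g., a hypothetical row-sum kernel (the kernel and all names are assumptions for illustration), may be collapsed into a single-basic-block loop of H*W iterations, with the two original loop dimensions recovered from a single flat counter; in hardware this iteration could be configured in advance, while here it is modeled in software:

```c
#include <assert.h>

#define H 3
#define W 4

/* Collapsed form of a perfect row-sum nest: one single-basic-block
 * loop replaces the two original loops, and the original outer and
 * inner dimensions are derived from the flat counter k. */
static void row_sums_collapsed(int img[H][W], int rows[H])
{
    int sum = 0;
    for (int k = 0; k < H * W; k++) {
        int i = k / W;               /* outer-loop dimension */
        int j = k % W;               /* inner-loop dimension */
        if (j == 0)
            sum = 0;                 /* loop-start predicate */
        sum += img[i][j];
        if (j == W - 1)
            rows[i] = sum;           /* loop-end predicate   */
    }
}
```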

[000279] In one example, one or more processor architectures may support only collapsed loops. For example, these processor architectures may rely on an ability to collapse and/or perfectize loop-nests into single-basic-block and/or perfect loops.

[000280] In some demonstrative aspects, compiler 160 may be configured to generate the target code 115 based on one or more loops, which may be configured, for example, according to a loop-execution scheme, which may be configured to provide a technical solution to support transformation of imperfect-loop-nests into perfect loop nests, for example, to improve performance of an executable program, e.g., as described below.

[000281] In some demonstrative aspects, the loop-execution scheme may be configured to provide a technical solution to efficiently transform imperfect-loop-nests into collapsed loops, for example, by transforming perfect loop nests into collapsed loops, e.g., as described below.

[000282] In some demonstrative aspects, compiler 160 may be configured to generate the target code 115 based on a compilation scheme, which may be configured, for example, to provide a technical solution to support computing one or more predicates, which may be utilized for the transformation of imperfect-loop-nests into perfect loop nests and/or for the transformation of imperfect-loop-nests into collapsed loops, e.g., as described below.

[000283] In some demonstrative aspects, a predicate may be configured to indicate, identify, affirm, predict, and/or assert a start of a loop and/or an end of a loop, e.g., a first iteration or a last iteration of the loop.

[000284] In some demonstrative aspects, the predicate may be configured to identify the start of a loop and/or the end of a loop, for example, even without handling and/or maintaining an induction variable, e.g., as described below.

[000285] In one example, one or more predicates may be utilized to indicate a start and/or an end of execution of one or more inner-loops nested in an original loop nest, e.g., as described below.

[000286] In one example, it may be important to efficiently compute the predicates, for example, in cases where loop perfectization and/or loop collapsing rely on predication, e.g., as described below.

[000287] In another example, it may be important to efficiently compute the predicates, for example, to support processor architectures, which may not be able to efficiently compute induction-variables, e.g., as described below.

[000288] In some demonstrative aspects, the loop-execution scheme may be configured to provide a technical solution to support processor architectures, which may not support predicated instructions for non-memory-access operations.

[000289] In some demonstrative aspects, the loop-execution scheme may be configured to provide a technical solution to support computing predicates, for example, to support transformation of loop nests into perfect loop nests and/or collapsed loops, e.g., as described below.

[000290] In some demonstrative aspects, the loop-execution scheme may be configured to provide a technical solution to support executing programs more efficiently, for example, while avoiding a need to compute an induction-variable based predicate, e.g., as described below.

[000291] In some demonstrative aspects, the loop-execution scheme may be configured to provide a technical solution to support one or more architectures, which do not have predicated operations in hardware and/or have loop-nests controlled by hardware, e.g., as described below.

[000292] In some demonstrative aspects, compiler 160 may be configured to identify one or more loop nests based on source code, e.g., as described below.

[000293] In some demonstrative aspects, compiler 160 may be configured to identify one or more of the loop nests in source code 112, e.g., in case the loop nests are included in source code 112.

[000294] In some demonstrative aspects, compiler 160 may be configured to identify one or more of the loop nests in code, e.g., middle-end code, which may be compiled from the source code 112.

[000295] In some demonstrative aspects, compiler 160 may be configured to transform one or more identified loop nests into one or more perfect loop-nests, e.g., as described below.

[000296] In some demonstrative aspects, compiler 160 may be configured to compile the source code into the target code 115, for example, such that target code 115 may be based on the one or more perfect loop-nests, e.g., as described below.

[000297] In some demonstrative aspects, compiler 160 may be configured to transform the one or more identified loop nests into the one or more perfect loop-nests, for example, according to a loop-perfectization scheme, e.g., as described below.

[000298] In some demonstrative aspects, compiler 160 may be configured to move one or more outer instructions from outer loop-levels of a loop nest into an inner loop, e.g., an innermost loop, of the loop nest, for example, while using one or more predicates to guard the execution of the outer instructions that were moved into the inner loop, e.g., as described below.

[000299] In some demonstrative aspects, the predicates may be configured to check and/or represent a state of an induction-variable, which counts a number of iterations of the inner-loop, e.g., as described below.

[000300] In some demonstrative aspects, a predicate (also referred to as a “loop-start predicate”) may be configured to identify when the induction-variable may be equal to the start of a respective inner-loop, e.g., for an instruction (also referred to as “sunk instruction”) moved from before the inner-loop into the inner-loop, e.g., as described below.

[000301] In some demonstrative aspects, a predicate (also referred to as a “loop-end predicate”) may be configured to identify when the induction-variable may be equal to the end of a respective inner-loop, e.g., for an instruction (also referred to as “hoist instruction”) moved from after the inner-loop into the inner-loop, e.g., as described below.

[000302] In some demonstrative aspects, compiler 160 may be configured to move all instructions from outer loop-levels (nest levels) of the loop nest into the innermost loop of the loop nest, for example, to transform the loop nest into a perfect loop nest, e.g., as described below.

[000303] In some demonstrative aspects, compiler 160 may be configured to identify one or more outer-loop instructions of an outer loop of a loop nest, which is outer to an inner loop of the loop nest, e.g., as described below.

[000304] In some demonstrative aspects, compiler 160 may be configured to move an outer-loop instruction into the inner loop of the loop nest, for example, based on a location-based criterion relating to a location of the outer-loop instruction with respect to the inner loop, e.g., as described below.

[000305] In some demonstrative aspects, the location-based criterion may be used to identify whether the outer-loop instruction is before the inner loop (“a pre-header instruction”) or after the inner loop (“a latch instruction”), e.g., as described below.

[000306] In some demonstrative aspects, compiler 160 may be configured to transform an outer-loop instruction into a conditional instruction in the inner loop, which may be within the inner loop of the loop nest, e.g., as described below.

[000307] For example, the conditional instruction may be configured based on the location-based criterion, e.g., as described below.

[000308] In some demonstrative aspects, the conditional instruction may be configured, for example, based on a predicate to indicate, identify, affirm, predict, and/or assert a count of iterations of the inner loop, e.g., as described below.

[000309] In some demonstrative aspects, the predicate may be utilized to indicate, identify, affirm, predict, and/or assert whether the inner loop is at a first iteration of the inner loop or at a last iteration of the inner loop, e.g., as described below.

[000310] In some demonstrative aspects, the conditional instruction may be configured as a memory access operation, which may be based, for example, on the predicate on the count of iterations of the inner loop, e.g., as described below.

[000311] In some demonstrative aspects, the outer-loop instruction may include a load operation, and the memory access operation may be configured to perform the load operation, for example, based on the predicate on the count of iterations of the inner loop, e.g., as described below.

[000312] In some demonstrative aspects, the outer-loop instruction may include a store operation, and the memory access operation may be configured to perform the store operation, for example, based on the predicate on the count of iterations of the inner loop, e.g., as described below.

[000313] In some demonstrative aspects, compiler 160 may be configured to sink an instruction, for example, by moving into an inner loop an instruction to be performed before a first iteration of the inner loop, e.g., as described below.

[000314] In some demonstrative aspects, a pre-header instruction may be sunk, for example, by moving the pre-header instruction into the inner loop, and transforming the pre-header instruction into a pre-header conditional instruction, e.g., as described below.

[000315] In some demonstrative aspects, the pre-header conditional instruction may include a condition to configure a result of the pre-header conditional instruction based on a predicate, for example, on the inner loop, e.g., as described below.

[000316] In some demonstrative aspects, compiler 160 may generate the target code 115 based on compiled code, which may be configured to configure a particular result of the pre-header conditional instruction, for example, when the predicate identifies that execution of the inner loop is before the first iteration of the inner loop, e.g., as described below.

[000317] In some demonstrative aspects, compiler 160 may be configured to hoist an instruction, for example, by moving into an inner loop an instruction to be performed after a last iteration of the inner loop, e.g., as described below.

[000318] In some demonstrative aspects, a latch instruction may be hoisted, for example, by moving the latch instruction into the inner loop, and transforming the latch instruction into a latch conditional instruction (also referred to as “hoisted conditional instruction”), e.g., as described below.

[000319] In some demonstrative aspects, the latch conditional instruction may include a condition to configure a result of the latch conditional instruction based on a predicate, for example, on the inner loop, e.g., as described below.

[000320] In some demonstrative aspects, compiler 160 may generate the target code 115 based on compiled code, which may be configured to configure a particular result of the latch conditional instruction, for example, when the predicate identifies that execution of the inner loop is after the last iteration of the inner loop.

[000321] In some demonstrative aspects, compiler 160 may be configured to repeatedly and/or iteratively perform the hoist and/or sink operations, for example, to iterate over all instructions in the loop nest, for example, until substantially all instructions are moved from the outer loops into the innermost loop, e.g., as described below.
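The sink and hoist operations described above may be pictured with a short sketch. The following plain-C comparison (illustrative names and loop bounds, not the target VMP/AGU machinery) checks that guarding a sunk pre-header instruction with a first-iteration predicate, and a hoisted latch instruction with a last-iteration predicate, preserves the behavior of the original imperfect nest:

```c
#include <assert.h>

#define N 4

/* Original imperfect nest: a pre-header instruction before the inner
 * loop and a latch instruction after it. */
static int run_imperfect(int out[N]) {
    int sum = 0;
    for (int y = 0; y < N; y++) {
        out[y] = y * 10;            /* pre-header instruction */
        for (int z = 0; z < N; z++)
            sum += y + z;           /* inner-loop body */
        out[y] += 1;                /* latch instruction */
    }
    return sum;
}

/* Perfectized nest: the pre-header instruction is sunk under a
 * first-iteration predicate, and the latch instruction is hoisted
 * under a last-iteration predicate, so the outer-loop body contains
 * only the inner loop. */
static int run_perfect(int out[N]) {
    int sum = 0;
    for (int y = 0; y < N; y++) {
        for (int z = 0; z < N; z++) {
            if (z == 0)             /* sunk conditional instruction */
                out[y] = y * 10;
            sum += y + z;
            if (z == N - 1)         /* hoisted conditional instruction */
                out[y] += 1;
        }
    }
    return sum;
}
```

Here the comparisons z == 0 and z == N - 1 stand in for the start-of-loop and end-of-loop predicates; the scheme described in this document obtains them from AGU configuration rather than explicit compare instructions.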

[000322] For example, the loop-execution scheme may be configured to provide a technical solution to support VMP iterators, which may work on perfect loop nests only. For example, the loop-execution scheme may result in a situation where there are no instructions between two subsequent “for” statements that compose the loop nest, e.g., as described below.

[000323] In some demonstrative aspects, compiler 160 may be configured to compile source code 112 into target code 115, for example, by transforming one or more loop nests into collapsed loops, e.g., as described below.

[000324] In some demonstrative aspects, the one or more loop nests may be transformed into collapsed loops, for example, to provide a technical solution to improve performance of execution of a program by a target processor, for example, a vector processor and/or any other processor, e.g., as described below.

[000325] In some demonstrative aspects, compiler 160 may be configured to transform one or more identified loop nests into collapsed loops, for example, according to a loop-collapsing scheme, e.g., as described below.

[000326] In some demonstrative aspects, compiler 160 may be configured to apply the loop-collapsing scheme, for example, based on a result of the loop-perfectization scheme, as described below.

[000327] In some demonstrative aspects, compiler 160 may be configured to apply the loop-collapsing scheme, for example, even without performing the loop-perfectization scheme, for example, when the loop-perfectization scheme is unnecessary, e.g., when an input loop nest includes a perfect loop nest.

[000328] In one example, compiler 160 may be configured to apply the loop-collapsing scheme, for example, to provide target code 115 configured for execution by one or more processor architectures, which support only single-basic-block loops.

[000329] In some demonstrative aspects, compiler 160 may be configured to collapse a plurality of individual loops of a loop nest into a single loop, which may be configured, for example, to execute substantially all the iterations of the original loop nest, e.g., as described below.

[000330] In some demonstrative aspects, execution of the collapsed loop may be configured in advance, for example, to configure advancement along dimensions of the original loop.
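As a sketch of the loop-collapsing idea (illustrative C; explicit counter bookkeeping stands in for the advancement "configured in advance"), a two-deep nest can be replaced by a single-basic-block loop whose dimension counters wrap without nested control flow and without div/mod on the flat index:

```c
#include <assert.h>

#define HEIGHT 3
#define WIDTH  5

/* Reference: the original two-dimensional loop nest. */
static int sum_nest(void) {
    int sum = 0;
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            sum += y * WIDTH + x;
    return sum;
}

/* Collapsed loop: one single-basic-block loop executes all
 * HEIGHT * WIDTH iterations; the x and y counters advance by
 * pre-configured wrap-around bookkeeping instead of nested loops. */
static int sum_collapsed(void) {
    int sum = 0;
    int x = 0, y = 0;
    for (int ind = 0; ind < HEIGHT * WIDTH; ind++) {
        sum += y * WIDTH + x;
        if (++x == WIDTH) {   /* end of the inner dimension: wrap */
            x = 0;
            y++;
        }
    }
    return sum;
}
```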

[000331] In some demonstrative aspects, the loop-collapsing scheme may be configured to provide a technical solution to support execution of target code 115 by one or more processor architectures, including processor architectures in which computation of induction variables may be computationally expensive.

[000332] In some demonstrative aspects, compiler 160 may be configured to identify a perfect loop nest including a plurality of nested loops, for example, based on the source code 112, e.g., as described below.

[000333] In some demonstrative aspects, the plurality of nested loops may correspond to a respective plurality of dimensions, e.g., as described below.

[000334] In some demonstrative aspects, a dimension of a nested loop may be executed during a number of iterations of the nested loops, e.g., as described below.

[000335] In some demonstrative aspects, compiler 160 may be configured to configure a collapsed loop based on the loop nest, for example, by collapsing the plurality of nested loops into a single loop, for example, based on the plurality of dimensions, e.g., as described below.

[000336] In some demonstrative aspects, compiler 160 may compile a source code 112 of a program to be executed by a target processor, e.g., processor 180, as described below.

[000337] For example, compiler 160 may identify, for example, based on source code 112, a loop nest including an outer loop (y loop) along a dimension based on a variable height, a nested loop (x loop) along a dimension based on a variable width, and an inner loop (z loop) along a dimension based on a variable area, e.g., as follows:

for(int y = 0; y < height; y++) {
    char a;
    for(int x = 0; x < width; x++) {
        outl[y * width + x] = inpl[y * width + x] + 7;
        for(int z = 0; z < area; z++) {
            a = inp2[y * width * area + x * area + z];
        }
    }
    out2[y] = a;
}

Example (4)

[000338] For example, as shown by Example 4, the nested loop may include a pre-header instruction, e.g., outl[y * width + x] = inpl[y * width + x] + 7, which may reside before the header of the inner loop (z loop).

[000339] For example, as shown by Example 4, the pre-header instruction (outl[y * width + x] = inpl[y * width + x] + 7) may be executed, for example, before execution of the inner loop begins.

[000340] For example, as shown by Example 4, the pre-header instruction (outl[y * width + x] = inpl[y * width + x] + 7) may include an outer load operation, e.g., inpl[y * width + x], and an outer store operation, e.g., “outl[y * width + x] =”, which may be executed, for example, every time before execution of the inner loop begins.

[000341] In some demonstrative aspects, compiler 160 may be configured to sink the pre-header instruction into the inner loop, for example, to generate a perfect loop nest, e.g., as described below.

[000342] For example, compiler 160 may be configured to move the pre-header instruction (outl[y * width + x] = inpl[y * width + x] + 7) into the inner loop, and to transform the pre-header instruction (outl[y * width + x] = inpl[y * width + x] + 7) into a conditional pre-header instruction (also referred to as a “sunk conditional instruction”), e.g., as described below.

[000343] For example, as shown by Example 4, the outer loop may include a latch instruction, e.g., out2[y] = a, which may be after the nested loop and the inner loop.

[000344] For example, as shown by Example 4, the latch instruction may include an outer store operation, e.g., out2[y] = a.

[000345] For example, as shown by Example 4, the latch instruction (out2[y] = a) may be executed, for example, every time after the last iteration of the inner-loop and the nested-loop.

[000346] In some demonstrative aspects, compiler 160 may be configured to hoist the latch instruction (out2[y] = a) into the inner loop, for example, to generate a perfect loop nest, e.g., as described below.

[000347] For example, compiler 160 may be configured to move the latch instruction (out2[y] = a) into the inner loop, and to transform the latch instruction (out2[y] = a) into a conditional latch instruction (also referred to as a “hoisted conditional instruction”), which may be based on the latch instruction, e.g., as described below.

[000348] For example, as shown by Example 4, the inner loop may include an inner load instruction, e.g., a = inp2[y * width * area + x * area + z], which may be within the inner loop.

[000349] In some demonstrative aspects, compiler 160 may be configured to transform the perfect loop nest into a collapsed loop along a dimension, which may be based, for example, on the value height, on the value width, and on the value area, e.g., as follows:

for(int ind = 0; ind < height * width * area; ind++) {
    char val = inpl[inpl_ind];
    char result = val + 7;
    if (first_iteration_of_z_loop)
        outl[outl_ind] = result;
    char a = inp2[inp2_ind];
    if (last_iteration_of_x_and_z_loops)
        out2[out2_ind] = a;
}

Example (5)

[000350] In some demonstrative aspects, as shown by Example 5, the collapsed loop may include a single block including instructions based on all the instructions of Example 4.

[000351] In some demonstrative aspects, as shown by Example 5, the load and store operations in the pre-header instruction outl[y * width + x] = inpl[y * width + x] + 7 may be transformed into a conditional pre-header instruction “if (first_iteration_of_z_loop) outl[outl_ind] = result”.

[000352] In some demonstrative aspects, as shown by Example 5, the conditional pre-header instruction may include a condition based on a predicate, which may indicate, identify, affirm, predict, and/or assert a start of the inner-loop.

[000353] In some demonstrative aspects, as shown by Example 5, the conditional pre-header instruction may be executed, for example, only when the predicate (first_iteration_of_z_loop) is true, e.g., only when the inner loop starts to execute.

[000354] In some demonstrative aspects, as shown by Example 5, the latch instruction out2[y] = a may be transformed into a conditional latch instruction “if(last_iteration_of_x_and_z_loops) out2[out2_ind] = a”.

[000355] In some demonstrative aspects, as shown by Example 5, the conditional latch instruction may include a condition based on a predicate, which may indicate, identify, affirm, predict, and/or assert the last iteration of the inner-loop and the last iteration of the nested loop.

[000356] In some demonstrative aspects, as shown by Example 5, the conditional latch instruction may be executed, for example, only when the predicate (last_iteration_of_x_and_z_loops) is true, e.g., only when the inner loop and the nested loop are after the last iteration.

[000357] In some demonstrative aspects, as shown by Example 5, the inner load instruction, e.g., a = inp2[y * width * area + x * area + z], may be transformed into a load instruction, e.g., char a = inp2[inp2_ind]. For example, as shown by Example 5, the inner load instruction (a = inp2[y * width * area + x * area + z]) may not be transformed into a conditional instruction.
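The transformation from Example 4 to Example 5 can be checked end-to-end in plain C. In the sketch below, the AGU-maintained indices (inpl_ind, inp2_ind) and the two predicates are emulated with explicit counters; this is an illustrative stand-in, since the scheme described here computes them as AGU parameters rather than in the loop body:

```c
#include <assert.h>
#include <string.h>

enum { HEIGHT = 2, WIDTH = 3, AREA = 4 };

/* Example (4): the original imperfect loop nest. */
static void run_example4(const char *inpl, const char *inp2,
                         char *outl, char *out2) {
    char a = 0;
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++) {
            outl[y * WIDTH + x] = inpl[y * WIDTH + x] + 7;
            for (int z = 0; z < AREA; z++)
                a = inp2[y * WIDTH * AREA + x * AREA + z];
        }
        out2[y] = a;
    }
}

/* Example (5): the collapsed single-block loop.  The flat index ind
 * equals y*WIDTH*AREA + x*AREA + z, so it serves directly as
 * inp2_ind; the predicates are recovered from the wrapped counters. */
static void run_example5(const char *inpl, const char *inp2,
                         char *outl, char *out2) {
    int x = 0, y = 0, z = 0;
    for (int ind = 0; ind < HEIGHT * WIDTH * AREA; ind++) {
        int inpl_ind = y * WIDTH + x;   /* also used as outl_ind */
        char val = inpl[inpl_ind];
        char result = val + 7;
        if (z == 0)                          /* first_iteration_of_z_loop */
            outl[inpl_ind] = result;
        char a = inp2[ind];                  /* inp2_ind == ind */
        if (x == WIDTH - 1 && z == AREA - 1) /* last_iteration_of_x_and_z_loops */
            out2[y] = a;
        if (++z == AREA) { z = 0; if (++x == WIDTH) { x = 0; y++; } }
    }
}
```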

[000358] In some demonstrative aspects, as shown by Example 5, indices of the load and store instructions may be computed, for example, as AGU parameters.

[000359] In some demonstrative aspects, as shown by Example 5, indices of the conditional latch instruction and/or the conditional pre-header instruction may be computed, for example, as AGU parameters, e.g., as described below.

[000360] In some demonstrative aspects, compiler 160 may be configured to provide a technical solution to support computing predicates, e.g., conditional instructions, which may be utilized for the transformation of outer-loop instructions into instructions of a collapsed loop, e.g., as described below.

[000361] In some demonstrative aspects, for example, in some use cases, implementations, and/or scenarios, transforming the outer-loop instructions using conditional store/load instructions may be inefficient.

[000362] In one example, the conditional store/load instructions may require an additional condition instruction.

[000363] In another example, the conditional store/load instructions may require maintaining one or more induction variables in the loop.

[000364] In some demonstrative aspects, compiler 160 may be configured to configure a collapsed loop, e.g., the collapsed loop of Example 5, for example, according to a predicate-based memory-access mechanism, which may be configured to support configuration of memory access operations, for example, based on one or more predicates, e.g., as described below.

[000365] In some demonstrative aspects, the predicate-based memory-access mechanism may be configured to provide a technical solution to support configuration of memory access operations, e.g., load instructions and/or store instructions, based on a start-of-loop predicate and/or an end-of-loop predicate, e.g., as described below.

[000366] In some demonstrative aspects, the predicate-based memory-access mechanism may be configured to provide a technical solution to support memory access operations based on the start-of-loop predicate and/or the end-of-loop predicate, for example, even without computing induction variables of one or more of the loops in the loop nest, e.g., as described below.

[000367] In some demonstrative aspects, the predicate-based memory-access mechanism may be configured to provide a technical solution to support memory access operations based on the start-of-loop predicate and/or the end-of-loop predicate, for example, at processor architectures, which may not support computation of induction variables.

[000368] In some demonstrative aspects, the predicate-based memory-access mechanism may be configured to provide a technical solution to support memory access operations based on the start-of-loop predicate and/or the end-of-loop predicate, for example, at processor architectures, in which it may be computationally expensive to compute induction variables, e.g., when loops are entirely configured in advance.

[000369] In some demonstrative aspects, compiler 160 may be configured to implement the conditional latch instruction (hoisted conditional instruction) and/or the conditional pre-header instruction (sunk conditional instruction) of Example 5, for example, by setting one or more AGU parameters corresponding to the conditional latch instruction and/or the conditional pre-header instruction, e.g., as described below.

[000370] In some demonstrative aspects, compiler 160 may be configured to configure an AGU to perform a memory access operation, which may be performed at a start of an inner loop, or at an end of an inner loop, for example, based on an outer loop instruction, e.g., as described below.

[000371] In some demonstrative aspects, compiler 160 may be configured to identify a loop nest based on a source code 112 to be compiled into a target code 115 to be executed by a target processor 180, e.g., as described below.

[000372] In some demonstrative aspects, compiler 160 may be configured to generate the target code 115 configured, for example, for execution by a target vector processor, for example, a vector processor 180, e.g., as described below.

[000373] In some demonstrative aspects, compiler 160 may be configured to generate the target code 115 configured, for example, for execution by a Very Long Instruction Word (VLIW) Single Instruction/Multiple Data (SIMD) target processor, e.g., processor 180.

[000374] In other aspects, compiler 160 may be configured to generate the target code 115 configured, for example, for execution by any other suitable type of processor.

[000375] In some demonstrative aspects, compiler 160 may be configured to generate the target code 115, for example, based on the source code 112 including Open Computing Language (OpenCL) code.

[000376] In other aspects, compiler 160 may be configured to generate the target code 115, for example, based on the source code 112 including any other suitable type of code.

[000377] In some demonstrative aspects, compiler 160 may be configured to compile the source code 112 into the target code 115, for example, according to a Low Level Virtual Machine (LLVM) based (LLVM-based) compilation scheme.

[000378] In other aspects, compiler 160 may be configured to compile the source code 112 into the target code 115 according to any other suitable compilation scheme.

[000379] In some demonstrative aspects, compiler 160 may be configured to identify the loop nest in source code 112, e.g., in case the loop nest is included in source code 112.

[000380] In some demonstrative aspects, compiler 160 may be configured to identify the loop nest in code, e.g., middle-end code or any other code, which may be compiled from the source code 112.

[000381] In some demonstrative aspects, the loop nest may include a plurality of loops, for example, including at least an outer loop and an inner loop inside the outer loop, e.g., as described below.

[000382] In some demonstrative aspects, the plurality of loops may include at least a first loop, e.g., an outer loop, and a second loop, e.g., an inner loop, nested in the first loop, e.g., as described below.

[000383] In some demonstrative aspects, the first loop may include at least one first- loop instruction (“outer loop instruction”), which may be outside the second loop, e.g., as described below.

[000384] In some demonstrative aspects, compiler 160 may be configured to generate AGU configuration code, for example, to configure an AGU of the target processor 180, for example, based on the first-loop instruction, e.g., as described below.

[000385] In some demonstrative aspects, the AGU configuration code may be configured to configure a first dimension of the AGU, for example, based on the first loop, e.g., as described below.

[000386] In some demonstrative aspects, the AGU configuration code may be configured to configure a second dimension of the AGU, for example, based on the second loop, e.g., as described below.

[000387] In some demonstrative aspects, the AGU configuration code may be configured to configure the second dimension of the AGU, for example, to configure a memory-access operation to be performed at a start of the second loop, or at an end of the second loop, e.g., as described below.
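One way to picture the per-dimension AGU parameters is a toy address generator in which dimension d contributes clamp(index_d * Step_d, Min_d, Max_d) to the base address. This clamp semantics is an assumption made purely for illustration (the actual AGU behavior is hardware-defined), but it shows the effect used below: a step of zero along the inner-loop dimension keeps the generated address constant across inner-loop iterations, while the outer-loop dimension advances it:

```c
#include <assert.h>

/* Toy model of a multi-dimensional AGU (illustrative assumption, not
 * the hardware definition): each dimension contributes its iteration
 * index times its step, clamped into [min, max]; the sum is added to
 * the base. */
struct dim_cfg { int step, min, max; };

static int clampi(int v, int lo, int hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

static int agu_addr(int base, const struct dim_cfg *d,
                    const int *idx, int ndims) {
    int addr = base;
    for (int k = 0; k < ndims; k++)
        addr += clampi(idx[k] * d[k].step, d[k].min, d[k].max);
    return addr;
}
```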

[000388] In some demonstrative aspects, the memory-access operation may be based, for example, on the first-loop instruction, e.g., as described below.

[000389] In some demonstrative aspects, the memory-access operation may include a load operation or a store operation, e.g., as described below.

[000390] In some demonstrative aspects, compiler 160 may be configured to generate target code 115, for example, based on compilation of the source code 112, e.g., as described below.

[000391] In some demonstrative aspects, the target code 115 may be based, for example, on the AGU configuration code, e.g., as described below.

[000392] In some demonstrative aspects, the plurality of loops may include a third loop, which may be, for example, nested in the first loop, e.g., as described below.

[000393] In some demonstrative aspects, the second loop may be nested in the third loop, e.g., as described below.

[000394] In some demonstrative aspects, the first-loop instruction may be outside the third loop, e.g., as described below.

[000395] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to configure a third dimension of the AGU, for example, based on the third loop, e.g., as described below.

[000396] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to configure the third dimension, for example, to configure the memory-access operation to be performed at the start of the second loop or at the end of the second loop, e.g., as described below.

[000397] In some demonstrative aspects, the third loop may include a third-loop instruction, which may be, for example, outside the second loop, e.g., as described below.

[000398] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to configure an other AGU of the target processor, for example, based on the third-loop instruction, e.g., as described below.

[000399] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to configure a first dimension of the other AGU, for example, based on the third loop, e.g., as described below.

[000400] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to configure a second dimension of the other AGU, for example, based on the second loop, e.g., as described below.

[000401] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to configure the second dimension of the other AGU, for example, to configure an other memory-access operation to be performed at the start of the second loop or at the end of the second loop, e.g., as described below.

[000402] In some demonstrative aspects, the other memory-access operation may be based, for example, on the third-loop instruction, e.g., as described below.

[000403] In some demonstrative aspects, compiler 160 may be configured to transform the loop nest into a transformed loop including the memory-access operation, e.g., as described below.

[000404] In some demonstrative aspects, the target code 115 may be based, for example, on the transformed loop, e.g., as described below.

[000405] In some demonstrative aspects, the transformed loop may include the memory-access operation and the other memory access operation, e.g., as described below.

[000406] In some demonstrative aspects, the transformed loop may include a perfect flat loop, in which, for example, all compute operations of the loop nest are implemented in the transformed loop, e.g., as described below.

[000407] In some demonstrative aspects, the transformed loop may include a fully collapsed loop including, for example, only a single-basic-block loop, for example, based on the plurality of loops, e.g., as described below.

[000408] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set a base parameter of the AGU, for example, based on a memory pointer of the first-loop instruction, e.g., as described below.

[000409] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set a Maximum (Max) parameter of the second dimension of the AGU, for example, based on an entry size corresponding to the first-loop instruction, e.g., as described below.

[000410] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the Max parameter of the second dimension of the AGU, and to set a Max parameter of the third dimension of the AGU, for example, based on the entry size corresponding to the first-loop instruction, for example, in case the second loop is nested in the third loop, e.g., as described below.

[000411] In some demonstrative aspects, the at least one first-loop instruction may include a pre-header instruction to be performed before a first iteration of the inner loop, e.g., as described below.

[000412] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to configure the second dimension of the AGU, for example, to configure the memory-access operation to be performed only at the start of the second loop, for example, when the first-loop instruction includes the pre-header instruction, e.g., as described below.

[000413] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the base parameter of the AGU to the memory pointer of the pre-header instruction, for example, when the first-loop instruction includes the pre-header instruction, e.g., as described below.

[000414] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set a Minimum (Min) parameter of the second dimension of the AGU to zero, for example, when the first-loop instruction includes the pre-header instruction, e.g., as described below.

[000415] In some demonstrative aspects, the pre-header instruction may include a load operation, e.g., as described below.

[000416] In some demonstrative aspects, compiler 160 may be configured to, for example, based on a determination that the pre-header instruction includes a load operation, configure the AGU configuration code to set a step parameter of the second dimension of the AGU to zero, e.g., as described below.

[000417] In some demonstrative aspects, the pre-header instruction may include a store operation, e.g., as described below.

[000418] In some demonstrative aspects, compiler 160 may be configured to, for example, based on a determination that the pre-header instruction includes a store operation, configure the AGU configuration code to set the step parameter of the second dimension of the AGU, for example, based on an entry size corresponding to the pre-header instruction, e.g., as described below.

[000419] In some demonstrative aspects, the at least one first-loop instruction may include a latch instruction to be performed after a last iteration of the second-loop, e.g., as described below.

[000420] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to configure the second dimension of the AGU, for example, to configure the memory-access operation to be performed only at the end of the second loop, for example, when the first-loop instruction includes the latch instruction, e.g., as described below.

[000421] In some demonstrative aspects, the latch instruction may include a load operation, e.g., as described below.

[000422] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the base parameter of the second dimension of the AGU, for example, to the memory pointer of the latch instruction, for example, when the latch instruction includes the load operation, e.g., as described below.

[000423] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set a Min parameter of the second dimension of the AGU to zero, for example, when the latch instruction includes the load operation, e.g., as described below.

[000424] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the Max parameter of the second dimension of the AGU, for example, to an entry size corresponding to the latch instruction, for example, when the latch instruction includes the load operation, e.g., as described below.

[000425] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the step parameter of the second dimension of the AGU to zero, for example, when the latch instruction includes the load operation, e.g., as described below.

[000426] In some demonstrative aspects, the latch instruction may include a store operation, e.g., as described below.

[000427] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the base parameter of the AGU, for example, based on a first parameter value, a second parameter value, and a third parameter value, for example, when the latch instruction includes the store operation, e.g., as described below.

[000428] In some demonstrative aspects, the first parameter value may include the entry size corresponding to the latch instruction, e.g., as described below.

[000429] In some demonstrative aspects, the second parameter value may include a total count of iterations over one or more loops, which are in the first loop and include the second loop, e.g., as described below.

[000430] In some demonstrative aspects, the third parameter value may include a count of dimensions of the AGU corresponding to the one or more loops, e.g., as described below.

[000431] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the base parameter, denoted Base, of the AGU, e.g., as follows:

Base = OrigBase + EntrySize * ((∏_L TripCount(L)) - #InnerDims),

wherein OrigBase denotes a memory pointer of the latch instruction, wherein EntrySize denotes the entry size corresponding to the latch instruction, wherein ∏_L TripCount(L) denotes the total count of iterations over the one or more loops, which are in the first loop and include the second loop, and wherein #InnerDims denotes the count of dimensions of the AGU corresponding to the one or more loops, e.g., as described below.
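A numeric check of the Base formula, assuming (as an interpretation of the bracketed term in the extracted text) that the total count of iterations is the product of the trip counts of the enclosed loops:

```c
#include <assert.h>

/* Illustrative restatement of the Base formula for the hoisted
 * (latch) store: Base = OrigBase + EntrySize * (TotalIterations -
 * #InnerDims), where TotalIterations is assumed to be the product of
 * the trip counts of the loops enclosed by the first loop. */
static int hoisted_store_base(int orig_base, int entry_size,
                              const int *trip, int inner_dims) {
    int total = 1;
    for (int d = 0; d < inner_dims; d++)
        total *= trip[d];   /* product of TripCount(L) */
    return orig_base + entry_size * (total - inner_dims);
}
```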

[000432] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the step parameter of the second dimension of the AGU, for example, based on the entry size corresponding to the latch instruction, for example, when the latch instruction includes the store operation, e.g., as described below.

[000433] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the Min parameter of the second dimension of the AGU, for example, based on the entry size corresponding to the latch instruction, and a count of iterations in the second loop, for example, when the latch instruction includes the store operation, e.g., as described below.

[000434] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the Max parameter of the second dimension of the AGU, for example, based on the Min parameter of the second dimension of the AGU and the entry size corresponding to the latch instruction, for example, when the latch instruction includes the store operation, e.g., as described below.

[000435] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the step parameter of the second dimension of the AGU, for example, based on an additive inverse of the entry size corresponding to the latch instruction, for example, when the latch instruction includes the store operation, e.g., as described below.

[000436] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the Min parameter of the second dimension of the AGU, for example, based on the entry size corresponding to the latch instruction, and the count of iterations in the second loop, for example, when the latch instruction includes the store operation, e.g., as described below.

[000437] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the Min parameter of the second dimension of the AGU, for example, based on a product of the additive inverse of the entry size corresponding to the latch instruction, and a subtraction result of subtracting one from the count of iterations in the second loop, for example, when the latch instruction includes the store operation, e.g., as described below.

[000438] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the Max parameter of the second dimension of the AGU, for example, based on the Min parameter of the second dimension of the AGU and the entry size corresponding to the latch instruction, for example, when the latch instruction includes the store operation, e.g., as described below.

[000439] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set the Max parameter of the second dimension of the AGU, for example, based on a sum of the Min parameter of the second dimension of the AGU and the entry size corresponding to the latch instruction, for example, when the latch instruction includes the store operation, e.g., as described below.
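The step/Min/Max rules above for a hoisted (latch) store can be applied directly; the helper below is an illustrative restatement (names are assumptions, not target code): the step is the additive inverse of the entry size, Min is that inverse times the count of iterations minus one, and Max is Min plus the entry size.

```c
#include <assert.h>

struct dim_params { int step, min, max; };

/* Second-dimension parameters for a latch instruction that includes
 * a store operation, per the rules stated above. */
static struct dim_params latch_store_dim(int entry_size, int trip_count) {
    struct dim_params p;
    p.step = -entry_size;                    /* additive inverse of entry size */
    p.min  = -entry_size * (trip_count - 1); /* inverse * (iterations - 1) */
    p.max  = p.min + entry_size;             /* Min + entry size */
    return p;
}
```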

[000440] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, based on a determination that the plurality of loops includes a third loop nested in the first loop, that the second loop is nested in the third loop, and that the latch instruction including the store operation is outside the third loop, e.g., as described below.

[000441] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to configure the third dimension of the AGU, for example, based on the third loop, for example, when the latch instruction including the store operation is outside the third loop, e.g., as described below.

[000442] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set a step parameter of the third dimension of the AGU, for example, based on the entry size corresponding to the latch instruction, for example, when the latch instruction including the store operation is outside the third loop, e.g., as described below.

[000443] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set a Min parameter of the third dimension of the AGU, for example, based on the entry size corresponding to the latch instruction, and a count of iterations in the third loop, for example, when the latch instruction including the store operation is outside the third loop, e.g., as described below.

[000444] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to set a Max parameter of the third dimension of the AGU, for example, based on the Min parameter of the third dimension of the AGU, and the entry size corresponding to the latch instruction, for example, when the latch instruction including the store operation is outside the third loop, e.g., as described below.

[000445] In some demonstrative aspects, compiler 160 may be configured to perform one or more operations, for example, according to a loop-compilation scheme, which may be configured to compile instructions for a loop, e.g., as described below.

[000446] In some demonstrative aspects, compiler 160 may be configured to identify load and/or store operations, and to analyze one or more attributes of a load/store operation, e.g., for each load and/or store operation, e.g., as described below.

[000447] In some demonstrative aspects, compiler 160 may be configured to analyze for a load/store operation, e.g., for each load and/or store operation, its innermost enclosing loop, its base parameter, its step parameter, e.g., stride, its offset, its bounds, e.g., Min/Max parameters, and/or its passthrough value.

[000448] In some demonstrative aspects, compiler 160 may be configured to partition the identified load and/or store operations, for example, into one or more groups, for example, according to their attributes, e.g., according to their stride, bounds, and/or passthrough values.

[000449] In some demonstrative aspects, compiler 160 may be configured to assign an AGU to a group of identified load/store operations, e.g., to each group of identified load/store operations.

[000450] In one example, each group of identified load/store operations may be implemented with a single AGU.

[000451] In some demonstrative aspects, compiler 160 may be configured to configure AGU configuration code for AGUs implementing load and/or store operations, which may be located outside an innermost loop.

[000452] For example, the AGU configuration code may set one or more special parameters, e.g., including a special base parameter, a special stride parameter, and/or special Min/Max parameters, for a load and/or store operation in an outer loop, e.g., as described below.

[000453] In some demonstrative aspects, compiler 160 may be configured to sink or hoist an outer load operation, which may be outside an inner loop.

[000454] In some demonstrative aspects, compiler 160 may be configured to generate AGU configuration code, for example, to configure an AGU based on the outer load operation.

[000455] In some demonstrative aspects, the AGU configuration code may configure a load operation based on the outer load operation, which may be performed at a start or an end of the inner loop.

[000456] In some demonstrative aspects, compiler 160 may be configured to generate the AGU configuration code, for example, to configure the AGU for execution of the load operation in the first iteration of the inner loop or in the last iteration of the inner loop.

[000457] In some demonstrative aspects, compiler 160 may be configured to set a base parameter of the AGU, for example, to a memory pointer of the outer load operation, e.g., similar to a usual setting of the base parameter.

[000458] In some demonstrative aspects, compiler 160 may be configured to set to zero the step parameter of the AGU for a dimension corresponding to an inner loop, denoted L, for example, for each inner loop, e.g., as follows:

For each inner loop L: StepL = 0

[000459] For example, the setting of StepL = 0 may be applied with respect to load operations.

[000460] In some demonstrative aspects, compiler 160 may be configured to set the minimum parameter of the AGU for a dimension corresponding to the inner loop L to zero, and to set the maximum parameter of the AGU for the dimension corresponding to the inner loop L, for example, based on an entry size of the outer load operation, e.g., as follows:

For each inner loop L: MinL = 0, MaxL = EntrySize

[000461] In some demonstrative aspects, compiler 160 may be configured, for example, to sink the load operation inp1[y * width + x] of the pre-header instruction of Example 4, for example, into the inner loop of Example 4.

[000462] In some demonstrative aspects, compiler 160 may be configured to identify an innermost enclosing loop of the outer load operation of Example 4.

[000463] For example, compiler 160 may identify the loop over x (X-loop) as the innermost enclosing loop for the load operation inp1[y * width + x], for example, while the loop over z (Z-loop) may be more inner than the loop of the load operation inp1[y * width + x].

[000464] In some demonstrative aspects, compiler 160 may be configured to identify an entry size of the memory pointer inp1. For example, the entry size of the memory pointer inp1 may be 1, for example, as the memory pointer inp1 may be configured to include a Character (Char).

[000465] In some demonstrative aspects, compiler 160 may be configured to transform the outer load operation inp1[y * width + x], for example, into a load instruction, e.g., char val = inp1[inp1_ind], for example, by configuring AGU configuration code for a dimension corresponding to the inner loop, e.g., the dimension corresponding to the Z-loop, of an AGU implementing the memory pointer inp1, e.g., as follows:

Base = inp1

Step(Z-Loop) = 0

Min(Z-Loop) = 0

Max(Z-Loop) = sizeof(char) = 1

[000466] In some demonstrative aspects, compiler 160 may be configured to sink an outer store operation, which may be outside of an inner loop, e.g., as described below.

[000467] In some demonstrative aspects, compiler 160 may be configured to generate AGU configuration code, for example, to configure an AGU based on the sinking of the outer store operation.

[000468] In some demonstrative aspects, the AGU configuration code may configure a store operation based on the sinking of the outer store operation, which may be executed at a start of the inner loop. For example, the store operation may be executed at the first iteration of the inner loop.

[000469] In some demonstrative aspects, compiler 160 may be configured to set the base parameter of the AGU, for example, to a memory pointer of the outer store operation, e.g., similar to a usual setting of the base parameter.

[000470] In some demonstrative aspects, compiler 160 may be configured to set the step parameter of the AGU for a dimension corresponding to an inner loop, denoted L, for example, for each inner loop, for example, based on an entry size of the outer store operation, e.g., as follows:

For each inner loop L: StepL = EntrySize

[000471] In some demonstrative aspects, compiler 160 may be configured to set the minimum parameter of the AGU for the dimension corresponding to the inner loop L, e.g., to zero, and to set the maximum parameter of the AGU for the dimension corresponding to the inner loop L, for example, based on an entry size of the outer store operation, e.g., as follows:

For each inner loop L: MinL = 0, MaxL = EntrySize

[000472] In some demonstrative aspects, compiler 160 may be configured to sink the outer store operation “out1[y * width + x] = ...” of the pre-header instruction of Example 4, for example, into the inner loop of Example 4.

[000473] In some demonstrative aspects, compiler 160 may be configured to identify an innermost enclosing loop of the outer store operation of Example 4.

[000474] For example, compiler 160 may identify the X-loop as the innermost enclosing loop for the store operation “out1[y * width + x] = ...”, for example, while the Z-loop may be more inner than the loop of the store operation “out1[y * width + x] = ...”.

[000475] In some demonstrative aspects, compiler 160 may be configured to identify an entry size of the memory pointer out1.

[000476] For example, the entry size of the memory pointer out1 may be 1, for example, as the memory pointer out1 may be configured to include a Char.

[000477] In some demonstrative aspects, compiler 160 may be configured to transform the outer store operation “out1[y * width + x] = ...”, for example, into a store instruction, e.g., “if (first_iteration_of_z_loop) out1[out1_ind] = result”, for example, by configuring AGU configuration code for a dimension corresponding to the inner loop, e.g., the dimension corresponding to the Z-loop, of an AGU implementing the memory pointer out1, e.g., as follows:

Base = out1

Step(Z-Loop) = sizeof(char) = 1

Min(Z-Loop) = 0

Max(Z-Loop) = sizeof(char) = 1

[000478] In some demonstrative aspects, compiler 160 may be configured to hoist an outer store operation, which may be outside of an inner loop, e.g., as described below.

[000479] In some demonstrative aspects, compiler 160 may be configured to generate AGU configuration code, for example, to configure an AGU based on the hoisting of the outer store operation.

[000480] In some demonstrative aspects, the AGU configuration code may configure, for example, a store operation based on hoisting of the outer store operation, which may be executed at an end of the inner loop. For example, the store operation may be executed at the last iteration of the inner loop.

[000481] In some demonstrative aspects, compiler 160 may be configured to set a base parameter of the AGU, for example, based on an entry size corresponding to the outer store operation, e.g., as follows:

Set Base = OrigBase + EntrySize * ([Σ TripCount(L)] - #InnerDims)

wherein OrigBase denotes the base of the outer store operation, wherein TripCount(L) denotes a number of iterations for the loop L, and wherein #InnerDims denotes a number (total count) of dimensions of the AGU, which correspond to loops that are more inner than the loop of the outer store operation. For example, the summation over loop L may be over all loops L, which are more inner than the loop of the outer store operation being hoisted.

[000482] In some demonstrative aspects, compiler 160 may be configured to set the step parameter of the AGU, e.g., for dimensions corresponding to the inner loops, for example, based on the entry size of the outer store operation, e.g., as follows:

Set inner loops Step = -EntrySize

[000483] In some demonstrative aspects, compiler 160 may be configured to set the minimum parameter of the AGU, e.g., for a dimension corresponding to an inner loop L, for example, for each inner loop, and to set the maximum parameter of the AGU for the dimension corresponding to the inner loop L, for example, based on the entry size of the outer store operation, e.g., as follows:

Set for inner loop L: MinL = -EntrySize * (TripCount(L) - 1)

MaxL = MinL + EntrySize

[000484] In some demonstrative aspects, compiler 160 may be configured to hoist the store operation “out2[y] = a” of the latch instruction of Example 4, for example, into the inner loop of Example 4.

[000485] In some demonstrative aspects, compiler 160 may be configured to identify an innermost enclosing loop of the outer store operation of Example 4.

[000486] For example, compiler 160 may identify the loop over y (Y-loop) as the innermost enclosing loop for the store operation “out2[y] = a”, for example, while the X-loop and the Z-loop may be more inner than the loop of the store operation “out2[y] = a”.

[000487] In some demonstrative aspects, compiler 160 may be configured to identify an entry size of the memory pointer out2.

[000488] For example, the entry size of the memory pointer out2 may be 1, for example, as the memory pointer out2 may be configured to include a Char.

[000489] In some demonstrative aspects, compiler 160 may be configured to identify a count of iterations (trip count) of the inner X-loop, and a count of iterations (trip count) of the inner Z-loop, e.g., as follows:

TripCount(X-Loop) = width, TripCount(Z-Loop) = area

[000490] In some demonstrative aspects, compiler 160 may be configured to transform the outer store operation “out2[y] = a”, for example, into a conditional store instruction, e.g., “if (last_iteration_of_x_and_z_loops) out2[out2_ind] = a”, for example, by configuring AGU configuration code for dimensions of the inner loops, e.g., the dimension corresponding to the X-loop and the dimension corresponding to the Z-loop, of an AGU implementing the memory pointer out2, e.g., as follows:

Base = out2 + (area + width - 2)

Step(Z-Loop) = -sizeof(char) = -1

Min(Z-Loop) = -1 * (area - 1)

Max(Z-Loop) = -1 * (area - 1) + 1

Step(X-Loop) = -sizeof(char) = -1

Min(X-Loop) = -1 * (width - 1)

Max(X-Loop) = -1 * (width - 1) + 1

Set Step, Min, Max of Y-Loop and TripCounts of all the loops, e.g., as usual:

Step(Y-Loop) = 1, Min(Y-Loop) = 0, Max(Y-Loop) = height * 1, TripCount(Y-Loop) = height, TripCount(X-Loop) = width, TripCount(Z-Loop) = area.

[000491] In some demonstrative aspects, compiler 160 may configure AGU configuration code, for example, to configure a plurality of AGUs based on code of Example 4, e.g., as follows:

agu1 = allocate_agu("load");
set_base(agu1, inp1);
set_y_minmax(agu1, 0, height * width);
set_y_count_stride(agu1, height, width);
set_x_minmax(agu1, 0, width);
set_x_count_stride(agu1, width, 1);
set_z_minmax(agu1, 0, 1);
set_z_count_stride(agu1, area, 0);

agu2 = allocate_agu("load");
set_base(agu2, inp2);
set_y_minmax(agu2, 0, height * width * area);
set_y_count_stride(agu2, height, width * area);
set_x_minmax(agu2, 0, width * area);
set_x_count_stride(agu2, width, area);
set_z_minmax(agu2, 0, area);
set_z_count_stride(agu2, area, 1);

agu3 = allocate_agu("store");
set_base(agu3, out1);
set_y_minmax(agu3, 0, height * width);
set_y_count_stride(agu3, height, width);
set_x_minmax(agu3, 0, width);
set_x_count_stride(agu3, width, 1);
set_z_minmax(agu3, 0, 1);
set_z_count_stride(agu3, area, 1);

agu4 = allocate_agu("store");
set_base(agu4, out2 + (area + width - 2));
set_y_minmax(agu4, 0, height);
set_y_count_stride(agu4, height, 1);
set_x_minmax(agu4, 1 - width, 2 - width);
set_x_count_stride(agu4, width, -1);
set_z_minmax(agu4, 1 - area, 2 - area);
set_z_count_stride(agu4, area, -1);

Example (6a)

[000492] In some demonstrative aspects, compiler 160 may configure loop code to execute the loop of Example 4, for example, based on the AGU configuration code of Example 6a, e.g., as follows:

Loop:
  val = agu1.load() + 7;
  agu3.store(val);
  a = agu2.load();
  agu4.store(a);
  br(Loop);

Example (6b)

[000493] In some demonstrative aspects, as shown by Example 6a, compiler 160 may assign a first AGU, e.g., agu1, to perform the load operation “val = agu1.load()”, for example, based on the outer load operation “inp1[y * width + x]” of Example 4.

[000494] In some demonstrative aspects, as shown by Example 6a, compiler 160 may assign a second AGU, e.g., agu2, to perform the load operation “a = agu2.load()”, for example, based on the inner load operation “a = inp2[y * width * area + x * area + z]” of Example 4.

[000495] In some demonstrative aspects, as shown by Example 6a, compiler 160 may assign a third AGU, e.g., agu3, to perform the store operation “agu3.store(val);”, for example, based on the outer store operation “out1[y * width + x] = ...;” of Example 4.

[000496] In some demonstrative aspects, as shown by Example 6a, compiler 160 may assign a fourth AGU, e.g., agu4, to perform a store operation “agu4.store(a)”, for example, based on the outer store operation “out2[y] = a” of Example 4.

[000497] In some demonstrative aspects, as shown by Example 6a, compiler 160 may configure AGU configuration code to configure agu1.

[000498] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu1 may be configured to set a base parameter of the first AGU to the memory pointer inp1.

[000499] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu1 may be configured to set a Min parameter for the dimension z of the first AGU to zero, and to set the Max parameter for the dimension z of the first AGU to 1.

[000500] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu1 may be configured to set a count of iterations for the dimension z of the first AGU to the value area, and to set the stride (step) for the dimension z of the first AGU to zero.

[000501] For example, these settings for the dimension z of the first AGU may configure the load operation val = agu1.load() to be performed, for example, at the start of the inner loop z.

[000502] In some demonstrative aspects, as shown by Example 6a, compiler 160 may configure AGU configuration code to configure agu3.

[000503] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu3 may be configured to set a base parameter of the third AGU to the memory pointer out1.

[000504] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu3 may be configured to set a Min parameter for the dimension z of the third AGU to zero, and to set the Max parameter for the dimension z of the third AGU to 1.

[000505] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu3 may be configured to set a count of iterations for the dimension z of the third AGU to the value area, and to set the stride (step) for the dimension z of the third AGU to 1.

[000506] For example, these settings for the dimension z of the third AGU may configure the store operation agu3.store(val) to be performed, for example, at the start of the inner loop z.

[000507] In some demonstrative aspects, as shown by Example 6a, compiler 160 may configure AGU configuration code to configure agu4.

[000508] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu4 may be configured to set a base parameter of the fourth AGU based, for example, on the memory pointer out2 and a count of iterations of the inner loops, e.g., the value area and the value width, e.g., as follows:

Set Base(agu4) = OrigBase + EntrySize * ([Σ TripCount(L)] - #InnerDims) = out2 + 1 * ([TripCount(Z-loop) + TripCount(X-loop)] - 2) = out2 + 1 * ([area + width] - 2) = out2 + area + width - 2.

[000509] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu4 may be configured to set a Min parameter for the dimension x of the fourth AGU, e.g., as follows:

Min x = -EntrySize * (TripCount(X-loop) - 1) = (-1) * (width - 1) = 1 - width

[000510] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu4 may be configured to set a Max parameter for the dimension x of the fourth AGU, e.g., as follows:

Max x = Min x + EntrySize = 1 - width + 1 = 2 - width

[000511] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu4 may be configured to set a Stride (Step) parameter for the dimension x of the fourth AGU, e.g., as follows:

Step x = -EntrySize = -1

[000512] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu4 may be configured to set a Count parameter for the dimension x of the fourth AGU to width, e.g., according to the count of iterations of the X-loop.

[000513] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu4 may be configured to set a Min parameter for the dimension z of the fourth AGU, e.g., as follows:

Min z = -EntrySize * (TripCount(Z-loop) - 1) = (-1) * (area - 1) = 1 - area

[000514] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu4 may be configured to set a Max parameter for the dimension z of the fourth AGU, e.g., as follows:

Max z = Min z + EntrySize = 1 - area + 1 = 2 - area

[000515] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu4 may be configured to set a Stride (Step) parameter for the dimension z of the fourth AGU, e.g., as follows:

Step z = -EntrySize = -1

[000516] In some demonstrative aspects, as shown by Example 6a, the AGU configuration code to configure agu4 may be configured to set a Count parameter for the dimension z of the fourth AGU to area, e.g., according to the count of iterations of the Z-loop.

[000517] In one example, compiler 160 may process source code 112, which may be based on Example 4, for example, with a setting of width=3, and a setting of area=5, e.g., as follows:

for (int y = 0; y < height; y++) {
  char c = 0;
  for (int x = 0; x < 3; x++) {
    for (int z = 0; z < 5; z++) {
      c |= inp[y * width * area + x * area + z];
    }
  }
  out[y] = c;
}

Example (7)

[000518] In some demonstrative aspects, as shown by Example 7, an outer-most loop (Y-loop) may include an outer store instruction, e.g., out[y] = c, which may be after a nested loop (X-loop) and an inner loop (Z-loop).

[000519] In some demonstrative aspects, the outer store instruction may be executed after 3 iterations of the X-loop, for example, wherein each iteration of the X-loop includes 5 iterations of the Z-loop. For example, the outer store instruction may be executed after a total of 15 iterations, e.g., 3*5=15.

[000520] In some demonstrative aspects, compiler 160 may be configured to generate AGU configuration code, for example, to configure an AGU based on the outer store instruction of Example 7.

[000521] In some demonstrative aspects, the AGU configuration code may set parameters, for example, for the x dimension and the z dimension of the AGU, e.g., including the base parameter, the total count of iterations, the step parameter, the Min parameter, and the Max parameter, for example, based on the outer store instruction out[y] = c, e.g., as described above.

[000522] Reference is made to Fig. 4, which schematically illustrates an execution scheme 400 to execute a latch store operation in a loop nest, in accordance with some demonstrative aspects.

[000523] In some demonstrative aspects, execution scheme 400 may demonstrate the setting of the AGU to configure execution of the outer store operation out[y] = c of Example 7.

[000524] In some demonstrative aspects, as shown in Fig. 4, execution scheme 400 may include three execution steps, for example, corresponding to the three iterations of the X-loop of Example 7.

[000525] In some demonstrative aspects, as shown in Fig. 4, execution scheme 400 may include a first execution step 410 corresponding to a first iteration of the X-loop, e.g., x=0.

[000526] In some demonstrative aspects, as shown in Fig. 4, execution scheme 400 may include a second execution step 420 corresponding to a second iteration of the X-loop, e.g., x=1.

[000527] In some demonstrative aspects, as shown in Fig. 4, execution scheme 400 may include a third execution step 430 corresponding to a third iteration of the X-loop, e.g., x=2.

[000528] In some demonstrative aspects, as shown in Fig. 4, the Z-loop may perform 5 iterations, for example, in each execution step of the execution scheme 400.

[000529] In some demonstrative aspects, as shown in Fig. 4, the AGU configuration code may be configured based on Example 7, for example, to set the Min parameter to zero and the Max parameter to one, e.g., for each of the x dimension and the z dimension of the AGU.

[000530] In some demonstrative aspects, as shown in Fig. 4, the AGU configuration code may be configured based on Example 7, for example, to set the base parameter of the AGU to a memory pointer 6.

[000531] In some demonstrative aspects, as shown in Fig. 4, during the first execution step 410, a first iteration of the Z-loop may begin at a memory pointer 7 and a last iteration of the Z-loop may be at a memory pointer 2. Accordingly, the store operation, which may be bounded by the Min parameter of zero and the Max parameter of one, may not be executed.

[000532] In some demonstrative aspects, as shown in Fig. 4, during the second execution step 420, a first iteration of the Z-loop may begin at memory pointer 6 and the last iteration of the Z-loop may be at memory pointer 1. Accordingly, the store operation, which may be bounded by the Min parameter of zero and the Max parameter of one, may not be executed.

[000533] In some demonstrative aspects, as shown in Fig. 4, during the third execution step 430, a first iteration of the Z-loop may begin at memory pointer 5 and the last iteration of the Z-loop may be at memory pointer 0. Accordingly, the store operation, which may be bounded by the Min parameter of zero and the Max parameter of one, may be executed, for example, only at the last iteration of the Z-loop in the last iteration of the X-loop.

[000534] Reference is made to Fig. 5, which schematically illustrates an execution scheme 500 to execute a pre-header load or store operation in a loop nest, in accordance with some demonstrative aspects.

[000535] In some demonstrative aspects, execution scheme 500 may demonstrate the execution of the pre-header load or store instruction, for example, according to the loop execution scheme.

[000536] In some demonstrative aspects, the pre-header load or store instruction may be within an outer loop, which may be outer to an inner loop.

[000537] In some demonstrative aspects, as shown in Fig. 5, the AGU configuration code corresponding to an AGU to perform the pre-header load or store operation may set, for example, the Min parameter to zero and the Max parameter to one, for example, for a dimension of the AGU corresponding to the pre-header load or store instruction.

[000538] In some demonstrative aspects, as shown in Fig. 5, the AGU configuration code corresponding to the AGU to perform the pre-header load or store operation may set the base parameter of the AGU, for example, to the memory pointer of the outer load or store pre-header instruction.

[000539] In some demonstrative aspects, as shown in Fig. 5, the AGU configuration code may be configured to provide a technical solution to ensure that the pre-header load or store operation is to be executed only once at a beginning of the inner loop.

[000540] For example, the setting of the Min parameter and the Max parameter, which may bound the execution of the load or store operation only to the first iteration of the inner loop, may ensure that the pre-header load or store operation is to be executed only once at the beginning of the inner loop, e.g., as described above.

[000541] Reference is made to Fig. 6, which schematically illustrates a method of compiling code for a processor. For example, one or more operations of the method of Fig. 6 may be performed by a system, e.g., system 100 (Fig. 1); a device, e.g., device 102 (Fig. 1); a server, e.g., server 170 (Fig. 1); and/or a compiler, e.g., compiler 160 (Fig. 1), and/or compiler 200 (Fig. 2).

[000542] In some demonstrative aspects, as indicated at block 602, the method may include identifying a loop nest based on a source code to be compiled into a target code to be executed by a target processor. For example, the loop nest may include a plurality of loops, the plurality of loops including at least a first loop and a second loop nested in the first loop. For example, the first loop may include at least one first-loop instruction, which is outside the second loop. For example, compiler 160 (Fig. 1) may be configured to identify the loop nest, for example, based on the source code 112 (Fig. 1), e.g., as described above.

[000543] In some demonstrative aspects, as indicated at block 604, the method may include generating AGU configuration code to configure an AGU of the target processor based on the first-loop instruction. For example, the AGU configuration code may configure a first dimension of the AGU based on the first loop, and a second dimension of the AGU based on the second loop. For example, the AGU configuration code may configure the second dimension of the AGU to configure a memory-access operation to be performed at a start of the second loop or at an end of the second loop. For example, the memory-access operation may be based on the first-loop instruction. For example, compiler 160 (Fig. 1) may be configured to generate the AGU configuration code to configure the AGU of the target processor 180 (Fig. 1) based on the first-loop instruction, e.g., as described above.

[000544] In some demonstrative aspects, as indicated at block 606, the method may include generating the target code based on compilation of the source code. For example, the target code may be based on the AGU configuration code. For example, compiler 160 (Fig. 1) may be configured to generate target code 115 (Fig. 1), which is based on the AGU configuration code, for example, based on compilation of the source code 112 (Fig. 1), e.g., as described above.

[000545] Reference is made to Fig. 7, which schematically illustrates a product of manufacture 700, in accordance with some demonstrative aspects. Product 700 may include one or more tangible computer-readable ("machine-readable") non-transitory storage media 702, which may include computer-executable instructions, e.g., implemented by logic 704, operable to, when executed by at least one computer processor, enable the at least one computer processor to implement one or more operations at device 102 (Fig. 1), server 170 (Fig. 1), and/or compiler 160 (Fig. 1), to cause device 102 (Fig. 1), server 170 (Fig. 1), and/or compiler 160 (Fig. 1) to perform, trigger and/or implement one or more operations and/or functionalities, and/or to perform, trigger and/or implement one or more operations and/or functionalities described with reference to the Figs. 1-6, and/or one or more operations described herein. The phrases "non-transitory machine-readable medium" and "computer-readable non-transitory storage media" may be directed to include all computer-readable media, with the sole exception being a transitory propagating signal.

[000546] In some demonstrative aspects, product 700 and/or machine-readable storage media 702 may include one or more types of computer-readable storage media capable of storing data, including volatile memory, non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and the like. For example, machine-readable storage media 702 may include RAM, DRAM, Double-Data-Rate DRAM (DDR-DRAM), SDRAM, static RAM (SRAM), ROM, programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory (e.g., NOR or NAND flash memory), content addressable memory (CAM), polymer memory, phase-change memory, ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, a disk, a hard drive, and the like. The computer-readable storage media may include any suitable media involved with downloading or transferring a computer program from a remote computer to a requesting computer carried by data signals embodied in a carrier wave or other propagation medium through a communication link, e.g., a modem, radio or network connection.

[000547] In some demonstrative aspects, logic 704 may include instructions, data, and/or code, which, if executed by a machine, may cause the machine to perform a method, process and/or operations as described herein. The machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware, software, firmware, and the like.

[000548] In some demonstrative aspects, logic 704 may include, or may be implemented as, software, a software module, an application, a program, a subroutine, instructions, an instruction set, computing code, words, values, symbols, and the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a processor to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language, machine code, and the like.

EXAMPLES

[000549] The following examples pertain to further aspects.

[000550] Example 1 includes a product comprising one or more tangible computer-readable non-transitory storage media comprising computer-executable instructions operable to, when executed by at least one processor, enable the at least one processor to cause a compiler to identify a loop nest based on a source code to be compiled into a target code to be executed by a target processor, the loop nest comprising a plurality of loops, the plurality of loops comprising at least a first loop and a second loop nested in the first loop, wherein the first loop comprises at least one first-loop instruction, which is outside the second loop; generate Address Generation Unit (AGU) configuration code to configure an AGU of the target processor based on the first-loop instruction, wherein the AGU configuration code is to configure a first dimension of the AGU based on the first loop, and to configure a second dimension of the AGU based on the second loop, wherein the AGU configuration code is to configure the second dimension of the AGU to configure a memory-access operation to be performed at a start of the second loop or at an end of the second loop, wherein the memory-access operation is based on the first-loop instruction; and generate the target code based on compilation of the source code, wherein the target code is based on the AGU configuration code.
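As a non-limiting, hypothetical sketch of the kind of loop nest described in Example 1 (all identifiers are illustrative and do not appear in the disclosure), the first-loop instructions are the memory accesses that sit in the first loop but outside the second loop:

```python
# Hypothetical loop nest illustrating Example 1.
# The read of acc[i] before the inner loop and the write of acc[i]
# after it are "first-loop instructions" outside the second loop.
def loop_nest(acc, data, n, m):
    for i in range(n):          # first loop
        s = acc[i]              # pre-header first-loop instruction (load)
        for j in range(m):      # second loop, nested in the first loop
            s += data[i][j]     # inner-loop compute operation
        acc[i] = s              # latch first-loop instruction (store)
    return acc
```

In such a nest, the load of `acc[i]` would be performed at the start of the second loop and the store of `acc[i]` at its end, which is the behavior the AGU configuration code of Example 1 is to reproduce.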

[000551] Example 2 includes the subject matter of Example 1, and optionally, wherein the plurality of loops comprises a third loop nested in the first loop, the second loop is nested in the third loop, the first-loop instruction is outside the third loop, wherein the AGU configuration code is to configure a third dimension of the AGU based on the third loop, wherein the AGU configuration code is to configure the third dimension to configure the memory-access operation to be performed at the start of the second loop or at the end of the second loop.

[000552] Example 3 includes the subject matter of Example 2, and optionally, wherein the third loop comprises a third-loop instruction, which is outside the second loop, wherein the AGU configuration code is to configure an other AGU of the target processor based on the third-loop instruction, wherein the AGU configuration code is to configure a first dimension of the other AGU based on the third loop, and to configure a second dimension of the other AGU based on the second loop, wherein the AGU configuration code is to configure the second dimension of the other AGU to configure an other memory-access operation to be performed at the start of the second loop or at the end of the second loop, wherein the other memory-access operation is based on the third-loop instruction.

[000553] Example 4 includes the subject matter of Example 3, and optionally, wherein the instructions, when executed, cause the compiler to transform the loop nest into a transformed loop comprising the memory-access operation and the other memory-access operation, wherein the target code is based on the transformed loop.

[000554] Example 5 includes the subject matter of any one of Examples 2-4, and optionally, wherein the AGU configuration code is to set a Maximum (Max) parameter of the second dimension of the AGU and a Max parameter of the third dimension of the AGU based on an entry size corresponding to the first-loop instruction.

[000555] Example 6 includes the subject matter of any one of Examples 1-5, and optionally, wherein the AGU configuration code is to set a base parameter of the AGU based on a memory pointer of the first-loop instruction, and to set a Maximum (Max) parameter of the second dimension of the AGU based on an entry size corresponding to the first-loop instruction.

[000556] Example 7 includes the subject matter of any one of Examples 1-6, and optionally, wherein the at least one first-loop instruction comprises a pre-header instruction to be performed before a first iteration of the second loop, wherein the AGU configuration code is to configure the second dimension of the AGU to configure the memory-access operation to be performed only at the start of the second loop.

[000557] Example 8 includes the subject matter of Example 7, and optionally, wherein the AGU configuration code is to set a Minimum (Min) parameter of the second dimension of the AGU to zero.

[000558] Example 9 includes the subject matter of Example 7 or 8, and optionally, wherein the instructions, when executed, cause the compiler to, based on a determination that the pre-header instruction comprises a load operation, configure the AGU configuration code to set a step parameter of the second dimension of the AGU to zero.

[000559] Example 10 includes the subject matter of any one of Examples 7-9, and optionally, wherein the instructions, when executed, cause the compiler to, based on a determination that the pre-header instruction comprises a store operation, configure the AGU configuration code to set a step parameter of the second dimension of the AGU based on an entry size corresponding to the pre-header instruction.

[000560] Example 11 includes the subject matter of any one of Examples 7-10, and optionally, wherein the AGU configuration code is to set a base parameter of the AGU to a memory pointer of the pre-header instruction.
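The pre-header settings of Examples 7-11 may be sketched, purely as a non-limiting illustration (the helper name, signature, and return convention are hypothetical), as a mapping from the pre-header instruction's properties to the AGU's base and second-dimension parameters:

```python
# Hypothetical sketch of Examples 7-11: AGU settings for a
# pre-header instruction performed before the second loop.
def preheader_config(base_ptr, entry_size, is_store):
    """Return (base, dim2_min, dim2_step) for the AGU."""
    base = base_ptr             # Example 11: base = pre-header memory pointer
    dim2_min = 0                # Example 8: Min parameter set to zero
    if is_store:
        dim2_step = entry_size  # Example 10: store -> step = entry size
    else:
        dim2_step = 0           # Example 9: load -> step = zero
    return base, dim2_min, dim2_step
```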

[000561] Example 12 includes the subject matter of any one of Examples 1-11, and optionally, wherein the at least one first-loop instruction comprises a latch instruction to be performed after a last iteration of the second loop, wherein the AGU configuration code is to configure the second dimension of the AGU to configure the memory-access operation to be performed only at the end of the second loop.

[000562] Example 13 includes the subject matter of Example 12, and optionally, wherein the latch instruction comprises a load operation.

[000563] Example 14 includes the subject matter of Example 13, and optionally, wherein the AGU configuration code is to set a base parameter of the AGU to a memory pointer of the latch instruction, to set a Minimum (Min) parameter of the second dimension of the AGU to zero, to set a Maximum (Max) parameter of the second dimension of the AGU to an entry size corresponding to the latch instruction, and to set a step parameter of the second dimension of the AGU to zero.
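The latch-load settings of Example 14 may similarly be sketched as follows (a non-limiting, hypothetical helper; all names are illustrative):

```python
# Hypothetical sketch of Example 14: AGU settings for a latch load
# performed after the last iteration of the second loop.
def latch_load_config(base_ptr, entry_size):
    """Return (base, dim2_min, dim2_max, dim2_step) for the AGU."""
    return (base_ptr,    # base = memory pointer of the latch instruction
            0,           # Min of the second dimension set to zero
            entry_size,  # Max set to the entry size
            0)           # step of the second dimension set to zero
```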

[000564] Example 15 includes the subject matter of Example 12, and optionally, wherein the latch instruction comprises a store operation.

[000565] Example 16 includes the subject matter of Example 15, and optionally, wherein the AGU configuration code is to set a base parameter of the AGU based on a first parameter value, a second parameter value and a third parameter value, wherein the first parameter value comprises an entry size corresponding to the latch instruction, the second parameter value comprises a total count of iterations over one or more loops, which are in the first loop and include the second loop, the third parameter value comprising a count of dimensions of the AGU corresponding to the one or more loops.

[000566] Example 17 includes the subject matter of Example 16, and optionally, wherein the AGU configuration code is to set the base parameter, denoted Base, of the AGU as follows:

Base = OrigBase + EntrySize * ([Σ TripCount(L)] - #InnerDims), wherein OrigBase denotes a memory pointer of the latch instruction, EntrySize denotes the entry size, [Σ TripCount(L)] denotes the total count of iterations over the one or more loops, and #InnerDims denotes the count of dimensions of the AGU corresponding to the one or more loops.
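As a non-limiting numeric sketch of the base-parameter computation of Example 17, i.e., Base = OrigBase + EntrySize * (total trip count - count of inner dimensions), the following hypothetical helper (all names and values are illustrative) evaluates the formula for given trip counts:

```python
# Hypothetical worked example of the Base formula of Example 17.
def latch_store_base(orig_base, entry_size, trip_counts):
    """trip_counts: one trip count per inner loop (one AGU dimension each)."""
    inner_dims = len(trip_counts)                 # #InnerDims
    total_trips = sum(trip_counts)                # Σ TripCount(L)
    return orig_base + entry_size * (total_trips - inner_dims)
```

For instance, with an illustrative pointer of 1000, a 4-byte entry size, and a single inner loop of 8 iterations, the base would be 1000 + 4 * (8 - 1) = 1028.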

[000567] Example 18 includes the subject matter of any one of Examples 15-17, and optionally, wherein the AGU configuration code is to set a step parameter of the second dimension of the AGU based on an entry size corresponding to the latch instruction; to set a Minimum (Min) parameter of the second dimension of the AGU based on the entry size and a count of iterations in the second loop; and to set a Maximum (Max) parameter of the second dimension of the AGU based on the Min parameter of the second dimension of the AGU and the entry size.

[000568] Example 19 includes the subject matter of Example 18, and optionally, wherein the AGU configuration code is to set the step parameter of the second dimension of the AGU based on an additive inverse of the entry size.

[000569] Example 20 includes the subject matter of Example 18 or 19, and optionally, wherein the AGU configuration code is to set the Min parameter of the second dimension of the AGU based on a product of an additive inverse of the entry size and a subtraction result of subtracting one from the count of iterations in the second loop.

[000570] Example 21 includes the subject matter of any one of Examples 18-20, and optionally, wherein the AGU configuration code is to set the Max parameter of the second dimension of the AGU based on a sum of the Min parameter of the second dimension of the AGU and the entry size.
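The second-dimension settings of Examples 18-21 for a latch store may be sketched as follows (a non-limiting, hypothetical helper; names are illustrative):

```python
# Hypothetical sketch of Examples 18-21: second-dimension AGU
# settings for a latch store after the second loop.
def latch_store_dim2(entry_size, trip_count):
    """Return (step, dim2_min, dim2_max) for the AGU's second dimension."""
    step = -entry_size                        # Example 19: additive inverse
    dim2_min = -entry_size * (trip_count - 1) # Example 20: -EntrySize*(N-1)
    dim2_max = dim2_min + entry_size          # Example 21: Min + EntrySize
    return step, dim2_min, dim2_max
```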

[000571] Example 22 includes the subject matter of any one of Examples 16-21, and optionally, wherein the plurality of loops comprises a third loop nested in the first loop, the second loop is nested in the third loop, the latch instruction is outside the third loop, wherein the AGU configuration code is to configure a third dimension of the AGU based on the third loop.

[000572] Example 23 includes the subject matter of Example 22, and optionally, wherein the AGU configuration code is to set a step parameter of the third dimension of the AGU based on the entry size, to set a Minimum (Min) parameter of the third dimension of the AGU based on the entry size and a count of iterations in the third loop, and to set a Maximum (Max) parameter of the third dimension of the AGU based on the Min parameter of the third dimension of the AGU and the entry size.

[000573] Example 24 includes the subject matter of any one of Examples 1-23, and optionally, wherein the instructions, when executed, cause the compiler to transform the loop nest into a transformed loop comprising the memory-access operation, wherein the target code is based on the transformed loop.

[000574] Example 25 includes the subject matter of Example 24, and optionally, wherein the transformed loop comprises a perfect flat loop, in which all compute operations of the loop nest are implemented in the transformed loop.

[000575] Example 26 includes the subject matter of Example 24 or 25, and optionally, wherein the transformed loop comprises a fully collapsed loop comprising only a single-basic-block loop based on the plurality of loops.
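As a non-limiting illustration of the fully collapsed, single-basic-block loop of Examples 24-26 (hypothetical source-level sketch; the actual transformation operates on compiled code):

```python
# Hypothetical sketch of collapsing a two-deep loop nest into a
# single flat loop over n*m iterations (Examples 24-26).
def collapsed(data, n, m):
    out = []
    for k in range(n * m):   # single-basic-block, fully collapsed loop
        i, j = divmod(k, m)  # recover the original loop indices
        out.append(data[i][j])
    return out
```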

[000576] Example 27 includes the subject matter of any one of Examples 1-26, and optionally, wherein the memory-access operation comprises a load operation or a store operation.

[000577] Example 28 includes the subject matter of any one of Examples 1-27, and optionally, wherein the source code comprises Open Computing Language (OpenCL) code.

[000578] Example 29 includes the subject matter of any one of Examples 1-28, and optionally, wherein the computer-executable instructions, when executed, cause the compiler to compile the source code into the target code according to a Low Level Virtual Machine (LLVM) based (LLVM-based) compilation scheme.

[000579] Example 30 includes the subject matter of any one of Examples 1-29, and optionally, wherein the target code is configured for execution by a Very Long Instruction Word (VLIW) Single Instruction/Multiple Data (SIMD) target processor.

[000580] Example 31 includes the subject matter of any one of Examples 1-30, and optionally, wherein the target code is configured for execution by a target vector processor.

[000581] Example 32 includes a compiler configured to perform any of the described operations of any of Examples 1-31.

[000582] Example 33 includes a computing device configured to perform any of the described operations of any of Examples 1-31.

[000583] Example 34 includes a computing system comprising at least one memory to store instructions; and at least one processor to retrieve instructions from the memory and execute the instructions to cause the computing system to perform any of the described operations of any of Examples 1-31.

[000584] Example 35 includes a computing system comprising a compiler to generate target code according to any of the described operations of any of Examples 1-31, and a processor to execute the target code.

[000585] Example 36 comprises an apparatus comprising means for executing any of the described operations of any of Examples 1-31.

[000586] Example 37 comprises an apparatus comprising: a memory interface; and processing circuitry configured to: perform any of the described operations of any of Examples 1-31.

[000587] Example 38 comprises a method comprising any of the described operations of any of Examples 1-31.

[000588] Functions, operations, components and/or features described herein with reference to one or more aspects, may be combined with, or may be utilized in combination with, one or more other functions, operations, components and/or features described herein with reference to one or more other aspects, or vice versa.

[000589] While certain features have been illustrated and described herein, many modifications, substitutions, changes, and equivalents may occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the disclosure.