

Title:
RAM BASED IMPLEMENTATION FOR SCALABLE, RELIABLE HIGH SPEED EVENT COUNTERS
Document Type and Number:
WIPO Patent Application WO/2010/012633
Kind Code:
A2
Abstract:
There is broadly contemplated herein an arrangement whereby each event source feeds a small dedicated "pre-counter" while an actual count is kept in a 64-bit wide RAM. Such an implementation preferably may involve a state machine that simply sweeps through the pre-counters, in a predetermined fixed order. Preferably, the state machine will access each pre-counter, add the value from the pre-counter to a corresponding RAM location, and then clear the pre-counter. Accordingly, the pre-counters merely have to be wide enough such that even at a maximal event rate, the pre-counter will not be able to wrap (i.e., reach capacity or overflow) before the "sweeper" state machine accesses the pre-counter.

Inventors:
BENDER CARL ALFRED (US)
HOCHSCHILD PETER HEINER (US)
MISRA ASHUTOSH (IN)
SWETZ RICHARD (US)
Application Number:
PCT/EP2009/059395
Publication Date:
February 04, 2010
Filing Date:
July 22, 2009
Assignee:
IBM (US)
IBM UK (GB)
BENDER CARL ALFRED (US)
HOCHSCHILD PETER HEINER (US)
MISRA ASHUTOSH (IN)
SWETZ RICHARD (US)
Foreign References:
EP0617529A2 (1994-09-28)
Other References:
None
Attorney, Agent or Firm:
ROBERTS, Scott (Intellectual Property Law, Hursley Park, Winchester, Hampshire SO21 2JN, GB)
Claims:
CLAIMS

1. A system comprising: a main memory; said main memory comprising RAM; a plurality of preliminary counters each fed by a corresponding event source; and a sweeper which accesses preliminary counters and feeds values from preliminary counters to said RAM; said RAM acting to accumulate counting data relating to the event sources.

2. The system according to Claim 1, wherein said sweeper is operable to access said preliminary counters in a predetermined order.

3. The system according to any of claims 1 or 2, wherein said sweeper is operable to access each of 2^M preliminary counters in sequential order in a cycle of length 2^M clock cycles.

4. The system according to Claim 3, wherein said sweeper comprises a free-running M-bit counter which facilitates sequential access of said sweeper to said preliminary counters.

5. The system according to any of the above claims, wherein each preliminary counter has a width of M+1 bits.

6. The system according to any of the above claims, wherein said sweeper comprises a state machine.

7. The system according to any of the above claims, wherein said sweeper is further operable to clear each preliminary counter.

8. The system according to any of the above claims, wherein said sweeper is operable to add an output value from a preliminary counter to a corresponding RAM location, whereby a previous count value is incremented at said corresponding RAM location.

9. The system according to any of the above claims, wherein said RAM comprises a read port and a write port.

10. A method comprising: providing a plurality of preliminary counters each fed by a corresponding event source; accessing the preliminary counters; feeding values from the preliminary counters to a RAM; and accumulating at the RAM counting data relating to the event sources.

11. The method according to Claim 10, wherein said accessing comprises accessing the preliminary counters in a predetermined order.

12. The method according to any of claims 10 to 11, wherein said accessing in a predetermined order comprises accessing each of 2^M preliminary counters in sequential order in a cycle of length 2^M clock cycles.

13. The method according to Claim 12, wherein said accessing in sequential order comprises employing a free-running M-bit counter.

14. The method according to any of claims 10 to 13, wherein each preliminary counter has a width of M+1 bits.

15. The method according to any of claims 10 to 14, further comprising clearing a preliminary counter subsequent to feeding a value from a preliminary counter to the RAM.

16. The method according to any of claims 10 to 15, wherein said feeding comprises adding an output value from a preliminary counter to a corresponding RAM location, whereby a previous count value is incremented at the corresponding RAM location.

17. The method according to any of claims 10 to 16, wherein the RAM comprises a read port and a write port.

18. A program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform a method comprising: providing a plurality of preliminary counters each fed by a corresponding event source; accessing the preliminary counters; feeding values from the preliminary counters to a RAM; and accumulating at the RAM counting data relating to the event sources.

19. The program storage device according to Claim 18, wherein said accessing comprises accessing the preliminary counters in a predetermined order.

20. The program storage device according to any of claims 18 to 19, wherein said feeding comprises adding an output value from a preliminary counter to a corresponding RAM location, whereby a previous count value is incremented at the corresponding RAM location.

21. A computer program comprising computer-implementable instructions for carrying out the steps of a method according to any of claims 10 to 17.

Description:
RAM BASED IMPLEMENTATION FOR SCALABLE, RELIABLE HIGH SPEED EVENT COUNTERS

Field of the Invention

The present invention relates generally to computer systems and to event counters that are employed in connection therewith.

Background of the Invention

As known in the computing arts, an event counter is a unit that can count occurrences of a certain condition. Hardware event counters are used for statistical counts, for debugging during hardware bring-up, and for performance characterization. The insight attained from performance monitors is used to resolve performance bottlenecks and to fine-tune configurable parameters in the system. Event counters are also used to trigger actions; for example, when an event counter reaches a certain threshold, it can be used to generate an interrupt. In these and other applications of event counters, there is a strong need for precise counts.

A conventional implementation of a 64-bit counter involves the use of 64 registers as storage elements and of an adder to increment the count. Hundreds of 64-bit counters can occupy a significant amount of silicon area on ASICs (application-specific integrated circuits). In the case of an FPGA (field-programmable gate array) based implementation, where there is a limited number of logic building blocks (e.g., Look-Up Tables, or LUTs), the conventional manner of implementing hundreds of counters will consume a very significant number of LUTs.
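
To make the scale of the problem concrete, the register cost of the conventional scheme is simple arithmetic (an illustrative sketch; the function name is invented here, and adder and multiplexer logic is not counted):

```python
# Back-of-the-envelope register count for the conventional scheme:
# each 64-bit counter needs 64 storage registers, so a bank of
# counters needs width * count registers before any adder or
# read-out multiplexing logic is included. Figures are illustrative.

def conventional_registers(num_counters, width_bits=64):
    """Registers consumed by register-based counters of the given width."""
    return num_counters * width_bits

# 128 conventional 64-bit counters already tie up 8192 registers.
registers_needed = conventional_registers(128)
```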

Since ASICs and FPGAs provide a dense Random Access Memory (RAM) core, this has motivated a desire to substitute RAM for registers. Some attempts have been made to address this problem by implementing a plurality of counters in RAM (e.g., "RAM based events counter apparatus and method" (U.S. Patent No. 5,089,957) and "System for gathering data representing the number of event occurrences" (U.S. Patent No. 4,206,346)). However, these prior attempts have presented potentially lossy schemes that do not take into account the frequency of the counted events. Accordingly, a compelling need has been recognized in connection with improving upon such shortcomings and disadvantages.

Summary of the Invention

The present invention provides an arrangement whereby each event source feeds a small dedicated "pre-counter" while an actual count is kept in a 64-bit wide RAM.

Preferably, such an implementation may involve a state machine that simply sweeps through the pre-counters, in a predetermined fixed order. Preferably, the state machine will access each pre-counter, add the value from the pre-counter to a corresponding RAM location, and then clear the pre-counter. Accordingly, the pre-counters merely have to be wide enough such that even at a maximal event rate, the pre-counter will not be able to wrap (i.e., reach capacity or overflow) before the "sweeper" state machine accesses the pre-counter.
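
The sweep described above can be modelled in a few lines of software (a hypothetical Python simulation for illustration only; the actual design is a hardware state machine, and names such as `record_event` and `sweep_step` are invented here):

```python
# Illustrative software model of the sweeper; not the hardware itself.

M = 3                       # 2^M = 8 event sources / pre-counters
NUM = 1 << M
pre_counters = [0] * NUM    # small (M+1)-bit pre-counters
ram = [0] * NUM             # wide (e.g. 64-bit) main counts kept in RAM

def record_event(source):
    """An event source increments only its small pre-counter."""
    pre_counters[source] += 1

def sweep_step(cycle):
    """Each clock cycle the sweeper services one pre-counter in a fixed
    order: add its value to the corresponding RAM word, then clear it."""
    i = cycle % NUM                 # free-running M-bit counter selects slot
    ram[i] += pre_counters[i]       # accumulate into the wide RAM count
    pre_counters[i] = 0             # clear so the pre-counter cannot wrap

# Example: source 2 fires 5 times, then the sweeper runs one full pass.
for _ in range(5):
    record_event(2)
for cycle in range(NUM):
    sweep_step(cycle)
```

Because a pre-counter is cleared each time it is serviced, no count is ever lost or double-counted, which is the sense in which the scheme remains precise despite the RAM-based main store.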

Viewed from one aspect, the present invention provides a system comprising: a main memory; the main memory comprising RAM; a plurality of preliminary counters each fed by a corresponding event source; and a sweeper which accesses preliminary counters and feeds values from preliminary counters to the RAM; the RAM acting to accumulate counting data relating to the event sources.

Viewed from a further aspect, the present invention provides a method comprising: providing a plurality of preliminary counters each fed by a corresponding event source; accessing the preliminary counters; feeding values from the preliminary counters to a RAM; and accumulating at the RAM counting data relating to the event sources.

Viewed from a further aspect, the present invention provides a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform a method comprising: providing a plurality of preliminary counters each fed by a corresponding event source; accessing the preliminary counters; feeding values from the preliminary counters to a RAM; and accumulating at the RAM counting data relating to the event sources.

Brief Description of the Drawings

A preferred embodiment of the invention will now be described, by way of example only, and with reference to the following drawing:

Figure 1 depicts a computer system, in accordance with the prior art, and in which a preferred embodiment of the present invention may be implemented;

Figure 2 depicts a known arrangement for implementing 2^M 2^N-bit wide counters in accordance with the prior art; and

Figure 3 depicts a counter arrangement in accordance with at least one preferred embodiment of the present invention.

Detailed Description of the Preferred Embodiments

For a better understanding of the present invention, together with other and further features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying drawings, and the scope of the invention will be pointed out in the appended claims.

It will be readily understood that the components of the present invention, as generally described and illustrated in the Figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the apparatus, system, and method of the present invention, as represented in FIGS. 1 through 3, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected preferred embodiments of the invention.

Reference throughout this specification to "one embodiment" or "an embodiment" (or the like) means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of preferred embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of a preferred embodiment of the invention.

The illustrated preferred embodiments of the invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals or other labels throughout. The following description is intended only by way of example, and simply illustrates certain selected preferred embodiments of devices, systems, and processes that are consistent with the invention as claimed herein.

Figure 1 schematically depicts a computer system 12, in accordance with the prior art, and in which a preferred embodiment of the present invention may be implemented. The illustrative embodiment depicted in Figure 1 may be a notebook computer system, such as one of the Lenovo® ThinkPad® series of personal computers; however, as will become apparent from the following description, the present invention is applicable to any data processing system (Lenovo and ThinkPad are trademarks of Lenovo in the United States, other countries, or both). Notebook computers may alternatively be referred to as "notebooks", "laptops", "laptop computers" or "mobile computers" herein, and these terms should be understood as being essentially interchangeable with one another. As shown in Figure 1, computer system 12 includes at least one system processor 42, which is coupled to a Read-Only Memory (ROM) 40 and a system memory 46 by a processor bus 44. System processor 42, which may comprise one of the AMD™ line of processors produced by Advanced Micro Devices or a processor produced by Intel®, is a general-purpose processor that executes boot code 41 stored within ROM 40 at power-on and thereafter processes data under the control of operating system and application software stored in system memory 46. Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries. AMD is a trademark of Advanced Micro Devices, Inc. or its subsidiaries in the United States and other countries. System processor 42 is coupled via processor bus 44 and host bridge 48 to Peripheral Component Interconnect (PCI) local bus 50.

PCI local bus 50 supports the attachment of a number of devices, including adapters and bridges. Among these devices is network adapter 66, which interfaces computer system 12 to a Local Area Network (LAN), and graphics adapter 68, which interfaces computer system 12 to display 69. Communication on PCI local bus 50 is governed by local PCI controller 52, which is in turn coupled to non-volatile random access memory (NVRAM) 56 via memory bus 54. Local PCI controller 52 can be coupled to additional buses and devices via a second host bridge 60.

Computer system 12 further includes Industry Standard Architecture (ISA) bus 62, which is coupled to PCI local bus 50 by ISA bridge 64. Coupled to ISA bus 62 is an input/output (I/O) controller 70, which controls communication between computer system 12 and attached peripheral devices such as a keyboard and mouse. In addition, I/O controller 70 supports external communication by computer system 12 via serial and parallel ports. A disk controller 72 is in communication with a disk drive 200. Of course, it should be appreciated that the system 12 may be built with different chip sets and a different bus structure, as well as with any other suitable substitute components, while providing comparable or analogous functions to those discussed above.

Figure 2 depicts a known arrangement 102 for implementing 2^M 2^N-bit wide counters in accordance with the prior art. An M-bit address decoder 104 is used to initialize counter values. Each instance of the counter 106 includes 2^N registers and an adder to increment the count based on an event pulse. A 2^M:1 multiplexer 108 is then needed to read out each bit of each 2^N-bit counter.

In contrast, in Figure 3, which depicts a counter arrangement 202 in accordance with at least one preferred embodiment of the present invention, instead of having 2^M instances of a 2^N-bit counter, there are provided 2^M instances of smaller pre-counters (210). The actual 2^M counts are preferably kept in a 2^N-bit wide RAM 214 (which may itself be contained in system memory 46 shown in Fig. 1), in a manner now to be described.

As alluded to above, implementation 202 involves the use of a state machine 209 that sweeps through all 2^M pre-counters, in sequential order, using a free-running M-bit counter (212). The machine 209 preferably assimilates the output of each pre-counter at a reader 211, adds the value to the corresponding RAM location, and clears that pre-counter. Particularly, the adding preferably takes place at adder 218, which adds the pre-counter value to an old counter value that has been fed back from RAM 214. Gate 216 accepts the output of adder 218 and the output of an M-bit comparator 220 (which itself has as inputs counter addresses and the output of the free-running M-bit counter 212), whereupon the gate's output proceeds into RAM 214. For RAM 214 in its role as the "main" or "primary" counter, a pre-load or initialization can take place through gate 216, whose output also can forward counter read data as shown.

Since the state machine 209 accesses each pre-counter once during every 2^M clock cycles, the pre-counters merely have to be wide enough to hold a maximum value of 2^M at a maximum event rate. By choosing pre-counters that are M+1 bits wide, the implementation guarantees that those small pre-counters will never overflow before the sweeper state machine 209 accesses them.
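
The width argument can be checked mechanically (an illustrative sketch; `max_pending` and `capacity` are invented names): at a maximal rate of one event per clock, at most 2^M events accumulate between services, and an (M+1)-bit counter can hold up to 2^(M+1) - 1, which is at least 2^M.

```python
# Worst case: one event per clock cycle, and each pre-counter is
# serviced once per 2^M-cycle sweep, so at most 2^M events pile up
# between services. An (M+1)-bit counter holds up to 2^(M+1) - 1,
# so it can never wrap before being swept; M bits would not suffice.

def max_pending(M):
    """Events that can accumulate between two services of one pre-counter."""
    return 1 << M

def capacity(bits):
    """Largest value a counter of the given width can hold."""
    return (1 << bits) - 1

for M in range(1, 16):
    assert max_pending(M) <= capacity(M + 1)   # M+1 bits always suffice
    assert max_pending(M) > capacity(M)        # M bits would overflow
```

For the configuration used later in Table 1 (M=7, i.e. 128 counters), this gives 8-bit pre-counters with capacity 255 against a worst-case backlog of 128 events.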

To put it another way, it should be appreciated that instead of being burdened with the "bulk" normally presented by 2^M dedicated event counters of width 2^N, the RAM 214 will actually take on the main counting task, while much smaller event counters (herein the "pre-counters", or "preliminary counters") are set with the task of essentially monitoring incremental event counting changes and forwarding the same on to the RAM 214. Thus, large event counters are not actually needed to monitor and track incremental event counting changes; these are assimilated at the RAM 214 to provide an economy of resources that is simply not possible with 2^M dedicated event counters of width 2^N. Since the state machine 209 also includes components such as the free-running M-bit counter 212 and the M-bit comparator 220, the incremental changes in the event counters are easily "indexed" (i.e., augmented with address/location information) in a way that facilitates their incorporation into the "main counts" being accumulated in the RAM 214. Effectively, the RAM 214 takes on the role of a large number of event counters without requiring the inefficient use of resources associated with a large number of dedicated event counters.

Preferably provided is a dual-port RAM 214 as shown, whereby the old value of one counter can be read through a "read" port while another counter is updated through a "write" port in the same clock cycle. However, a single-port RAM is also conceivable, in which case RAM reading and writing would be interleaved. In that arrangement, each pre-counter would get serviced every 2^(M+1) clock cycles, thus requiring the pre-counters to be M+2 bits wide.
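
The trade-off between the two port arrangements can be expressed as a small helper (hypothetical; the function and its parameters are invented for illustration):

```python
# With a dual-port RAM a pre-counter is serviced every 2^M cycles and
# needs M+1 bits; interleaving read and write on a single-port RAM
# doubles the service period to 2^(M+1) cycles, costing one extra bit.

def precounter_bits(M, dual_port=True):
    """Smallest pre-counter width (in bits) that cannot wrap between
    services, for 2^M counters on a dual- or single-port RAM."""
    period = (1 << M) if dual_port else (1 << (M + 1))
    return period.bit_length()      # smallest width holding 'period' events
```

So for the 128-counter example (M=7), a dual-port RAM needs 8-bit pre-counters while a single-port RAM needs 9-bit ones: a modest cost for halving the RAM's port count.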

To help illustrate the efficiencies discussed and alluded to above, the following table (Table 1) compares resource utilization in implementing 128 64-bit wide counters using the conventional method (left) and the presently proposed method (right) on an Altera® Stratix® II FPGA (Altera and Stratix are registered trademarks of Altera in the United States and in other jurisdictions and countries). In FPGA technology, Look-Up Tables (LUTs) are basic logic building blocks. Each LUT includes a 6-input table and a register. In a presently preferred embodiment of the present invention ("proposed method" in Table 1), the state machine serves each pre-counter once every 128 clock cycles, so 8-bit wide pre-counters are required.

Table 1: Resource Utilization on FPGA (128 64-bit counters, i.e., M=7 and N=6)

It will be appreciated that pre-counters, as broadly contemplated herein, can be exploited in recursive fashion. For example, one can use "pre-pre-counters", whereby for each of a number of predetermined groups of pre-counters (each group containing, e.g., 16 pre-counters), there could be maintained a small RAM of M+1-bit counts. Each pre-pre-counter could be 5 bits in width, whereupon bottom-level state machines would each sweep through the 16-entry RAMs (adding the 5-bit register value to the M+1-bit RAM count, resulting in a new M+1-bit count). Then, a top-level state machine could "eavesdrop" on, or monitor, this traffic to update its own 2^N-bit counts stored in the larger, wider RAM.
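
Under the stated assumptions (16 pre-pre-counters per group, a two-level sweep), the recursive scheme might be modelled roughly as follows (hypothetical Python; the number of groups and all names are invented for illustration):

```python
# Rough two-level model of the "pre-pre-counter" idea: small 5-bit
# pre-pre-counters feed per-group RAMs of (M+1)-bit counts, which in
# turn feed the wide top-level RAM. Illustrative only; the group size
# of 16 follows the text, the group count is an assumption.

GROUP = 16                  # pre-pre-counters per group (from the text)
GROUPS = 8                  # example number of groups (an assumption)

pre_pre = [[0] * GROUP for _ in range(GROUPS)]   # 5-bit bottom counters
mid_ram = [[0] * GROUP for _ in range(GROUPS)]   # (M+1)-bit group RAMs
top_ram = [0] * (GROUP * GROUPS)                 # wide 2^N-bit counts

def bottom_sweep(g, slot):
    """Bottom-level machine: fold a pre-pre-counter into its group RAM."""
    mid_ram[g][slot] += pre_pre[g][slot]
    pre_pre[g][slot] = 0

def top_sweep(g, slot):
    """Top-level machine monitors group traffic, updates the wide count,
    and clears the mid-level entry it has absorbed."""
    top_ram[g * GROUP + slot] += mid_ram[g][slot]
    mid_ram[g][slot] = 0

# Example: 3 events on source (group 1, slot 4), then both levels sweep.
for _ in range(3):
    pre_pre[1][4] += 1
bottom_sweep(1, 4)
top_sweep(1, 4)
```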

In view of the foregoing, it will be appreciated that an implementation as embodied in accordance with at least one presently preferred embodiment of the present invention will present the following advantages, at the very least, over conventional implementations: precise counting in spite of a RAM-based implementation; reduced resource utilization on integrated circuit chips; a frequency of counted events that need not be less than the clock frequency used in the implementation; and an accuracy in counting suitable for reliable interrupt generation.

Among other advantages are the following: Inherent logic is provided to support reading and preloading of the counters, so it does not consume much extra logic to initialize the counters or read the counter values.

The use of RAM hard-core reduces the number of placeable elements, resulting in improved placement.

Instead of having a register-based implementation for pre-counters, each group of pre-counters can adopt the same proposed RAM-based implementation (see the "pre-pre-counter" approach described above), resulting in even more resource savings.

Reducing the register count with respect to a master-slave LSSD (level-sensitive scan design) results in fewer scan elements and hence fewer scan chains, thereby helping reduce chip test time.

The use of RAM permits the use of MBIST (memory BIST, or memory built-in self-test), which would improve testability.

Scalability is supported while performance is preserved. Particularly, designers do not have to limit the number of counters, nor do they need to multiplex many events with respect to a limited number of available counters.

It is to be understood that the present invention, in accordance with at least one presently preferred embodiment, includes elements that may be implemented on at least one general-purpose computer running suitable software programs. These may also be implemented on at least one integrated circuit or part of at least one integrated circuit. Thus, it is to be understood that preferred embodiments of the invention may be implemented in hardware, software, or a combination of both.

If not otherwise stated herein, it is to be assumed that all patents, patent applications, patent publications and other publications (including web-based publications) mentioned and cited herein are hereby fully incorporated by reference herein as if set forth in their entirety herein.