Title:
A SYSTEM AND METHOD FOR MONITORING A USER TERMINAL
Document Type and Number:
WIPO Patent Application WO/2014/087167
Kind Code:
A1
Abstract:
A method of monitoring a user terminal comprises monitoring performance of a component of the user terminal, and comparing the monitored component performance to a performance measure, wherein the performance measure is determined based on performance of that component of that user terminal during a preceding time period.

Inventors:
KORALA ARAVINDA (GB)
Application Number:
PCT/GB2013/053219
Publication Date:
June 12, 2014
Filing Date:
December 05, 2013
Assignee:
KORALA ASSOCIATES LTD (GB)
International Classes:
G07F19/00; G06Q10/06
Foreign References:
US20060273151A1 (2006-12-07)
US20090187360A1 (2009-07-23)
US6076174A (2000-06-13)
US20100281155A1 (2010-11-04)
Other References:
None
Attorney, Agent or Firm:
HARGREAVES, Timothy (Atholl Exchange, 6 Canning Street, Edinburgh EH3 8EG, GB)
Claims:
CLAIMS

1. A method of monitoring a user terminal, comprising:- monitoring performance of a component of the user terminal; and

comparing the monitored component performance to a performance measure, wherein the performance measure is determined based on performance of that component of that user terminal during a preceding time period.

2. A method according to Claim 1, comprising determining whether the component and/or user terminal should be subject to maintenance and/or replacement based on the comparison.

3. A method according to Claim 1, wherein the determining of the performance measure comprises updating the performance measure based on performance of the component during the preceding time period.

4. A method according to Claim 3, wherein the updating of the performance measure comprises assigning a value to the performance measure for the first time.

5. A method according to any preceding claim, wherein the monitoring of performance comprises monitoring a performance parameter, and the performance measure is determined based on measured values of the performance parameter for that component of that user terminal during the preceding time period.

6. A method according to Claim 5, wherein the monitoring of performance comprises determining a value of the performance parameter in respect of a further time period, wherein optionally the further time period comprises a rolling time window determined with respect to the current time.

7. A method according to Claim 5 or 6, wherein the performance parameter comprises at least one of a time to perform an operation, an error rate or a number or proportion of errors.

8. A method according to any preceding claim, wherein the preceding time period comprises a time period determined relative to a time of installation, servicing or first usage of the component or user terminal.

9. A method according to any preceding claim, wherein the performance comprises at least one of average, median or peak performance.

10. A method according to any preceding claim, comprising determining a performance status based on the comparison.

11. A method according to any preceding claim, comprising generating a maintenance signal based on the comparison.

12. A method according to any preceding claim, wherein the component comprises a hardware component, for example a mechanical or electro-mechanical component.

13. A method according to any preceding claim, wherein the component comprises at least one of a cash dispenser, card reader, camera, communication device, user input device, display screen or other display device, PIN entry pad, keypad.

14. A method according to any preceding claim, wherein the monitoring of performance of the component comprises monitoring the time taken to perform an operation or respond to a command.

15. A method according to Claim 14, wherein the operation comprises at least one of card reading, card ejection, card writing, card retaining or retracting, cash dispensing, open shutter operation, close shutter operation, cash retraction.

16. A method according to any preceding claim, wherein the monitoring of performance of the component comprises monitoring error messages.

17. A method according to any preceding claim, wherein the measure comprises a frequency or number of error messages.

18. A method according to any preceding claim, wherein the monitoring of performance of the component comprises monitoring response of the component to commands, and the response comprises issue of an error message.

19. A method according to any preceding claim, wherein the monitoring of performance of the component comprises monitoring status of the device, for example by querying status of the device and/or receiving a status message.

20. A method according to any preceding claim, wherein the monitoring of performance of the component comprises monitoring messages sent to or from the component, for example messages sent between the component and an application of the user terminal.

21. A method according to any preceding claim, wherein at least one of:- the monitoring of performance comprises monitoring messages via a device interface, optionally an XFS interface;

the monitoring of performance comprises monitoring XFS data.

22. A method according to any preceding claim, wherein the performance measure comprises at least one threshold.

23. A method according to Claim 22, wherein the performance measure comprises a threshold time to perform an operation, or a threshold number, rate or proportion of errors.

24. A method according to any preceding claim, wherein the comparing of the monitored component performance to the performance measure comprises calculating a percentage or proportion of the performance measure represented by the monitored component performance.

25. A method according to any preceding claim, comprising generating monitoring data comprising a performance status signal representative of the comparison, wherein optionally the performance status signal represents an amount of divergence of the monitored component performance from the performance measure.

26. A method according to any preceding claim, further comprising monitoring performance of a respective component of each of a plurality of user terminals, providing monitoring data to a remote monitoring system, and determining at the remote monitoring system at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data.

27. A monitoring method comprising receiving monitoring data from a plurality of user terminals, the monitoring data being representative of performance of at least one component of the user terminals, and the method comprising determining at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data.

28. A method according to Claim 27, wherein, for each component of each user terminal, the monitoring data represents performance of that component relative to a respective performance measure determined based on performance of that component of that user terminal during a preceding time period.

29. A method according to Claim 27 or 28, wherein the monitoring data is obtained using a method of monitoring a user terminal according to any of Claims 1 to 25.

30. A method according to any of Claims 27 to 29, comprising providing an operator interface for displaying the results of the monitoring of performance.

31. A method according to any of Claims 27 to 30, comprising determining a ranking of the performance of components and/or user terminals.

32. A method according to Claim 31, wherein the ranking comprises a combined ranking, and each entry in the combined ranking represents performance of a respective plurality of components of a respective, different user terminal.

33. A method according to Claim 31 or 32, wherein the or each ranking represents a ranking of importance or urgency of maintenance or replacement of one or more components.

34. A method according to any of Claims 31 to 33, wherein the determining at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data, comprises determining a target number of components or user terminals to maintain or replace, and selecting components or user terminals for maintenance or replacement based on the ranking and on the target number.

35. A method according to Claim 34, comprising selecting components or user terminals in order based on the ranking until the target number is reached.

36. A method according to Claim 34 or 35, wherein the target number is determined based on an expected and/or historical failure rate.

37. A method according to any of Claims 31 to 36, wherein the ranking comprises a ranking of divergence of monitored performance from the or a performance measure.

38. A method according to any of Claims 27 to 37, comprising categorising the performance of different components and/or user terminals into a plurality of categories.

39. A method according to any of Claims 27 to 38 as dependent on Claim 30, comprising displaying the ranking, combined ranking or categorisation using the operator interface.

40. A method according to any of Claims 27 to 39, comprising displaying at least one marker representative of the location of a user terminal on a map view, the appearance of the marker being representative of the performance or ranking of the user terminal or at least one component of the user terminal.

41. A method according to any of Claims 27 to 40, wherein the determining of the at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement is further based on user input, for example user input provided via the operator interface.

AMENDED CLAIMS

received by the International Bureau on 02 May 2014 (02.05.2014)

1. A method of monitoring a user terminal, comprising:- monitoring performance of a component of the user terminal; and

comparing the monitored component performance to a performance measure, wherein the performance measure is determined based on performance of that component of that user terminal during a preceding time period.

2. A method according to Claim 1, comprising determining whether the component and/or user terminal should be subject to maintenance and/or replacement based on the comparison.

3. A method according to Claim 1, wherein the determining of the performance measure comprises updating the performance measure based on performance of the component during the preceding time period.

4. A method according to Claim 3, wherein the updating of the performance measure comprises assigning a value to the performance measure for the first time.

5. A method according to any preceding claim, wherein the monitoring of performance comprises monitoring a performance parameter, and the performance measure is determined based on measured values of the performance parameter for that component of that user terminal during the preceding time period.

6. A method according to Claim 5, wherein the monitoring of performance comprises determining a value of the performance parameter in respect of a further time period, wherein optionally the further time period comprises a rolling time window determined with respect to the current time.

7. A method according to Claim 5 or 6, wherein the performance parameter comprises at least one of a time to perform an operation, an error rate or a number or proportion of errors.

8. A method according to any preceding claim, wherein the preceding time period comprises a time period determined relative to a time of installation, servicing or first usage of the component or user terminal.

9. A method according to any preceding claim, wherein the performance comprises at least one of average, median or peak performance.

10. A method according to any preceding claim, comprising determining a performance status based on the comparison.

11. A method according to any preceding claim, comprising generating a maintenance signal based on the comparison.

12. A method according to any preceding claim, wherein the component comprises a hardware component, for example a mechanical or electro-mechanical component.

13. A method according to any preceding claim, wherein the component comprises at least one of a cash dispenser, card reader, camera, communication device, user input device, display screen or other display device, PIN entry pad, keypad.

14. A method according to any preceding claim, wherein the monitoring of performance of the component comprises monitoring the time taken to perform an operation or respond to a command.

15. A method according to Claim 14, wherein the operation comprises at least one of card reading, card ejection, card writing, card retaining or retracting, cash dispensing, open shutter operation, close shutter operation, cash retraction.

16. A method according to any preceding claim, wherein the monitoring of performance of the component comprises monitoring error messages.

17. A method according to any preceding claim, wherein the measure comprises a frequency or number of error messages.

18. A method according to any preceding claim, wherein the monitoring of performance of the component comprises monitoring response of the component to commands, and the response comprises issue of an error message.

19. A method according to any preceding claim, wherein the monitoring of performance of the component comprises monitoring status of the device, for example by querying status of the device and/or receiving a status message.

20. A method according to any preceding claim, wherein the monitoring of performance of the component comprises monitoring messages sent to or from the component, for example messages sent between the component and an application of the user terminal.

21. A method according to any preceding claim, wherein at least one of:- the monitoring of performance comprises monitoring messages via a device interface, optionally an XFS interface;

the monitoring of performance comprises monitoring XFS data.

22. A method according to any preceding claim, wherein the performance measure comprises at least one threshold.

23. A method according to Claim 22, wherein the performance measure comprises a threshold time to perform an operation, or a threshold number, rate or proportion of errors.

24. A method according to any preceding claim, wherein the comparing of the monitored component performance to the performance measure comprises calculating a percentage or proportion of the performance measure represented by the monitored component performance.

25. A method according to any preceding claim, comprising generating monitoring data comprising a performance status signal representative of the comparison, wherein optionally the performance status signal represents an amount of divergence of the monitored component performance from the performance measure.

26. A method according to any preceding claim, further comprising monitoring performance of a respective component of each of a plurality of user terminals, providing monitoring data to a remote monitoring system, and determining at the remote monitoring system at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data.

27. A monitoring method comprising receiving monitoring data from a plurality of user terminals, the monitoring data being representative of performance of at least one component of the user terminals, and the method comprising determining at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data.

28. A method according to Claim 27, wherein, for each component of each user terminal, the monitoring data represents performance of that component relative to a respective performance measure determined based on performance of that component of that user terminal during a preceding time period.

29. A method according to Claim 27 or 28, wherein the monitoring data is obtained using a method of monitoring a user terminal according to any of Claims 1 to 25.

30. A method according to any of Claims 27 to 29, comprising providing an operator interface for displaying the results of the monitoring of performance.

31. A method according to any of Claims 27 to 30, comprising determining a ranking of the performance of components and/or user terminals.

32. A method according to Claim 31, wherein the ranking comprises a combined ranking, and each entry in the combined ranking represents performance of a respective plurality of components of a respective, different user terminal.

33. A method according to Claim 31 or 32, wherein the or each ranking represents a ranking of importance or urgency of maintenance or replacement of one or more components.

34. A method according to any of Claims 31 to 33, wherein the determining at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data, comprises determining a target number of components or user terminals to maintain or replace, and selecting components or user terminals for maintenance or replacement based on the ranking and on the target number.

35. A method according to Claim 34, comprising selecting components or user terminals in order based on the ranking until the target number is reached.

36. A method according to Claim 34 or 35, wherein the target number is determined based on an expected and/or historical failure rate.

37. A method according to any of Claims 31 to 36, wherein the ranking comprises a ranking of divergence of monitored performance from the or a performance measure.

38. A method according to any of Claims 27 to 37, comprising categorising the performance of different components and/or user terminals into a plurality of categories.

39. A method according to any of Claims 27 to 38 as dependent on Claim 30, comprising displaying the ranking, combined ranking or categorisation using the operator interface.

40. A method according to any of Claims 27 to 39, comprising displaying at least one marker representative of the location of a user terminal on a map view, the appearance of the marker being representative of the performance or ranking of the user terminal or at least one component of the user terminal.

41. A method according to any of Claims 27 to 40, wherein the determining of the at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement is further based on user input, for example user input provided via the operator interface.

42. A system for monitoring performance of a user terminal, the system comprising a processing resource configured to monitor performance of a component of the user terminal and to compare the monitored component performance to a performance measure, wherein the performance measure is determined by the processing resource based on performance of that component of that user terminal during a preceding time period.

43. A monitoring system for receiving monitoring data from a plurality of user terminals, the monitoring data being representative of performance of at least one component of the user terminals, and the monitoring system being configured to determine at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data.

44. A computer program product comprising computer readable instructions that are executable to perform a method according to any of Claims 1 to 41.

Description:
A system and method for monitoring a user terminal

Field of the invention

The present invention relates to a system and method for monitoring performance of user terminals, for example ATMs.

Background to the invention

ATM terminals are very widely used to perform financial or other transactions, for example to allow users to withdraw cash. Usually, in order to withdraw cash a user inserts a financial transaction card, for example a credit or debit card, into an ATM terminal, enters a PIN code, and performs transactions via a sequence of screens displayed on the terminal.

Financial institutions or other ATM terminal operators are often responsible for operation of large numbers of ATM terminals. The maintenance of such user terminals can be a major undertaking, given the large numbers of terminals involved, the variety of locations of terminals, the variety of age and types of components used in different terminals, and the different usage levels of different terminals. The time and effort involved in sending maintenance personnel to a user terminal to fix the user terminal or to replace malfunctioning components can be significant.

The malfunctioning of components that leads to inoperability or reduced performance of a user terminal can cause significant inconvenience to users.

It has been suggested to predict when malfunctioning of components may occur, or when maintenance or replacement of components may be required. Such suggested prediction techniques are based on the monitoring of the age or usage of components and the comparison of such age or usage to databases of known lifetimes or failure rates of components based on historical data. However, the accuracy of such techniques depends on the amount and accuracy of historical data concerning components that is available. Such data may not be available or accurate in the case of newer or less widely used components. Furthermore, the actual lifetime or failure rate of a particular component can depend on the configuration, nature of usage, location or the other components that are provided within an ATM or other user terminal. In addition, such suggested techniques are based on statistical analysis and, whilst they may be accurate in predicting lifetime or failure of a type of component on average, they are less accurate in predicting the lifetime or failure of any one example of a component in particular.

Summary of the invention

In a first aspect of the invention there is provided a method of monitoring a user terminal, comprising monitoring performance of a component of the user terminal, and comparing the monitored component performance to a performance measure. The method may comprise determining a performance status in dependence on the comparison. The method may comprise determining whether the component and/or user terminal should be subject to maintenance and/or replacement in dependence on the comparison. The determination of whether the component and/or user terminal should be subject to maintenance and/or replacement, and/or the determined value of the performance measure, may be independent of the amount of usage of the component. The method may comprise generating a maintenance signal suggesting maintenance or replacement, in dependence on the comparison. The method may comprise maintaining and/or replacing the component and/or user terminal. The performance measure may comprise a performance metric.

By monitoring actual performance of components, early indications of problems can be detected. Replacement or maintenance of components can thus be performed at an early stage, based on actual performance of individual components. It can be possible to ensure maintenance or replacement of components at a suitable time, avoiding breakdown of user terminals whilst also ensuring that components are not replaced unnecessarily when they are functioning well.

The performance measure may be determined based on performance of that component of that user terminal during a preceding time period. By determining the performance measure based on actual performance of that component when in use in that particular user terminal, an accurate determination of whether performance of that component is, for example, worsening over time may be obtained without having to compile and use historical data obtained from a large number of different terminals and components. Thus, the method can, for example, be implemented even for new types or models of components that have not been used previously in that context. The method can also be implemented without requiring access to large amounts of historical performance data, or the results of processing such data, which may be difficult or time-consuming to acquire, or proprietary.

The determining of the performance measure may comprise updating the performance measure based on performance of the component during the preceding time period. The updating of the performance measure may comprise assigning a value to the performance measure for the first time.

The monitoring of performance may comprise obtaining and/or monitoring a performance parameter, and the performance measure may be determined based on measured values of the performance parameter for that component of that user terminal during the preceding time period.

The monitoring of performance may comprise determining a value of the performance parameter in respect of a further time period, wherein optionally the further time period comprises a rolling time window determined with respect to the current time.

The performance parameter may comprise at least one of a time to perform an operation, an error rate or a number or proportion of errors. The operation may comprise part of a larger operation, for example a sub-operation forming part of a larger operation comprising several sub-operations.

The preceding time period may comprise a time period determined relative to a time of installation, servicing or first usage of the component or user terminal.

Thus, the value of the performance measure may represent performance before any significant degradation of the component through use.

The component may comprise a hardware component, for example a mechanical or electro-mechanical component. The component may comprise at least one of a cash dispenser, card reader, camera, communication device, user input device, display screen or other display device, PIN entry pad, keypad, or key.

The monitoring of performance of the component may comprise monitoring the time taken to perform an operation or respond to a command.

The operation may comprise at least one of card reading, card ejection, card writing, card retaining or retracting, cash dispensing, open shutter operation, close shutter operation, cash retraction.

The monitoring of performance of the component may comprise monitoring error messages. The measure may comprise a frequency or number of error messages.

The monitoring of performance of the component may comprise monitoring response of the component to commands. The commands may comprise execution commands. The response may comprise issue of a completion message or an error message.

The monitoring of performance of the component may comprise monitoring status of the device, for example by querying status of the device and/or receiving a status message. The monitoring of performance of the component may comprise monitoring messages sent to or from the component, for example messages sent between the component and an application of the user terminal.

The monitoring may comprise monitoring messages using a monitoring component separate from the user terminal application. The monitoring component may be a middleware component.

The monitoring of performance may comprise monitoring messages via a device interface, optionally an XFS interface. The monitoring of performance may comprise monitoring XFS data.

The performance measure may comprise at least one threshold.

The performance measure may comprise a threshold time to perform an operation. The threshold may comprise an average or expected duration.

The performance measure may comprise a threshold number, rate or proportion of errors.

The comparing of the monitored component performance to the performance measure may comprise calculating a percentage or proportion of the performance measure represented by the monitored component performance.

The method may comprise generating monitoring data representative of the comparison, for example a performance status signal representative of the comparison, wherein optionally the performance status signal represents an amount of divergence of the monitored component performance from the performance measure.

The method may comprise updating the performance measure in dependence on performance of the component during a preceding time period, for example average, median and/or peak performance during the preceding time period. The preceding time period may comprise a rolling time window.

The method may comprise monitoring performance of components of at least one user terminal, optionally a plurality of user terminals, and providing the monitoring data to a remote monitoring system.

The remote monitoring system may be configured to determine at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data.

In a further, independent aspect of the invention there is provided a monitoring method comprising receiving monitoring data from a plurality of user terminals, the monitoring data being representative of performance of at least one component of the user terminals, and the method comprising determining at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data.

For each component of each user terminal, the monitoring data may represent performance of that component relative to a respective performance measure determined based on performance of that component of that user terminal during a preceding time period.

The method may comprise providing an operator interface for displaying the results of the monitoring of performance.

The method may comprise determining a ranking of the performance of components and/or user terminals. The ranked components may be components of the same type installed in different user terminals. The method may comprise displaying the ranking, for example via the operator interface.

The method may comprise determining a combined ranking. Each entry in the combined ranking may represent performance of a respective plurality of components. Each entry in the combined ranking may represent performance of a plurality of components of a respective, different user terminal.

The or each ranking may represent a ranking of importance or urgency of maintenance or replacement of one or more components.

The determining of at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data, may comprise determining a target number of components or user terminals to maintain or replace, and selecting components or user terminals for maintenance or replacement based on the ranking and on the target number.

The method may comprise selecting components or user terminals in order based on the ranking until the target number is reached.

The target number may be determined based on an expected and/or historical failure rate.

The ranking may comprise a ranking of divergence of monitored performance from the or a performance measure.

The method may comprise categorising the performance of different components and/or user terminals into a plurality of categories.

The method may comprise displaying the ranking, combined ranking or categorisation using the operator interface.

The method may comprise displaying at least one marker representative of the location of a user terminal on a map view, the appearance of the marker being representative of the performance of the user terminal or at least one component of the user terminal. The determining of the at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement may be based on user input, for example user input provided via the operator interface.

In another independent aspect of the invention there is provided a system for monitoring performance of a user terminal, the system comprising a processing resource configured to monitor performance of a component of the user terminal and to compare the monitored component performance to a performance measure. The processing resource may be configured to determine performance status in dependence on the comparison. The processing resource may be configured to determine whether the component and/or user terminal should be subject to maintenance and/or replacement in dependence on the comparison. The performance measure may be determined by the processing resource based on performance of that component of that user terminal during a preceding time period.

The processing resource may be configured to determine whether the component and/or user terminal should be subject to maintenance and/or replacement based on the comparison.

The determining of the performance measure may comprise updating the performance measure based on performance of the component during the preceding time period.

The updating of the performance measure may comprise assigning a value to the performance measure for the first time.

The monitoring of performance may comprise monitoring a performance parameter, and the performance measure may be determined based on measured values of the performance parameter for that component of that user terminal during the preceding time period.

The performance parameter may comprise at least one of a time to perform an operation, an error rate or a number or proportion of errors.

The preceding time period may comprise a time period determined relative to a time of installation, servicing or first usage of the component or user terminal.

The performance may comprise at least one of average, median or peak performance.

The processing resource may be configured to determine a performance status based on the comparison.

The processing resource may be configured to generate a maintenance signal based on the comparison. The maintenance signal may indicate that the terminal or component should be subject to maintenance or replacement. The component may comprise a hardware component, for example a mechanical or electro-mechanical component.

The component may comprise at least one of a cash dispenser, card reader, camera, communication device, user input device, display screen or other display device, PIN entry pad, keypad.

The monitoring of performance of the component may comprise monitoring the time taken to perform an operation or respond to a command.

The operation may comprise at least one of card reading, card ejection, card writing, card retaining or retracting, cash dispensing, open shutter operation, close shutter operation, cash retraction.

The monitoring of performance of the component may comprise monitoring error messages.

The measure may comprise a frequency or number of error messages.

The monitoring of performance of the component may comprise monitoring response of the component to commands, and the response may comprise issue of an error message.

The monitoring of performance of the component may comprise monitoring status of the device, for example by querying status of the device and/or receiving a status message.

The monitoring of performance of the component may comprise monitoring messages sent to or from the component, for example messages sent between the component and an application of the user terminal.

The monitoring of performance may comprise monitoring messages by the processing resource via a device interface, optionally an XFS interface.

The monitoring of performance may comprise monitoring XFS data by the processing resource.

The performance measure may comprise at least one threshold.

The performance measure may comprise a threshold time to perform an operation, or a threshold number, rate or proportion of errors.

The comparing of the monitored component performance to the performance measure may comprise calculating a percentage or proportion of the performance measure represented by the monitored component performance.

The processing resource may be configured to generate monitoring data comprising a performance status signal representative of the comparison, wherein optionally the performance status signal represents an amount of divergence of the monitored component performance from the performance measure. The amount of divergence may comprise, for example a difference, percentage or proportion. The processing resource may be configured to provide monitoring data to a remote monitoring system.

In a further independent aspect of the invention is provided a monitoring system for receiving monitoring data from a plurality of user terminals, the monitoring data being representative of performance of at least one component of the user terminals, and the monitoring system being configured to determine at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data.

The monitoring system may comprise a processing resource configured to provide an operator interface for displaying the results of the monitoring of performance.

The processing resource may be configured to determine a ranking of the performance of components and/or user terminals.

The ranking may comprise a combined ranking, and each entry in the combined ranking may represent performance of a respective plurality of components of a respective, different user terminal.

The or each ranking may represent a ranking of importance or urgency of maintenance or replacement of one or more components.

The determining of at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement based on the monitoring data, may comprise determining a target number of components or user terminals to maintain or replace, and selecting components or user terminals for maintenance or replacement based on the ranking and on the target number.

The processing resource may be configured to select components or user terminals in order based on the ranking until the target number is reached.

The target number may be determined based on an expected and/or historical failure rate.

The ranking may comprise a ranking of divergence of monitored performance from the or a performance measure.

The processing resource may be configured to categorise the performance of different components and/or user terminals into a plurality of categories.

The processing resource may be configured to display the ranking, combined ranking or categorisation using the operator interface.

The processing resource may be configured to display at least one marker representative of the location of a user terminal on a map view, the appearance of the marker being representative of the performance or ranking of the user terminal or at least one component of the user terminal. The determining of the at least one component or user terminal of the plurality of user terminals that requires maintenance or replacement may be further based on user input, for example user input provided via the operator interface.

In another independent aspect of the invention there is provided a computer program product comprising computer readable instructions that are executable to perform a method as described herein.

There may also be provided an apparatus or method substantially as described herein with reference to the accompanying drawings.

Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. For example, apparatus features may be applied to method features and vice versa.

Detailed description of embodiments

Embodiments of the invention are now described, by way of non-limiting example, and are illustrated in the following figures, in which:-

Figure 1 is a schematic illustration of a user terminal according to an embodiment;

Figure 2 is a flow chart illustrating in overview a user terminal performance monitoring process performed by the user terminal of Figure 1;

Figure 3 is a schematic illustration of a user terminal monitoring system that includes the user terminal of Figure 1 ;

Figure 4 is a schematic illustration of a monitoring application according to an embodiment; and

Figure 5 is an illustration of performance monitoring data displayed by the application of Figure 4.

Figure 1 shows a user terminal 2 in accordance with an embodiment. The user terminal 2 includes a processor 4 connected to a data store 6. The processor 4 is also connected to an encrypting pin pad (EPP) 8, a card reader device 10, a display device 12, a printer 14, and a cash dispenser and cash handling mechanism 13 linked to a cash store 15. The processor 4 is also connected to user input keys 11 via which a user can select options or provide other input. The user input keys 11 are represented schematically by a single block in Figure 1, but in this case there are six keys arranged around a screen of the display device 12. In alternative embodiments any suitable number of keys can be provided, and/or the keys can be soft keys displayed on the screen of the display device 12.

In the embodiment of Figure 1, the processor comprises a Windows PC core. The data store 6 comprises a hard disk, the card reader device 10 is an Omron V2BF-01JS-AP1 card reader, the display device 12 comprises a touchscreen display and the printer 14 is an Epson M-T532, MB520. The EPP 8 comprises a PCI-compliant number pad and is operable to securely receive a PIN entered by a user.

The processor functions as a controller for controlling a user interaction process of the user terminal 2. Any other suitable controller, for example any suitable hardware, software or combination thereof may be used in alternative embodiments.

Although particular component types and models are included in the embodiment of Figure 1 , any suitable component types and models may be used in alternative embodiments. The display device 12 in alternative embodiments may comprise any suitable type of screen for displaying content, for example images and/or text to a user. The display may comprise, for example, an LED screen, a screen of a cathode ray tube device, or a plasma screen.

The user terminal 2 also includes a communication interface 16 that is configured to enable the user terminal to transmit messages to and receive messages from a server 18 associated with the user terminal network operator responsible for installation and operation of the user terminal 2. The messages are transmitted and received via a secure network connection in accordance with known banking protocols.

The user terminal network operator may be a financial institution, for example a bank. The messages sent between the user terminal 2 and the server 18 may relate to a particular transaction, and may comprise for example authorisation messages or messages comprising instructions to credit or debit an account in relation to a transaction conducted by a user using user terminal 2. In addition, the server 18 can send software installation or update messages that comprise software components for automatic installation at the user terminal 2. The user terminal 2 is also able to send management information to the server 18, comprising for example data representing usage of the user terminal during a particular period, or fault monitoring data.

In operation, the processor 4 controls operation of the other components of the user terminal 2, under control of a user terminal application 30 running on the processor. Upon power-up of the user terminal 2 a basic input-output system (BIOS) is booted from non-volatile storage (not shown) included in the processor 4, and a Windows 7 operating system and application components are installed from the data store 6 by the processor 4 to form a user terminal processing system.

The user terminal application 30 forms part of an application layer and is provided under an XFS-compatible application environment, which may be a hardware-agnostic application environment such as KAL Kalignite or a manufacturer-specific application environment.

The software architecture of the user terminal 2 includes various other layers, in accordance with known ATM-type device architectures, including a hardware device layer that includes various hardware-specific drivers for controlling operation of the various hardware components of the user terminal 2, and an XFS layer 32 that includes XFS interfaces that mediate between the application layer and the hardware device layer. In this case the XFS layer is a CEN XFS layer. CEN XFS (eXtensions for Financial Services) is the accepted open standards-based interface for automated teller and branch automation applications. CEN XFS provides a client-server architecture for financial applications on the Microsoft Windows platform, and in particular for access to peripheral devices such as those used in ATMs.

It is a feature of the embodiment of Figure 1 that the processor, in operation, includes a processing resource comprising a further software component in the form of monitoring unit 34 that is operable to monitor performance of components of the user terminal, based at least in part on data received via the XFS layer. The monitoring unit 34 is able to use extracted data available through CEN XFS interfaces and, for example, to monitor timings, command counters, device error codes and device status information. Operation of the monitoring unit 34 is described in more detail below.

In operation, the user terminal application 30 controls operation of the user terminal 2, including operations associated with performance of a financial transaction by a user such as, for example, reading of the user's card, reading of a user's PIN, receipt and processing of a user's data such as account balance, overdraft limit and withdrawal limit from server 18, and display of a sequence of screen content on the display device 12. The application 30 also controls communication with the server 18, and the processing of data associated with a transaction, including user data received from the server 18. The application 30 also controls the display of transaction screen content on the display device 12, including selecting and outputting the appropriate transaction screen content for a particular stage in a transaction process. The application 30 controls, via the XFS layer 32, interaction with and operation of the different hardware devices of the user terminal, for example, the EPP 8, card reader device 10, printer 14, user input keys 11, and cash dispenser and cash handling mechanism 13. In operation, the user terminal application 30 controls operation of the various hardware and other components by sending control messages to those components, usually via the XFS layer 32. Response messages and status messages are also sent from the various components to the application 30, again usually via the XFS layer 32. Messages sent from the various user terminal components to the application 30 include messages that represent the status of the component, a confirmation that a requested action has been performed, or messages that indicate that an action has not been completed or an error has occurred.

The processes performed by application 30 depend on the messages received from the various components. For example, in response to receiving a message that an action had not been completed, the application 30 may resend the command to perform the action.

It is a feature of the embodiment of Figure 1 that a further component, in the form of monitoring unit 34, monitors the messages sent from and to the XFS layer. The monitoring unit 34 determines from the messages a level of performance of one or more components and, in some modes of operation, generates a performance flag or other performance status signal if a particular component is not performing to a predetermined or desired level.

The monitoring unit 34 is able to monitor performance of any component, whether hardware or software, but it has been found to be particularly useful to monitor components that include mechanical or electromechanical parts, as such components can be more prone to breakdown or degradation of performance, or can be more likely to require replacement or maintenance.

The monitoring unit 34 is able to monitor the timings of commands and events listed in the CEN XFS specification that signal the start or the end of a hardware operation. For example, the monitoring unit 34 is able to monitor the timings of commands and events which signal the start or the end of an operation that requires motors of the cash dispensing mechanism or card reading mechanisms to spin.

The monitoring of performance of operations relating to a card reading operation is now considered, by way of example, with reference to the flow chart of Figure 2.

A card is inserted into the card slot of the user terminal 2. Sensors within the card reading mechanism detect the presence of the card and a signal is sent from the card reader device 10 to the application 30 indicating that the card is present. The monitoring unit 34 monitors messages sent via the XFS layer 32, either by querying the application 30 or querying the XFS interfaces and/or hardware devices, and also thus receives an indication that a card is present. In response to receipt of the message, the application 30 at stage 50 sends a command to the card reader device 10 to begin a card reading operation. Again, the monitoring unit 34 monitors the sending of that command to the card reader device 10.

At stage 52, the monitoring unit 34 continues to monitor messages sent to the application 30 by the card reader device 10 in order to monitor progress of the card reading operation. In the present case, the monitoring unit 34 monitors for the time when the card reading operation ends, and determines the time taken for the card reading operation from the difference in time between the card being inserted and the end of the card reading operation.
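
By way of illustration only, the timing measurement at stage 52 can be sketched as follows. This is a minimal sketch rather than the patent's implementation: the message records and event names (for example "CARD_INSERTED" and "READ_COMPLETE") are hypothetical stand-ins for the messages observed via the XFS layer 32, and the sketch simply pairs a start event with the corresponding end event to obtain the operation duration.

```python
from dataclasses import dataclass

@dataclass
class XfsMessage:
    """Simplified stand-in for a message observed via the XFS layer."""
    event: str        # hypothetical labels, e.g. "CARD_INSERTED"
    timestamp: float  # seconds, as recorded by the monitoring unit

def operation_duration(messages, start_event, end_event):
    """Return the time between a start event and the following end event,
    or None if the pair is not present in the monitored message stream."""
    start_time = None
    for msg in messages:
        if msg.event == start_event and start_time is None:
            start_time = msg.timestamp
        elif msg.event == end_event and start_time is not None:
            return msg.timestamp - start_time
    return None

# Time from card insertion to the end of the card reading operation.
observed = [XfsMessage("CARD_INSERTED", 12.0), XfsMessage("READ_COMPLETE", 13.5)]
print(operation_duration(observed, "CARD_INSERTED", "READ_COMPLETE"))  # 1.5
```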

At the next stage 54, the monitoring unit 34 compares the time taken for the card reading operation to a performance measure in the form of a stored threshold duration. If the time taken is greater than the stored threshold duration then, at the next stage 56, the monitoring unit 34 generates a flag or other performance status signal indicating that the performance time has been slow. The performance measure may be referred to as a performance metric in some embodiments. The time taken for the card reading operation in this case is the time taken for the last card reading operation. In other embodiments, the time taken for the card reading operation is the average (or other statistical measure) of the times taken for card reading operation during a time period, for example a rolling time window determined with respect to the current time. In other embodiments, the time taken for the card reading operation is the average (or other statistical measure) of the last 10 (or other selected and/or predetermined number) of card reading operations.
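
The comparison at stages 54 and 56 can be illustrated with a minimal sketch, assuming a rolling window of recent durations and a fixed stored threshold; the window size and threshold value below are arbitrary example figures, not values taken from the patent.

```python
from collections import deque

class CardReadMonitor:
    """Illustrative sketch of stages 52-56: keep the most recent card reading
    durations and raise a flag when their average exceeds the threshold."""

    def __init__(self, threshold_seconds, window=10):
        self.threshold = threshold_seconds
        self.recent = deque(maxlen=window)   # rolling window of last N durations

    def record(self, duration_seconds):
        self.recent.append(duration_seconds)
        average = sum(self.recent) / len(self.recent)
        return average > self.threshold      # True -> "slow" performance flag

monitor = CardReadMonitor(threshold_seconds=2.0)
print(monitor.record(1.8))  # False: within the threshold
print(monitor.record(2.6))  # True: the recent average now exceeds 2.0 s
```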

In another mode of operation, the monitoring unit compares the time taken for the card reading operation to the stored threshold duration, and calculates the time taken for the card reading operation as a percentage or proportion of the stored threshold duration. The performance status signal in that case may be the calculated proportion or percentage.
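
The percentage-based mode reduces to a single calculation, sketched below with example figures only.

```python
def percent_of_threshold(duration_seconds, threshold_seconds):
    """Illustrative calculation: the monitored time as a percentage of the
    stored threshold duration; values above 100 indicate a slow operation."""
    return 100.0 * duration_seconds / threshold_seconds

print(percent_of_threshold(1.8, 2.0))  # 90.0  -> within the threshold
print(percent_of_threshold(2.5, 2.0))  # 125.0 -> slower than the threshold
```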

In one mode of operation, the monitoring unit 34 stores data representing the number and/or frequency of occasions on which performance of a card reading operation has been slow and, if the number and/or frequency of slow card reading operations exceeds a threshold or other measure, then the monitoring unit 34 generates a maintenance signal indicating that maintenance or replacement of the card reader device 10 may be required.
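
A sketch of this mode is given below, under the assumption that each monitored operation has already been flagged as slow or not; the allowed count is an arbitrary example value rather than one specified by the patent.

```python
def needs_maintenance(slow_flags, max_slow_count=5):
    """Illustrative check: suggest maintenance or replacement once the number
    of slow operations recorded for a component exceeds an allowed count."""
    return sum(1 for flag in slow_flags if flag) > max_slow_count

# Ten recorded card reading operations, six of which were flagged as slow.
history = [True, False, True, True, False, True, True, False, True, False]
print(needs_maintenance(history))  # True -> generate a maintenance signal
```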

In another mode of operation, the monitoring unit 34 stores the performance data for each operation of interest, in this case duration of card reading operation, and subsequently generates a flag or other performance status signal based on analysis of the performance data. For example, the monitoring unit may determine the average, median, or maximum duration of card reading operations during a preceding time window, compare that average, median or maximum duration to a threshold, and generate a flag or other performance status signal in dependence on the comparison.

In another mode of operation, the performance data obtained by the monitoring unit 34 (for example the flag or other performance status signal indicating that performance time has been slow, or the raw time data or other performance data) is transmitted from the user terminal 2 to the user terminal server 18 or other remote location, and the processing of the data to compare performance to a performance measure and/or to determine component status (for example whether maintenance, replacement or other intervention is required) is performed at that remote location.

In some modes of operation, as described in more detail below, performance status signals, raw time data or other performance data are received at the user terminal server 18 from multiple user terminals, and the user terminal server is configured to determine the user terminals or components that most require maintenance or replacement based on a ranking or other comparison of the received performance status signals, raw time data or other performance data.
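
The server-side selection can be pictured with the following sketch, which assumes each terminal reports a simple divergence figure (here a percentage of the performance measure); the report format, component names and target number are illustrative assumptions rather than details taken from the patent.

```python
def select_for_maintenance(reports, target_number):
    """Illustrative server-side selection: rank components by how far their
    monitored performance diverges from the performance measure, then pick
    the worst performers until the target number is reached."""
    ranked = sorted(reports, key=lambda r: r["percent_of_threshold"], reverse=True)
    return [r["component"] for r in ranked[:target_number]]

# Hypothetical reports received from three monitored terminals.
reports = [
    {"component": "terminal-17/card-reader", "percent_of_threshold": 165.0},
    {"component": "terminal-03/cash-dispenser", "percent_of_threshold": 140.0},
    {"component": "terminal-22/card-reader", "percent_of_threshold": 95.0},
]
print(select_for_maintenance(reports, target_number=2))
# ['terminal-17/card-reader', 'terminal-03/cash-dispenser']
```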

In the example described above with reference to Figure 2, the monitoring unit 34 monitored the time between a card being inserted and the time when a card read operation ended in order to determine performance of the card reader device 10. The monitoring unit 34 is not limited to monitoring that device, or that performance measure and is able to monitor performance of any suitable component of the user terminal 2 by monitoring any suitable variable.

For example, in relation to card reader device 10 performance, the monitoring unit is also able to monitor the time from when the card write command starts to the time when the card write operation has ended, the time from when the card eject command starts to the time when the card eject operation has ended, or the time from when the card retain/retract command starts to the time when the card retain/retract operation has ended.

In relation to cash dispenser performance, the monitoring unit 34 is able to monitor, for example:- the time from when the cash is picked from a cash cassette to the time when the cash is stacked and ready to be presented; the time from when a present command starts to the time when cash has been presented to a user; the time from when an open shutter command starts to the time when the open shutter operation has ended; the time from when the close shutter command starts to the time when the close shutter operation has ended; or the time from when a cash retract command starts to the time when the cash retract operation has ended.

In addition to monitoring timings, the monitoring unit 34 is also able to count all execute commands and to count all successful completion and all error codes that happen as part of an execution arising from the execute commands.
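
A minimal sketch of such counting is given below, assuming the completion outcome of each execute command has been reduced to a simple label; the labels shown are illustrative placeholders, not CEN XFS result codes.

```python
from collections import Counter

def tally_outcomes(completion_labels):
    """Illustrative tally of execute-command outcomes: counts commands issued,
    successful completions and each distinct error label observed."""
    counts = Counter(completion_labels)
    return {
        "commands": len(completion_labels),
        "successes": counts.get("SUCCESS", 0),
        "errors": {label: n for label, n in counts.items() if label != "SUCCESS"},
    }

print(tally_outcomes(["SUCCESS", "SUCCESS", "HARDWARE_ERROR", "SUCCESS", "TIMEOUT"]))
# {'commands': 5, 'successes': 3, 'errors': {'HARDWARE_ERROR': 1, 'TIMEOUT': 1}}
```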

The monitoring unit 34 is also able to record additional information about unusual situations such as high CPU usage, high memory usage, high system handle usage.

The monitoring unit 34 is also able to monitor all generic and hardware vendor specific error codes through the XFS interface from, for example, the following CEN XFS data:

a) Execute command completion code.

b) Execute command output data (this can include hardware vendor specific data)

c) lpbDescription field of WFS_ERR_HARDWARE_ERROR events (this can include hardware vendor specific data)

d) lpszExtra field of WFS_INF_xxx_STATUS command (this can include hardware vendor specific data).

The monitoring unit 34 is also able to monitor all generic and hardware vendor specific device status information available, for example, using the following CEN XFS data:

a) Information command output data, for example using WFS_INF_xxx_STATUS command data (this can include hardware vendor specific data)

b) Execute command output data (this can include hardware vendor specific data)

c) Event output data (this can include hardware vendor specific data).

The monitoring unit 34 is able to compare the monitored time to perform an operation to a threshold time in order to determine a status of a component, when the variable being monitored is operation performance time. The status may for example be represented as, or determined based upon, a percentage or proportion of the threshold time, or by a determination of whether the time exceeds the threshold. Thus the status may represent, for example, duration of an operation or procedure compared to expected duration. In the case of other monitored variables, any other suitable performance measure can be used for comparison purposes. For example, a threshold number or rate of errors may be used as the performance measure. The status may for example be represented as, or determined based upon, the number of errors as a percentage or proportion of a threshold number or rate of errors, or by a determination of whether the number or rate of errors exceeds the threshold.
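
For the error-based case, a status determination of this kind might be sketched as below; the error counts and threshold rate are example figures only, not values from the patent.

```python
def error_rate_status(error_count, operation_count, threshold_rate):
    """Illustrative status from an error rate compared to a threshold rate:
    reports the rate as a proportion of the threshold and whether it exceeds it."""
    rate = error_count / operation_count if operation_count else 0.0
    return {
        "rate": rate,
        "proportion_of_threshold": rate / threshold_rate,
        "exceeds_threshold": rate > threshold_rate,
    }

# 4 errors in 200 operations against an allowed rate of 1 error per 100 operations.
print(error_rate_status(4, 200, threshold_rate=0.01))
```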

Any other suitable performance parameters may be monitored and compared to a corresponding performance measure in other embodiments.

The value of a threshold or other performance measure can be fixed or predetermined in some embodiments or modes of operation. For example, in the monitoring of the card reading operation described in relation to Figure 2, the threshold time for a card reading operation has a fixed, pre-determined value. In other embodiments, the value of the threshold or other performance measure for a particular user terminal is updated initially and/or regularly, for example in dependence on measured values of the parameter in question for that particular user terminal, during a preceding period.

For example, in one such embodiment, the value of the threshold duration for a card reading operation is calculated as being the average of the times taken for the preceding 50 card reading operations plus an offset duration. Thus, the value of the threshold may vary over time. The comparison of the duration of a card reading operation to the threshold thus gives an indication as to whether that card reading operation has been significantly slower than preceding card reading operations for that device. That can give an indication as to whether the performance of the card reader device 10 is beginning to worsen, which can provide an early indication that replacement or maintenance may be required.
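
One possible, purely illustrative implementation of such a rolling threshold is sketched below; the window of 50 operations follows the example above, while the offset value and class name are assumptions made for the sketch.

from collections import deque

class RollingThreshold:
    """Threshold derived from the last N operation durations plus an offset."""

    def __init__(self, window=50, offset=0.5):
        self.durations = deque(maxlen=window)   # keeps only the most recent operations
        self.offset = offset

    def update(self, duration: float) -> None:
        self.durations.append(duration)

    @property
    def threshold(self) -> float:
        if not self.durations:
            raise ValueError("no operations recorded yet")
        return sum(self.durations) / len(self.durations) + self.offset

    def is_slow(self, duration: float) -> bool:
        slow = duration > self.threshold        # compare against current threshold
        self.update(duration)                   # the new duration feeds later thresholds
        return slow

# Hypothetical usage.
t = RollingThreshold(window=50, offset=0.5)
for d in [1.0] * 50:
    t.update(d)
print(t.threshold)           # 1.5
print(t.is_slow(2.1))        # True: significantly slower than the preceding reads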

In some embodiments, the value of the threshold duration or other performance measure is updated only once, based upon performance of the component in question during a time period, and that value is used during subsequent monitoring of the performance of the component.

The time period may be a time period determined relative to a time of installation, servicing or first usage of the component or user terminal. For example, the value of the threshold duration, error rate or other performance measure may be determined during a time period immediately following the installation, first usage or servicing of the component or the user terminal (or during a time period beginning an offset time after that, to allow for a "running-in" of the component or terminal). That value of the threshold duration, or other performance measure, may then be used for subsequent monitoring and comparison with current performance. Thus, the value of the threshold duration or other performance measure may represent performance before any significant degradation of the component through use.

In some cases, an initial default value is assigned to the threshold duration, or other performance measure, which is then updated with the new value determined during the time period for example following the installation or servicing of the component or the user terminal. In other cases, no initial default value is assigned to the threshold duration, or other performance measure, and the updating of the threshold duration, or other performance measure, comprises assigning a value to the threshold duration, or other performance measure, for the first time.
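
The sketch below illustrates, under assumed names and values, a performance measure that starts from an optional default and is then assigned (or updated) once from observations made during an initial period, after which it is reused unchanged.

class BaselineThreshold:
    """Performance measure fixed once from an initial observation period.

    `default` may be None, in which case the first update assigns a value for
    the first time; otherwise the default applies until the baseline period
    (for example shortly after installation or servicing) has been observed.
    """

    def __init__(self, default=None):
        self.value = default
        self._frozen = False

    def set_from_baseline(self, observed_durations, offset=0.5):
        if self._frozen:
            return                     # updated only once, then reused unchanged
        self.value = sum(observed_durations) / len(observed_durations) + offset
        self._frozen = True

# Hypothetical usage: a default of 3 s, replaced by a measured baseline.
baseline = BaselineThreshold(default=3.0)
baseline.set_from_baseline([1.1, 1.0, 1.2])
print(baseline.value)                  # approximately 1.6, used for all later checks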

Although in the embodiment of Figure 1, the monitoring unit 34 is a software component that is separate from the application 30, in alternative embodiments the application 30 may perform the function of the monitoring unit 34 and thus the monitoring unit 34 may be considered to form part of the application 30. In other embodiments, the monitoring unit 34 may comprise a plurality of separate software or hardware components.

The monitoring unit 34 is operable to transmit monitoring data to the user terminal network server for display or further processing. The terminal 2 forms part of a wider user terminal network as illustrated schematically in Figure 3, which shows three user terminals that are arranged to communicate with the user terminal network server 18. Only three user terminals 2 are shown in Figure 3 for clarity, but in practice a user terminal network may comprise many tens, hundreds or thousands of user terminals.

The network server 18 in this embodiment is a server of a user terminal network operator, responsible for installation and operation of the user terminals. The network server 18 is operable to transmit messages to, and receive messages from, the user terminals 2 via a secure network connection in accordance with known banking protocols.

The user terminal network operator may be a financial institution, for example a bank, and the user terminal network server may form part of a processing centre of the financial institution as shown in Figure 1. The messages sent between the user terminal and the server may relate to a particular transaction, and may comprise for example authorisation messages or messages comprising instructions to credit or debit an account in relation to a transaction conducted by a user using the user terminal. In addition, the server can send software installation or update messages that comprise software components for automatic installation at the user terminals, as will be described in more detail below. The user terminals are also able to send management information to the server, comprising for example data representing usage of the user terminal during a particular period, or fault data.

In the embodiment of Figure 3, monitoring data obtained by the user terminal network server 18 is extracted by a monitoring unit 36 at the server 18 and provided to a monitoring application 38 for output to a maintenance operator at an operator terminal 60, to assist the maintenance operator in determining which components of which user terminals may require attention, for example maintenance or replacement. The monitoring unit 36 and the monitoring application form a processing resource.

The monitoring application 38 is illustrated in more detail in Figure 4. In this embodiment, the monitoring application 38 is hosted by a computer at the site of a hardware maintenance company, and the application 38 is executable via operator terminals operating in a client/server relationship with the host computer. The monitoring application 38 in alternative embodiments may be installed on any suitable computer, or may be in the form of dedicated hardware or a mixture of hardware and software. In some embodiments, the monitoring application 38 is installed at the user terminal network server 18.

As shown in Figure 4, the monitoring application 38 includes a monitoring module 72 that is operable to perform analysis or other processing operations on received user terminal component monitoring data. The application 38 also includes an operator interface module 76 that is configured to provide a maintenance interface that presents to the operator information derived from the monitoring data. The application 38 also comprises or has access to received monitoring data 74 that is stored either locally or remotely. The monitoring application 38 includes a communication module 70 for managing communication with the server 18 and enabling receipt of the monitoring data, and for communicating with the operator terminal 40 via any suitable communication path.

In the embodiment of Figure 4, the monitoring application 38 receives monitoring data, for example flags or other performance status signals, from the monitoring units 34 of each of the user terminals 2 in the network, with the flags or other performance status signals indicating each instance when a particular component has not performed in compliance with a threshold or other performance measure. In alternative embodiments, the monitoring application 38 receives monitored data from the monitoring units and performs comparisons to the thresholds or other performance measures itself. Thus, the bulk of processing of the monitoring data can be performed either locally at the user terminals 2 or remotely by the monitoring application 38.

In some embodiments, the monitoring application 38 receives monitoring data, for example raw data, partially processed data, flags or other performance status signals, from the user terminals 2 and compares the monitoring data, either before or after further processing, to determine the components or user terminals that are most in need of maintenance or replacement. The monitoring application 38 is operable to perform further processing of the received data and to present the data to an operator in any desired format.

In one embodiment, the application 38 is operable to rank user terminals and components of those user terminals in order of likelihood that maintenance or replacement of the user terminal or component is required or will soon be required. The likelihood that maintenance or replacement will be required is determined by the application 38 using any suitable method, for example by determining the number or rate of instances of a component of a particular user terminal not matching the performance threshold or other performance measure.

For example, returning to the example of Figure 2, the application 38 is operable to rank the user terminals of the network according to the number of times, or the rate at which, the card reading device 10 does not read a card within the threshold card reading duration. The resulting rankings can be presented to the operator in any suitable format. It will be understood that the threshold reading duration for each component may be specific to that component, and may be determined based on actual performance of that component in situ in that user terminal during a preceding time period.
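
By way of illustration only, a ranking of that kind could be produced as follows; the terminal identifiers and counts are invented for the example.

def rank_by_slow_reads(slow_read_counts):
    """Rank terminals so that the most likely maintenance candidates come first.

    `slow_read_counts` maps a terminal ID to the number (or rate) of card
    reading operations that exceeded that terminal's own threshold duration.
    """
    return sorted(slow_read_counts.items(), key=lambda item: item[1], reverse=True)

# Hypothetical counts over some reporting period.
counts = {"Z1343": 37, "Y56434": 21, "LK1234": 19, "A0001": 2}
for terminal_id, slow_reads in rank_by_slow_reads(counts):
    print(terminal_id, slow_reads)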

In another mode of operation, the application 38 ranks each user terminal according to the percentage or proportion of the respective threshold time for that terminal represented by the measured current card reading time (or other time for performing an operation) for that terminal. Thus, a terminal for which the measured current time (for example current average time) for performing a card reading operation is 90% of the threshold time for that terminal will be ranked more highly and will be more likely to be subject to maintenance or component replacement than a terminal for which the measured current time (for example current average time) for performing a card reading operation is 60% of the threshold time for that terminal.
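
A simple illustrative sketch of ranking by proportion of each terminal's own threshold is shown below; the times, thresholds and identifiers are hypothetical.

def rank_by_threshold_proportion(current_times, thresholds):
    """Rank terminals by how close their current operation time is to their own
    threshold; the closest (worst performing) terminals appear first.

    Both arguments map terminal ID -> seconds.
    """
    proportions = {
        tid: current_times[tid] / thresholds[tid] for tid in current_times
    }
    return sorted(proportions.items(), key=lambda item: item[1], reverse=True)

current = {"Z1343": 1.8, "A0001": 1.2}
limits  = {"Z1343": 2.0, "A0001": 2.0}
print(rank_by_threshold_proportion(current, limits))
# e.g. [('Z1343', 0.9), ('A0001', 0.6)] -> Z1343 is the stronger maintenance candidate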

It can be understood that the threshold times for the different terminals may be different, and the ranking can represent a ranking of the relative amount of divergence from the expected performance level for the terminals in question, based upon previous actual performance levels obtained for those specific terminals.

In another mode of operation, all of the user terminals are listed in order of the number of times, or rate at which, card reading device 10 does not read a card within the threshold card reading duration.

In another mode of operation, the application 38 selects those user terminals 2 for which the number of times, or rate at which, card reading device 10 does not read a card within the threshold card reading duration, is above a predetermined threshold, and displays a list of those selected user terminals. In yet another mode of operation, the application 38 is configured to divide the card reader device results into bands, for example unacceptable, satisfactory and good depending on the time taken for card reading operations. The results can then be displayed in any suitable fashion to highlight the bands, for example by displaying results in different colours.

A display screen provided by the application 38 to an operator via operator terminal 60 is shown in Figure 5 by way of example. In this case, the application 38 determines the percentage of card reading operations that were not performed within threshold durations during a preceding time window, for example two weeks. The application 38 displays the results as a list, with the percentage of slow card reading operations displayed for each user terminal ID. The application 38 also divides the results into three bands, the first band having a percentage of slow reading operations above 50%, the second band having a percentage of slow reading operations between 20% and 50%, and the third band having a percentage of slow reading operations up to 20%. In this case, the operator would prioritise user terminal Z1343 for maintenance, and may also consider that user terminals Y56434 and LK1234 require maintenance either immediately or shortly.
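
The banding and listing described above could, purely by way of illustration, be produced as follows. The band edges follow the figures given above, while the association of the band names with those edges and the percentages shown are assumptions made for the example.

def band(slow_percentage):
    """Assign a result band from the percentage of slow card reading operations
    during the reporting window (band edges as in the Figure 5 example)."""
    if slow_percentage > 50:
        return "unacceptable"
    if slow_percentage > 20:
        return "satisfactory"
    return "good"

# Hypothetical figures for four terminals over a two-week window.
results = {"Z1343": 62.0, "Y56434": 34.5, "LK1234": 21.0, "A0001": 4.0}
for terminal_id, pct in sorted(results.items(), key=lambda item: item[1], reverse=True):
    print(f"{terminal_id}: {pct:.1f}% slow reads -> {band(pct)}")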

The operator is able to drill down and view raw data underlying the presented figures, and to view any suitable combination of processed data, for example using drop-down menus and clickable links.

In the example of Figure 5, the user terminals are ranked on a display provided by the operator interface based on the performance of the card reading device 10, and in particular based on the time taken for each card reading operation. The application 38 is also able to display combined rankings in which the user terminals are ranked according to a combined measure of performance of more than one aspect of a component, in this case the card reader device 10, or according to a combined measure of performance of more than one component of the user terminal.

For example, in the case of the card reader device 10, the user terminals can for example be ranked using a combined ranking based on the times taken for card reading operations, and on the number of error messages received from the card reader device 10.

In another case, the user terminals can be ranked by the application 38 using a combined ranking based on the performance of the card reader and the cash dispenser. In other embodiments, any suitable combination of components or monitored performances can be used to determine the combined ranking.
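
One illustrative way of forming such a combined ranking is a weighted sum of normalised per-component measures, as sketched below; the measure names, weights and values are hypothetical, and any other suitable combination could equally be used.

def combined_ranking(measures, weights):
    """Combine several normalised performance measures into one ranking.

    `measures` maps terminal ID -> dict of measure name -> value in [0, 1]
    (for example proportion of threshold for card read time, error rate for
    the cash dispenser); `weights` maps measure name -> relative weight.
    Higher combined scores indicate stronger maintenance candidates.
    """
    def score(values):
        return sum(weights[name] * values[name] for name in weights)

    return sorted(
        ((tid, score(vals)) for tid, vals in measures.items()),
        key=lambda item: item[1],
        reverse=True,
    )

# Hypothetical normalised measures for two terminals.
measures = {
    "Z1343": {"card_read": 0.9, "dispenser_errors": 0.4},
    "A0001": {"card_read": 0.6, "dispenser_errors": 0.1},
}
weights = {"card_read": 0.7, "dispenser_errors": 0.3}
print(combined_ranking(measures, weights))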

The operator is able to select the data that is displayed, for example rankings or combined rankings for any particular combination of user terminal components and/or user terminal component performance measures. The application 38 also enables the operator to drill down and view processed or unprocessed performance data for any particular user terminal or component.

The rankings can be used, either by an operator, or automatically, to select terminals or components that are to be subject to maintenance or replacement. For example, all terminals with performance of a component or components below a specified level may be selected for maintenance or component replacement. Alternatively, a target number of components or terminals may be determined, and then components or terminals may be selected for maintenance or component replacement, for example based upon the rankings or combined rankings, until the target number is reached.

By monitoring component performance it may be possible to select terminals or components for maintenance or component replacement before an actual failure occurs, thus improving efficiency and maintaining customer service. However, if too many terminals or components are selected for maintenance or replacement then the burden on the user terminal network operator can increase to an undesirable degree. Therefore, it can be important to select a suitable target number of components or terminals for replacement or maintenance.

In some embodiments, the target number of components or terminals for maintenance or component replacement may be selected based on an expected or historical failure rate, for example numbers of terminals or components that have failed in previous years or other time periods. By then selecting that target number of components or terminals for maintenance or component replacement based upon the rankings or other performance measure, it may be possible to reduce the number of failures that occur whilst not increasing the amount of maintenance or replacement activities unnecessarily.

As some failures of components or terminals may occur without a preceding drop off in monitored component performance, in some embodiments the target number may be set to be a half or some other proportion of the expected failure rate based on historical data. Thus, some failures will still be expected to occur, but the number of failures will be reduced without increasing the maintenance or replacement burden excessively.
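
As a purely arithmetical illustration of selecting such a target number, the sketch below takes a stated proportion of a historical failure count; the figures are invented.

def maintenance_target(historical_failures, proportion=0.5):
    """Number of terminals or components to select for pre-emptive maintenance.

    `historical_failures` is the number of failures in a comparable previous
    period; `proportion` (a half by default, per the text above) reflects that
    some failures occur without any preceding drop in monitored performance.
    """
    return round(historical_failures * proportion)

# Hypothetical example: 120 failures in the previous year -> target of 60.
print(maintenance_target(120))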

In one mode of operation, the application 38 is configured to display a map view to the operator that shows all user terminals 2 in a particular geographical area. The user terminals are represented by markers on the map view, and the markers can, for example, be colour coded or otherwise distinguished in dependence on a ranking of the user terminals based on performance measures of one or more of the components of the user terminals. For example, in the case of the data shown in Figure 5, the marker for user terminal Z1343 may be displayed on the map view in red indicating that immediate maintenance is required, the markers for user terminals Y56434 and LK1234 may be displayed on the map view in orange indicating that maintenance is required either now or shortly, and the markers for the other user terminals may be displayed in green indicating that no immediate maintenance is required. The use of a map view can be helpful to the operator in planning a maintenance schedule for a maintenance engineer, and enables the operator to schedule visits to user terminals that require maintenance in an efficient manner that reduces or minimises travel time for the maintenance engineer.

User terminals are also subject to regular visits by routine servicing personnel operators to replenish cash and/or paper supplies. Such personnel can, in some cases, also perform certain maintenance tasks in relation to certain hardware components of user terminals or replace some such hardware components, without requiring the presence of a dedicated hardware engineer. The operator can, if necessary, request that the personnel perform certain such maintenance or replacement tasks based on the data presented on the user terminal by the monitoring application. By replacing or maintaining components at an early stage based on the early indications of a problem that can be detected by the embodiment of Figures 1 and 4, it may be possible to resolve such problems without requiring the presence of a dedicated hardware engineer. It can also be possible to ensure maintenance or replacement of components at an optimum time, avoiding breakdown of user terminals whilst also ensuring that components are not replaced unnecessarily when they are functioning well.

The monitored performance that is compared to the performance measure may be the last monitored performance or in some embodiments may be the average (or other statistical measure) of monitored performance during a time period, for example a rolling time window determined with respect to the current time. In other embodiments, the monitored performance is the average (or other statistical measure) of the monitored performance for a selected and/or predetermined number of operations of the component in question, for example the last 5, 10, 20 or other number of operations.

Whilst features of embodiments described herein implement certain functionality by means of software, that functionality could equally be implemented solely in hardware (for example by means of one or more ASICs (application specific integrated circuits)) or indeed by a mix of hardware and software. The processing resource(s) described herein may be implemented as software, hardware or a mixture of software and hardware in various embodiments. Components of the processing resource(s) may be provided in a single location, for example as or within a single processor or server, or may be distributed across different locations.

It will be understood that the present invention has been described above purely by way of example, and modifications of detail can be made within the scope of the invention.

Each feature disclosed in the description, or drawings, may be provided independently or in any appropriate combination.