


Title:
A METHOD FOR ENABLING A FIRST ELECTRONIC DEVICE TO MONITOR A PROGRAM IN A DEVELOPMENT ENVIRONMENT AND RELATED FIRST ELECTRONIC DEVICE
Document Type and Number:
WIPO Patent Application WO/2024/042030
Kind Code:
A1
Abstract:
A method performed by a first electronic device is disclosed. The method comprises monitoring a program in a developer environment of the first electronic device. Monitoring the program comprises obtaining, from a second electronic device configured for performance monitoring, performance data indicative of one or more performance parameters associated with the monitored program. The method comprises providing, based on the performance data, an output.

Inventors:
KUMAR O SANDEEP (IN)
Application Number:
PCT/EP2023/072921
Publication Date:
February 29, 2024
Filing Date:
August 21, 2023
Assignee:
MAERSK AS (DK)
International Classes:
G06F11/34; G06F11/36
Foreign References:
US 10635566 B1 (2020-04-28)
US 11392844 B1 (2022-07-19)
Attorney, Agent or Firm:
AERA A/S (DK)
Claims:
CLAIMS

1. A method, performed by a first electronic device, the method comprising: monitoring (S104) a program in a developer environment of the first electronic device; wherein monitoring (S104) the program comprises obtaining (S104A), from a second electronic device configured for performance monitoring, performance data indicative of one or more performance parameters associated with the monitored program; and providing (S108), based on the performance data, an output.

2. The method according to claim 1, wherein monitoring (S104) the program comprises: monitoring (S104B) one or more operations of the program; wherein the performance data is indicative of one or more performance parameters associated with the one or more operations of the monitored program.

3. The method according to claim 2, wherein the method comprises generating (S106), based on the performance data, a risk score associated with the one or more operations, wherein the risk score is indicative of a risk of impacting a performance parameter of the one or more operations upon a change in the one or more operations.

4. The method according to any of claims 2-3, wherein the output comprises the risk score associated with the one or more operations.

5. The method according to any of the previous claims, wherein obtaining (S104A) the performance data from the second electronic device comprises generating (S104AA) the performance data by querying a repository.

6. The method according to claim 5, wherein the repository comprises one or more of: a library of tools for deriving one or more performance parameters, one or more historical performance parameters, one or more learned performance parameters, and a repository risk score associated with a corresponding operation.

7. The method according to claims 3 and 6, wherein the one or more learned performance parameters are based on the one or more historical performance parameters and/or the repository risk score.

8. The method of any of claims 6-7, wherein generating (S106) the risk score comprises generating (S106A) the risk score associated with the one or more operations, based on the one or more historical performance parameters and/or the one or more learned performance parameters.

9. The method according to any of the previous claims, the method comprising: performing (S102) a handshake between the first and the second electronic device.

10. The method according to any of the previous claims, wherein the first electronic device comprises a display device; wherein providing (S108) the output comprises displaying (S108A), on the display device, a user interface object representative of the output.

11. The method according to any of claims 3 and 10, wherein the user interface object indicates, to a user, a recommendation based on the risk score upon the change in the one or more operations.

12. The method according to any of the previous claims, wherein providing (S108) the output comprises transmitting (S108B) the output to an external device.

13. The method according to any of the previous claims, wherein the one or more performance parameters comprise one or more of: a response time, a throughput, and a resource utilisation parameter.

14. The method according to any of the previous claims, wherein the performance data comprise the one or more performance parameters.

15. The method according to any of the previous claims, wherein the performance data comprise real time performance data.

16. An electronic device comprising a memory, a processor, and an interface, wherein the electronic device is configured to perform any of the methods according to claims 1-15.

17. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device cause the electronic device to perform any of the methods of claims 1-15.

Description:
A METHOD FOR ENABLING A FIRST ELECTRONIC DEVICE TO MONITOR A PROGRAM IN A DEVELOPMENT ENVIRONMENT AND RELATED FIRST ELECTRONIC DEVICE

The present disclosure pertains to the field of information technology. The present disclosure relates to a method for enabling a first electronic device to monitor a program in a development environment, and a related first electronic device.

BACKGROUND

The success of a software product depends on its reliability (e.g., functionally behaving as expected, being fault tolerant, performant, and/or scalable). For example, software testing plays an important role in both achieving and evaluating software (e.g., code) capable of delivering a high-quality and performant software product. However, the aspects tested are mostly static.

SUMMARY

A Continuous Integration and Continuous Delivery/Deployment (CI/CD) pipeline can overcome some of the limitations of manual testing (e.g., manual testing can be time-consuming and more prone to errors since it is performed by humans). In other words, a CI/CD pipeline may provide a developer with feedback associated with, for example, quality issues upon a change in the code used to develop the software-based product. However, the feedback falls short of informing the developer about the performance impact of making such a change in the code.

There is a need for real-time monitoring and improvement proposals to a developer, which can convey the real-time performance impact of a change in the code.

Accordingly, there is a need for a method and an electronic device which may mitigate, alleviate, or address the existing shortcomings and may provide for real-time code performance recommendations.

Disclosed is a method performed by a first electronic device. The method comprises monitoring a program in a developer environment of the first electronic device. Monitoring the program comprises obtaining, from a second electronic device configured for performance monitoring, performance data. The performance data is indicative of one or more performance parameters associated with the monitored program. The method comprises providing, based on the performance data, an output. Further, a first electronic device is disclosed. The first electronic device comprises a memory, a processor, and an interface. The first electronic device is configured to perform any of the methods disclosed herein.

Disclosed is a computer readable storage medium storing one or more programs, the one or more programs comprising instructions which when executed by an electronic device cause the electronic device to perform any of the methods disclosed herein.

It is an advantage of the present disclosure that the disclosed method and the disclosed first electronic device enable a reduction in performance testing effort since performance-based suggestions (e.g., insights) can be provided to assist a developer in identifying and fixing performance issues immediately. The disclosed method and the disclosed first electronic device may also inform a developer about the impact of making a change in the program (e.g., a change in the code associated with the program). Put differently, the disclosed first electronic device may be capable of processing real-time performance parameters for providing relevant suggestions to a developer upon the change in the program. In other words, the suggestions may inform the developer about how the change in the program can impact the performance of the overall program (e.g., quantitatively). The disclosed method and the disclosed first electronic device may lead to a more reliable and high-quality product provided in a faster manner.

Advantageously, the disclosed method and the disclosed first electronic device provide for earlier monitoring of performance issues. Put differently, the disclosed method and the disclosed first electronic device provide for identifying and eliminating performance bottlenecks (e.g., performance issues) in the program early on during the programming and product development process. The disclosed method and the disclosed first electronic device may lead to a reduction in the amount of time required to run performance tests, while avoiding fixing defects in a post-release product phase (e.g., going into production). Put differently, the disclosed method may be cost effective (e.g., leads to money and time savings).

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features and advantages of the present disclosure will become readily apparent to those skilled in the art by the following detailed description of examples thereof with reference to the attached drawings, in which: Figs. 1A-1B are diagrams illustrating an example first electronic device and an example second electronic device according to this disclosure,

Fig. 2 is a diagram illustrating an example representation of a program comprising one or more operations upon a change in the one or more operations according to this disclosure,

Fig. 3 is a flow-chart illustrating an exemplary method, performed by a first electronic device, for executing a program in a development environment according to this disclosure, and

Fig. 4 is a block diagram illustrating an exemplary first electronic device according to this disclosure.

DETAILED DESCRIPTION

Various exemplary embodiments and details are described hereinafter, with reference to the figures when relevant. It should be noted that the figures may or may not be drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the disclosure or as a limitation on the scope of the disclosure. In addition, an illustrated embodiment need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated, or if not so explicitly described.

The figures are schematic and simplified for clarity, and they merely show details which aid understanding the disclosure, while other details have been left out. Throughout, the same reference numerals are used for identical or corresponding parts.

Even though integrated development environments, IDEs, can provide “coding recommendations” to developers, e.g., for speeding up the coding process (e.g., by suggesting a proper data structure to use) and/or for code completion and/or for checking for coding errors, such recommendations are mostly static in nature.

Decisions such as which performance tests should be performed before releasing a software-based product and last-minute decisions (e.g., “Go-NoGo” decisions) may still have a significant impact on the quality and performance of the software-based product, since such decisions may not be taken in an adequate time frame. Detecting performance issues (e.g., bugs, flaws, defects) can be part of a release certification. However, detecting performance issues early on (e.g., during the development phase of a software-based product) in conjunction with an impact of performing a change in the code can contribute to the development and release of a robust and almost error-free version of the software-based product.

The present disclosure provides techniques for real-time code performance recommendations.

Figs. 1A-1B are diagrams illustrating an example first electronic device 300 and an example second electronic device 400 according to this disclosure.

Fig. 1A shows a diagram illustrating an example system 1 where the disclosed method is carried out by an example first electronic device 300 according to this disclosure.

The first electronic device 300 has a developer environment that is capable of processing a program. The program can be seen as a program under test or under development. The program may be monitored in the developer environment of the first electronic device 300, based on performance data obtained from a second electronic device 400. In other words, the first electronic device 300 may be seen as a developer electronic device configured to run a developer environment, such as an integrated developer environment.

Fig. 1A shows a second electronic device 400 configured for performance monitoring, e.g. to provide performance data to the first electronic device 300. The second electronic device 400 is configured to communicate with the first electronic device 300, e.g. via a wired and/or wireless connection. The second electronic device 400 can be seen as an Application Performance Management (APM) device and/or an Application Performance Monitoring (APM) device.

The first electronic device 300 is configured to obtain, from the second electronic device 400, the performance data associated with a program provided in the developer environment. The performance data is indicative of one or more performance parameters associated with the monitored program.

The program has been run or executed at least once so that the performance data is available at the second electronic device 400. In one or more examples, the first electronic device 300 comprises a plugin (e.g., an add-on and/or an extension). In other words, the method disclosed herein can be implemented as a plugin to the developer environment. For example, the plugin is configured to obtain the performance data from the second electronic device 400.

The second electronic device 400 may store and/or retrieve the performance data in a first repository 500 (e.g., a data repository, such as a SQL database) via connection 10. The first electronic device 300 may obtain the performance data from the second electronic device 400 by querying the performance data from the first repository 500.
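The querying described above can be illustrated as follows. This is a minimal sketch only; the disclosure does not specify the repository schema, so the table and column names used here (`performance_data`, `response_time_ms`, etc.) are hypothetical, with SQLite standing in for the SQL database mentioned above.

```python
import sqlite3  # stand-in for the SQL database of the first repository 500

# Hypothetical schema: one row of performance data per monitored operation.
SCHEMA = (
    "CREATE TABLE IF NOT EXISTS performance_data ("
    "program_id TEXT, operation TEXT, "
    "response_time_ms REAL, throughput REAL, cpu_usage REAL)"
)

def fetch_performance_data(conn, program_id):
    """Return the stored performance parameters for each operation of a program."""
    cur = conn.execute(
        "SELECT operation, response_time_ms, throughput, cpu_usage "
        "FROM performance_data WHERE program_id = ?",
        (program_id,),
    )
    return [
        {"operation": op, "response_time_ms": rt, "throughput": tp, "cpu_usage": cpu}
        for op, rt, tp, cpu in cur.fetchall()
    ]
```

In this sketch, the plugin of the first electronic device 300 would call `fetch_performance_data` against the repository populated by the second electronic device 400.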

A developer uses the developer environment to develop, edit, change, and/or write one or more programs. A program may be seen as a computer program, such as a computer program stored on a computer readable storage medium. A program includes one or more operations. For example, the one or more operations include one or more instructions and/or one or more routines. For example, an operation can be seen as a piece of a computer program, such as a piece of a computer program code. For example, an operation can be a computer program code.

A change in the one or more operations of the program (e.g., a change performed by a user) may impact the performance of the program. However, the impact of the change is usually not obtained immediately, but only after some testing. The present disclosure allows an output based on the performance data from the second electronic device 400 to be obtained immediately.

In one or more examples, the performance of the program can be analysed, by the second electronic device 400 and/or by the first electronic device 300 based on the performance data obtained.

For example, upon a change in one or more operations of a program, the first electronic device 300, e.g. via the plugin thereof, can score (e.g., quantify) the change based on the performance data obtained from the second electronic device 400. The first electronic device 300, e.g. via the plugin thereof, can quantify the impact of the change based on the performance data, and provide the output based on the quantified impact. In other words, the first electronic device 300 (e.g., comprising the plugin) can, for example, determine a risk score associated with the one or more operations that indicates the risk of impacting the performance upon a change in the one or more operations. For example, the first electronic device 300 can obtain the performance data from the second electronic device 400 periodically and/or continuously and/or when necessary (e.g., ad hoc). For example, when the program is executed a first time, the second electronic device 400 identifies the performance data associated with the one or more operations and stores the performance data in the first repository 500. The risk score can be, for example, updated whenever (e.g., periodically, and/or every time) the program is being edited and/or monitored.

In one or more examples, the performance data from the second electronic device 400 is obtained by the first electronic device 300 via the first repository 500, as illustrated by arrow 14. The first repository 500 is populated by the second electronic device 400 with performance data. Put differently, the first electronic device 300 may communicate with the first repository 500, with the second repository 520 being connected to the first repository 500.

In one or more examples, the performance data comprises one or more performance parameters. In one or more examples, the one or more performance parameters can be seen as one or more key performance indicators, KPIs (e.g., resource performance indicators, such as hardware performance indicators, and/or software performance indicators, such as response time and/or throughput and/or central processing unit, CPU, usage and/or memory usage).

For example, the first repository 500 can comprise one or more historical performance parameters and/or a library of functions and/or tools for deriving one or more performance parameters. For example, the second repository 520 can comprise one or more learned performance parameters. For example, there can be communication and/or exchange of the one or more performance parameters between the first repository 500 and the second repository 520, as illustrated by 12.

For example, the library of tools for deriving the one or more performance parameters comprise one or more equations and/or one or more formulas to be calculated (e.g., equations and/or formulas for deriving the one or more performance parameters, and/or equations and/or formulas used in previous and/or future executions of the program, and/or equations and/or formulas for deriving a corresponding risk score).

For example, the one or more historical performance parameters are one or more performance parameters obtained by the first electronic device 300 from the second electronic device 400 in previous monitoring cycles (e.g., stages) for corresponding operation(s) in the same program or other programs. In other words, the one or more historical performance parameters can be seen, for example, as one or more performance parameters obtained from the first repository 500.

For example, the one or more learned performance parameters are one or more machine learning, ML, based performance parameters (e.g., one or more performance parameters generated using ML techniques and/or ML models). In one or more examples, machine learning can be seen as identifying characteristics in existing performance data (e.g., the one or more historical parameters and/or the one or more equations and/or one or more formulas comprised in the library of tools) to facilitate making predictions and/or classifications for subsequent performance data, such as the one or more learned performance parameters. For example, the communication and/or exchange of the one or more performance parameters between the first repository 500 and the second repository 520, as illustrated by 12, can be particularly useful for the ML techniques (e.g., ML models). For example, the one or more learned performance parameters are generated based on the one or more historical parameters and/or the one or more equations and/or one or more formulas comprised in the library of tools.
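As a concrete illustration of deriving a learned performance parameter from historical ones: the disclosure does not fix a particular ML technique, so the exponential smoothing below is only one simple, illustrative choice of estimator, and the function name is hypothetical.

```python
def learned_parameter(historical, alpha=0.3):
    """Derive a 'learned' estimate of a performance parameter from its
    historical values via exponential smoothing.

    `historical` is a list of past values of one performance parameter
    (e.g., response times in ms from previous monitoring cycles);
    `alpha` weights recent samples more heavily.
    """
    if not historical:
        raise ValueError("need at least one historical sample")
    estimate = historical[0]
    for sample in historical[1:]:
        # Blend each newer sample into the running estimate.
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate
```

The resulting estimate could be stored in the second repository 520 and consulted alongside the raw historical parameters when generating the risk score.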

Fig. 1B shows a diagram illustrating an example process 2 where the disclosed method is carried out by an example first electronic device 300 according to this disclosure.

A program may be monitored in a developer environment of a first electronic device 300, based on performance data obtained from a second electronic device 400. For example, the first electronic device 300 is configured to obtain the performance data from the second electronic device 400, with the second electronic device 400 being configured for performance monitoring. In one or more examples, the first electronic device 300 comprises a plugin. In other words, the plugin is configured to, for example, obtain the performance data from the second electronic device 400. The second electronic device 400 may store the performance data in a repository 580 (e.g., a data repository, such as a SQL database) as illustrated by arrow 16. The first electronic device 300 may obtain the performance data from the second electronic device 400 by querying the performance data from the repository 580.

In one or more examples, the performance of the program can be analysed based on the performance data obtained from the second electronic device 400. For example, upon a change in one or more operations of a program, the plugin can score (e.g., quantify) the change based on the performance data obtained from the second electronic device 400.

For example, the first electronic device 300 can obtain the performance data from the second electronic device 400 periodically and/or continuously and/or when necessary (e.g., ad hoc).

In one or more examples, obtaining the performance data from the second electronic device 400 includes obtaining the performance data from the repository 580, as illustrated by arrow 20. Put differently, the first electronic device 300 may communicate with the repository 580 (e.g., extract the one or more performance parameters).

In some examples, the repository 580 includes the first repository 500 (e.g., similar to first repository 500 of Fig. 1A) and the second repository 520 (e.g., similar to second repository 520 of Fig. 1A). For example, the repository 580 comprises one or more historical performance parameters and/or one or more learned performance parameters and/or a library of tools for deriving one or more performance parameters. For example, the first repository 500 comprises one or more historical performance parameters and/or a library of tools for deriving one or more performance parameters. For example, the second repository 520 comprises one or more learned performance parameters. For example, there can be communication and/or exchange of the one or more performance parameters between the first repository 500 and the second repository 520, as illustrated by arrow 18.

For example, the library of tools for deriving the one or more performance parameters comprise one or more equations and/or one or more formulas to be calculated. For example, the one or more historical performance parameters are one or more performance parameters obtained by the first electronic device from the second electronic device in previous monitoring cycles. For example, the one or more learned performance parameters are one or more machine learning, ML, based performance parameters. For example, ML engine 320 (e.g., ML technique and/or ML model) generates the one or more learned performance parameters taking as input the one or more historical performance parameters and/or the library of tools comprised in the first repository 500, as illustrated by arrow 22. The first electronic device 300 comprises the ML engine 320. For example, the plugin comprises the ML engine 320. In some examples, the one or more learned performance parameters are stored in the second repository 520 comprised in repository 580, as illustrated by arrow 22.

Fig. 2 shows a diagram illustrating an example representation of a program 10 comprising one or more operations upon a change in the one or more operations according to this disclosure. The program 10 comprises, for example, operations a(), b(), c() and d(). A user intends to make a change in the operation illustrated by 12.

In one or more examples, a first electronic device can obtain from the second electronic device a response time parameter as a performance parameter (e.g., performance metric). For example, the response time parameter can include an overall response time associated with the program and/or response times associated with the operations of the program 10. For example, the program being monitored (e.g., program 10) comprises four operations, e.g., operation a(), operation b(), operation c() and operation d(). For example, the response time to execute the program 10 (e.g., overall response time) is 200 milliseconds, in which 120 milliseconds are spent on operation d(), 30 milliseconds on operation b(), 40 milliseconds on operation c() and 10 milliseconds on operation a(). In one or more examples, the first electronic device (e.g., the plugin) can generate, based on the response time parameter (e.g., metric) obtained from the second electronic device, a risk score associated with each operation of the operations a(), b(), c() and d(). For example, the risk score may be higher for an operation associated with a higher response time parameter. For example, the plugin scores operation d() with 6 (e.g., (120/200) × 10), the operation b() with 1.5, the operation c() with 2 and the operation a() with 0.5. The risk score comprises, for example, the risk scores associated with the operations a(), b(), c() and d().
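The worked example above can be sketched in a few lines. This is only an illustration of the scoring rule described in the text (an operation's share of the overall response time, scaled to 10); the function name is hypothetical.

```python
def risk_scores(response_times_ms, scale=10):
    """Score each operation by its share of the overall response time.

    Following the example above, operation d() with 120 ms out of
    200 ms total scores (120/200) * 10 = 6.
    """
    total = sum(response_times_ms.values())
    return {op: rt / total * scale for op, rt in response_times_ms.items()}

# Response times of program 10's operations, in milliseconds.
times = {"a()": 10, "b()": 30, "c()": 40, "d()": 120}
scores = risk_scores(times)  # a() -> 0.5, b() -> 1.5, c() -> 2.0, d() -> 6.0
```

A higher score flags an operation whose change carries a higher risk of impacting the overall performance of the program.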

For example, the first electronic device can determine the risk score associated with respective operations a(), b(), c() and d() upon a change in respective operations a(), b(), c() and d(). In other words, the first electronic device can, for example, provide a user with the risk score associated with the respective operations a(), b(), c() and d() when the user performs a change in the respective operations a(), b(), c() and d(). In one or more examples, the risk score comprises the risk scores associated with operations a(), b(), c() and d() (e.g., 6 for operation d(), 1.5 for operation b(), 2 for operation c() and 0.5 for operation a()) generated, based on the response time parameter obtained from the second electronic device, by the first electronic device (e.g., the plugin). In one or more examples, the first electronic device provides a recommendation 14 (e.g., a performance recommendation) as an output to an external device. In some examples, the recommendation comprises the risk score associated with operation d (e.g., operation 12), such as the operation being changed by the user. The user interface object 14 representative of the recommendation (e.g. output disclosed herein) may inform the user (e.g., a developer) about how the change, which is performed by the user, in operation d (e.g., operation 12) can impact (e.g., affect) the performance (e.g., overall performance) of the program 10. In other words, the user interface object 14 may inform the user about how the response time parameter is impacted by the change in operation d (e.g., operation 12). For example, operation d (e.g., operation 12) is a high rating operation (e.g., has the highest score when compared with operations a(), b() and c()), implying that a change in the corresponding operation may lead to a decrease in the overall performance of the program. The first electronic device (e.g., the plugin) can show the user interface object 14 using e.g., a popup and/or a notification.

Fig. 3 shows a flow-chart of an exemplary method 100, performed by a first electronic device according to the disclosure. The first electronic device is the first electronic device disclosed herein, such as the first electronic device 300 of Figs. 1A-1B and Fig. 4. The method 100 may be performed for monitoring performance of a program, such as for enabling a first electronic device to monitor a program in a development environment.

The method 100 comprises monitoring S104 a program in a developer environment of the first electronic device. Monitoring S104 the program comprises obtaining S104A, from a second electronic device configured for performance monitoring, performance data. In other words, the first electronic device monitors the program by obtaining (e.g., receiving and/or retrieving) performance data from the second electronic device for performance monitoring (e.g., device 400 of Figs. 1A-1B). The performance data is indicative of one or more performance parameters associated with the monitored program. The method 100 comprises providing S108, based on the performance data, an output. The output is, for example, for assisting a programmer in editing and/or developing the program. The output can include a parameter for assisting a programmer in editing and/or developing the program. In one or more examples, providing, based on the performance data, an output can be seen as providing, by the first electronic device, a recommendation and/or suggestion associated with a change performed in the monitored program as illustrated in Fig. 2. For example, the first electronic device comprises a developer environment. For example, the developer environment can be seen as an integrated development environment, IDE. For example, the IDE can be seen as a development environment that provides tools to developers (e.g., software developers) for software development and testing. An IDE can be seen as a software application.
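The steps of method 100 can be sketched as follows. The callables passed in stand for the two devices, whose interfaces the disclosure does not specify, so all names here are hypothetical; the comments map each line to the steps S104A and S108.

```python
def monitor_program(program_id, obtain_performance_data, provide_output):
    """Sketch of method 100.

    S104:  monitor the program in the developer environment, which
    S104A: comprises obtaining performance data from the second
           electronic device (e.g., an APM tool), and
    S108:  provide an output based on that performance data.
    """
    performance_data = obtain_performance_data(program_id)            # S104A
    output = {"program": program_id, "parameters": performance_data}  # derive output
    provide_output(output)                                            # S108
    return output
```

In an implementation along the lines of Fig. 1A, `obtain_performance_data` would query the repository populated by the second electronic device 400, and `provide_output` would display a user interface object or transmit the output to an external device.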

In one or more examples, the program can be seen as a set of instructions and/or a set of operations written in a programming language to perform a task by a computing system, such as the first electronic device. In one or more examples, the programming language can be one or more of: Java, JavaScript, C++, C, PHP, Python, Ruby, and any other suitable programming language. In one or more examples, the program is developed (e.g., written) in the developer environment (e.g., IDE). In other words, the program is the program under development and/or under test.

The second electronic device can include, for example, an application performance monitoring, APM, tool or an APM device. In one or more examples, the APM tool can be seen as a monitoring and analytics tool that can be used to determine (e.g., to track) performance parameters and monitor, based on the performance parameters, the performance of programs, such as of software applications. In other words, the second electronic device is configured for performance monitoring. An APM tool can be seen as part of a monitoring service.

In one or more examples, monitoring the program in the developer environment of the first electronic device comprises obtaining the performance data from the second electronic device. In other words, monitoring the program in the developer environment of the first electronic device can, for example, be seen as retrieving the performance data (e.g., the one or more performance parameters) at the first electronic device from the second electronic device. Stated differently, the first electronic device may periodically synchronize the performance data with the second electronic device to keep up to date on the performance parameters associated with corresponding operations of one or more programs (e.g. under monitoring, and/or previously monitored). This can be particularly advantageous for a more efficient software development since the disclosed method enables, for example, the integration of performance data obtained from the second electronic device (e.g., APM tool) into the first electronic device (e.g., IDE), with the first electronic device being where the program is developed.

In one or more example methods, monitoring S104 the program comprises monitoring S104B one or more operations of the program. In one or more example methods, the performance data is indicative of one or more performance parameters associated with the one or more operations of the monitored program. For example, the program can comprise one or more operations. The one or more operations are, for example, one or more routines, one or more instructions, one or more functions, one or more procedures, one or more methods, and/or one or more subprograms, which can be executed anywhere in the program. An operation can be, for example, a sequence of code, intended for the execution of a program. The program comprising the one or more operations may be executed (e.g., run) in an API.

In one or more examples, the one or more performance parameters can be associated with corresponding one or more operations of the monitored program. In other words, the monitored program can, for example, be associated with the one or more performance parameters. Each operation of the one or more operations is, for example, associated with one or more corresponding performance parameters. For example, a first operation may be associated with one performance parameter, such as response time, while a second operation may be associated with a resource utilisation parameter and throughput.
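
The association just described, where each operation carries its own (possibly differing) set of performance parameters, can be sketched as a simple mapping. The operation names and values are hypothetical.

```python
# Hypothetical mapping: each operation is associated with its own set of
# performance parameters; different operations may have different parameters.
performance_data = {
    "first_operation": {"response_time_ms": 120},            # one parameter
    "second_operation": {"resource_utilisation_pct": 35,     # two parameters
                         "throughput_rps": 210},
}

# Iterate over the operations and their associated parameters.
for operation, params in performance_data.items():
    print(operation, sorted(params))
```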

In one or more examples, the performance data indicates the one or more performance parameters associated with the monitored program. Put differently, the performance data may comprise, for example, the one or more performance parameters.

In one or more examples, the program is executed at least once before the first electronic device obtains the performance data from the second electronic device. For example, when the program is executed a first time, the second electronic device identifies the performance data associated with the program and stores the performance data in a repository (e.g., a data repository). More specifically, when the program is executed a first time, the second electronic device determines the performance data associated with the one or more operations of the program and stores the corresponding performance data associated with the respective operations in a repository (e.g., a data repository).

In one or more example methods, the one or more performance parameters comprise one or more of: a response time, a throughput, and a resource utilisation parameter. In one or more examples, the one or more performance parameters can be seen as one or more performance metrics and/or one or more key performance indicators, KPIs. The response time is, for example, the time for a first system node (e.g., an application server and/or an application programming interface, API) to respond to a request from a second system node (e.g., a client and/or a user). The throughput can be, for example, the number of tasks (e.g., tasks associated with the request from the second system node) processed per time unit by the first system node (e.g., an application server and/or an API). The resource utilisation parameter can be seen as a parameter indicating how much a resource (e.g., a hardware resource and/or a software resource) is utilized by an operation of a program. The resource utilisation parameter can include a hardware resource utilization parameter and/or a software resource utilization parameter. In one or more examples, the hardware resources comprise central processing unit, CPU, utilisation, memory utilisation, hardware accelerators and/or hardware clocks. The CPU utilisation (e.g., CPU usage) can be, for example, a percentage of time the CPU spends in handling an operation and/or a task (e.g., a task associated with the request from the second system node). The memory utilisation (e.g., memory usage) can be, for example, an amount of memory required to process the request.
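
As a worked sketch of the parameters just described, throughput and CPU utilisation could be derived from raw measurements as follows. All numbers are made up for illustration and are not taken from the disclosure.

```python
# Illustrative derivations of two of the performance parameters named above.

def throughput(tasks_processed, elapsed_seconds):
    # Number of tasks processed per time unit by the first system node.
    return tasks_processed / elapsed_seconds

def cpu_utilisation(busy_seconds, wall_seconds):
    # Percentage of time the CPU spends handling an operation/task.
    return 100.0 * busy_seconds / wall_seconds

print(throughput(300, 60))      # 5.0 tasks per second
print(cpu_utilisation(12, 60))  # 20.0 percent
```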

In one or more example methods, the method 100 comprises generating S106, based on the performance data, a risk score associated with the one or more operations. In one or more example methods, the risk score is indicative of a risk of impacting a performance parameter of the one or more operations upon a change in the one or more operations. For example, the first electronic device is configured to obtain the performance data (e.g., comprising one or more performance parameters) from the second electronic device. For example, the first electronic device (e.g., an IDE) includes a plugin configured to perform the method 100. The plugin can, for example, obtain (e.g., fetch) the performance data from the second electronic device (e.g., an APM tool). For example, upon the change in the one or more operations comprising the program, the plugin can score (e.g., quantify) the change based on the performance data obtained from the second electronic device. In other words, upon the first electronic device obtaining the one or more performance parameters from the second electronic device, the first electronic device (e.g., comprising the plugin) can, for example, determine the risk score (e.g., a risk index) associated with the one or more operations. Each operation of the one or more operations can have, for example, a risk score. In some examples, the risk score can be calculated by scoring, based on the one or more performance parameters obtained from the second electronic device, each operation. The risk score can, for example, comprise a risk score for each operation of the one or more operations and/or for each performance parameter of the one or more performance parameters. 
The risk score indicates, for example, a risk that can result from making a change in the one or more operations based on the score (e.g., one or more weights or coefficients, and/or “weightage”) generated based on the one or more performance parameters obtained from the second electronic device by the first electronic device. In some examples, the risk score can be calculated as explained in Fig. 2.
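
The disclosure does not fix a particular scoring formula; one possible sketch of a weight-based ("weightage") risk score per operation is shown below. The weights, baselines, parameter names, and values are all hypothetical assumptions, not the disclosed method.

```python
# Hypothetical risk scoring: combine performance parameters, normalised
# against baselines, with weights ("weightage") to quantify the risk of
# changing an operation. Weights and baselines are made up.

WEIGHTS = {"response_time_ms": 0.5, "throughput_rps": 0.3, "cpu_pct": 0.2}
BASELINES = {"response_time_ms": 200.0, "throughput_rps": 100.0, "cpu_pct": 80.0}

def risk_score(params):
    # Score each parameter relative to its baseline, then weight and sum.
    score = 0.0
    for name, value in params.items():
        score += WEIGHTS[name] * (value / BASELINES[name])
    return score

# One risk score per operation, as described above.
scores = {
    "fetch_orders": risk_score(
        {"response_time_ms": 120, "throughput_rps": 85, "cpu_pct": 40}),
    "save_order": risk_score(
        {"response_time_ms": 45, "throughput_rps": 210, "cpu_pct": 10}),
}
print(scores)
```

The choice of normalisation and weights would, in practice, be tuned per program; this sketch only illustrates that each operation receives its own score derived from its performance parameters.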

In one or more examples, the risk score can be updated whenever (e.g., every time) the program is monitored. For example, the program is monitored by the first electronic device when necessary (e.g., ad hoc) and/or periodically and/or continuously. For example, monitoring the program can be seen as a periodic, continuous, and/or as-needed synching of the performance data. In other words, monitoring the program can, for example, be seen as a periodic, continuous, and/or as-needed synching of performance data between the first electronic device and the second electronic device.

In one or more examples, the synchronizing of performance data between the first electronic device and the second electronic device comprises obtaining, from the second electronic device, performance data in a periodic manner. For example, the periodic synching of performance data between the first electronic device and the second electronic device can be performed from time to time (e.g., periodic synchronisation). For example, the synching of performance data between the first electronic device and the second electronic device can be performed whenever there is a change in the one or more operations (e.g., ad hoc synchronisation). For example, the synching of performance data between the first electronic device and the second electronic device can be performed continuously (e.g., permanently).

In one or more example methods, the output comprises the risk score associated with the one or more operations. For example, upon a change in one or more operations, the first electronic device (e.g., the plugin) scores the change based on the performance data obtained from the second electronic device and provides the risk score. For example, upon scoring the change (e.g., generating the risk score), the first electronic device provides performance recommendations and/or suggestions as the output to an external device, as illustrated in Fig. 2. In some examples, the performance recommendations and/or suggestions comprise the risk score associated with the one or more operations and/or the one or more performance parameters indicated by the performance data.

In one or more example methods, obtaining S104A the performance data from the second electronic device comprises generating S104AA the performance data by querying a repository. For example, the first electronic device (e.g., the IDE) is capable of querying a repository (e.g., a database and/or a data warehouse and/or a SQL database and/or SQL data warehouse) to retrieve performance data (e.g., requesting for performance data and/or collecting performance data), optionally via the second electronic device. In one or more examples, the second electronic device (e.g., the APM tool) stores the performance data in the repository. In one or more examples, the second electronic device provides the first electronic device with the performance data.
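
Querying a repository to generate the performance data, as described above, could be sketched with an in-memory SQL database. The schema, table name, and values are hypothetical assumptions for illustration only.

```python
import sqlite3

# Hypothetical repository schema: per-operation performance parameters
# stored by the second electronic device (APM tool) in a table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE performance (operation TEXT, parameter TEXT, value REAL)"
)
conn.executemany(
    "INSERT INTO performance VALUES (?, ?, ?)",
    [
        ("fetch_orders", "response_time_ms", 120.0),
        ("fetch_orders", "throughput_rps", 85.0),
        ("save_order", "response_time_ms", 45.0),
    ],
)

def query_performance(operation):
    # Generate performance data by querying the repository (cf. S104AA).
    rows = conn.execute(
        "SELECT parameter, value FROM performance WHERE operation = ?",
        (operation,),
    ).fetchall()
    return dict(rows)

print(query_performance("fetch_orders"))
```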

In one or more example methods, the repository comprises one or more of: a library of tools for deriving one or more performance parameters, one or more historical performance parameters, one or more learned performance parameters, and a repository risk score associated with a corresponding operation. For example, the repository can be a first repository for storing the one or more historical performance parameters, such as first repository 500 of Fig. 1A. In some examples, there is a second repository for storing learned performance parameters, such as second repository 520 of Fig. 1A. In some examples, the repository includes the first repository and the second repository (e.g., the one or more historical performance parameters and the one or more learned performance parameters), such as repository 580 including first repository 500 and second repository 520 of Fig. 1B. For example, there can be communication and/or exchange of performance data between the first repository and the second repository.

In some examples, the first electronic device is configured to communicate with the repository (e.g., the first repository and/or the second repository). In some examples, the second repository is not connected to the second electronic device; however, the second repository is connected to the first repository, which in turn is connected to the second electronic device.

For example, the library of tools for deriving one or more performance parameters comprises one or more equations and/or one or more formulas to be calculated (e.g., equations and/or formulas used in previous and/or future executions of the program, with the equations and/or formulas having, for example, a corresponding risk index).

For example, the one or more historical performance parameters are one or more performance parameters obtained by the first electronic device from the second electronic device in previous monitoring cycles (e.g., stages).

For example, the repository risk score comprises the risk score generated for a corresponding operation. The repository risk score can, for example, comprise the risk score associated with the one or more historical parameters and/or the one or more learned parameters and/or the one or more equations and/or one or more formulas comprised in the library of tools (e.g., generated in previous monitoring cycles) for a corresponding operation. For example, the repository risk score includes one or more risk scores associated with a corresponding operation and/or a corresponding performance parameter.

In one or more example methods, the one or more learned performance parameters are based on the one or more historical performance parameters and/or the repository risk score. The second electronic device for example determines the one or more learned performance parameters based on the one or more historical performance parameters and/or the repository risk score. The repository risk score is for example the risk score stored at the repository for a given operation. For example, the one or more learned performance parameters are one or more machine learning, ML, based performance parameters, such as one or more performance parameters generated using ML techniques. The ML based techniques, in some examples, take as input the one or more historical performance parameters and/or the repository risk score to provide as the output the one or more learned performance parameters. The ML based techniques, in some examples, take as input the one or more historical performance parameters and/or the repository risk score and/or the library of tools to provide as output the one or more learned performance parameters.

The ML techniques can be one or more of: Convolutional Neural Network, CNN, Feedforward Neural Network, FNN, Random Forest, Support Vector Machine, SVM, and any other suitable ML technique. For example, the ML techniques can output methods (e.g., formulas) for generating (e.g., deriving) the one or more learned performance parameters. In some examples, the ML techniques can generate the one or more learned performance parameters with, for example, a corresponding risk score. In some examples, the one or more learned performance parameters are one or more operations and/or part of one or more operations, such as one or more equations and/or one or more formulas which are not part of the monitored program. For example, the disclosed method can be advantageous when a developer adds a new formula and/or equation to an operation, and/or a new operation to the program. The disclosed method can, in some examples, provide the developer with the risk score associated with the new formula and/or new equation and/or new operation.

In some examples, the ML technique is based on a feed-forward neural network, FNN, taking as input the one or more historical performance parameters and/or the library of tools and/or the repository risk score to provide the one or more learned performance parameters. The FNN is, for example, configured to handle the one or more historical performance parameters and/or the library of tools and/or the repository risk score from input node(s), and to process the information in only one direction (e.g., forward) from the input nodes, through the hidden nodes (if any), and to the output nodes for provision of the one or more learned performance parameters. For example, the FNN does not process in cycles.

In some examples, the ML technique is based on a random forest. For example, for classification tasks, the output of the random forest is the class (which may be the one or more learned performance parameters) predicted by a majority of the decision trees in the forest, based on the input being the one or more historical performance parameters and/or the library of tools and/or the repository risk score. For example, for regression tasks, the mean or average prediction of the individual trees is returned as the one or more learned performance parameters based on the one or more historical performance parameters and/or the library of tools and/or the repository risk score.

In some examples, the ML technique is based on a Support Vector Machine, SVM. An SVM is, for example, a supervised learning model with associated learning techniques that analyse the one or more historical performance parameters and/or the library of tools and/or the repository risk score for classification and regression analysis. For example, given a set of training examples, each training example belonging to one of two categories, an SVM training technique builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier and/or using a probabilistic classification setting. For example, SVM maps training examples to points in space so as to maximize the width of the gap between the two categories. For example, SVM can take as input the one or more historical performance parameters and/or the library of tools and/or the repository risk score and provide the one or more learned performance parameters. For example, SVM can separate non-linear input parameters by transforming them into a higher-dimensional feature space, such that the classes are then linearly separable, so as to represent the one or more learned performance parameters.
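
The FNN, random forest, and SVM techniques above are standard ML models. As a minimal stand-in (deliberately not the disclosed technique itself), the history-in, learned-value-out relationship they implement can be illustrated with a simple averaging predictor over historical parameters; all names and values are hypothetical.

```python
# Minimal stand-in for the ML techniques named above: derive a "learned"
# performance parameter from historical performance parameters. A real
# implementation would use, e.g., an FNN, random forest, or SVM; this mean
# predictor only illustrates the input/output relationship.

def learn_parameter(historical_values):
    # Historical performance parameters from previous monitoring cycles.
    return sum(historical_values) / len(historical_values)

history = {"fetch_orders.response_time_ms": [110.0, 120.0, 130.0]}
learned = {name: learn_parameter(values) for name, values in history.items()}
print(learned)  # {'fetch_orders.response_time_ms': 120.0}
```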

In one or more example methods, generating S106 the risk score comprises generating S106A the risk score associated with the one or more operations, based on the one or more historical performance parameters and/or the one or more learned performance parameters. In one or more examples, the risk score is associated with the one or more operations of the program under monitoring. In some examples, the risk score is associated with the corresponding one or more operations and/or the corresponding one or more performance parameters. The risk score is generated by the first electronic device (e.g., by the plugin) based on the one or more historical performance parameters and/or the one or more learned performance parameters and/or the one or more equations and/or one or more formulas comprised in the library of tools for a corresponding operation.

In one or more example methods, the method 100 comprises performing S102 a handshake between the first and the second electronic device, e.g., for establishing a connection between the first electronic device and the second electronic device. In one or more examples, the handshake between the first and the second electronic device is performed before monitoring the program in the developer environment of the first electronic device (e.g., before S104). In one or more examples, performing the handshake between the first and the second electronic device comprises creating and/or installing the plugin (e.g., an add-on and/or an extension), with the plugin being installed in the first electronic device. For example, the plugin can obtain (e.g., fetch) performance data from the second electronic device over a network accommodating communication between the first and the second electronic device. Put differently, the first electronic device (e.g., an IDE) is capable of handshaking with the second electronic device (e.g., an APM tool) and fetching, from the second electronic device, performance data (e.g., performance metrics) to analyse how a change in the one or more operations can impact the overall performance of the program.

In one or more example methods, the first electronic device comprises a display device. In one or more example methods, providing S108 the output comprises displaying S108A, on the display device, a user interface object representative of the output. For example, the user interface object representative of the output can be one or more of: a popup, a toast (e.g., a small popup, that can automatically disappear after a timeout), a suggestion, a message, and a notification. This is for example illustrated in Fig. 2.

In one or more example methods, the user interface object indicates, to a user, a recommendation based on the risk score upon the change in the one or more operations.

For example, the recommendation comprises the risk score associated with the one or more operations, as illustrated in Fig. 2. In some examples, the risk score can be associated with a corresponding performance parameter and/or a corresponding operation. In other words, there can be multiple changes in the program. Upon multiple changes in the one or more operations, the recommendation can comprise, for example, a risk score including a plurality of risk scores (e.g., a risk score associated with a corresponding operation, such as an operation changed by the user, and a corresponding performance parameter, such as the performance parameter affected by the change). In some examples, upon multiple changes in the one or more operations, the recommendation can comprise a most relevant risk score. In one or more examples, the most relevant risk score is the risk score that has the greatest impact on the overall performance of the program (e.g., the highest risk score associated with a corresponding operation and a corresponding performance parameter).

In some examples, upon multiple changes in the one or more operations, the recommendation can comprise an average risk score, which can be generated based on the plurality of risk scores. In other words, the average risk score can, for example, be generated per operation by averaging a plurality of risk scores associated with the one or more performance parameters affected by the change in a corresponding operation.
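
The per-operation averaging and the selection of a most relevant risk score just described can be sketched as follows; the operation names and score values are hypothetical.

```python
# Hypothetical per-operation risk scores for the performance parameters
# affected by a change; one average risk score is derived per operation.
risk_scores = {
    "fetch_orders": {"response_time_ms": 0.8, "throughput_rps": 0.4},
    "save_order": {"response_time_ms": 0.2},
}

def average_risk(per_parameter_scores):
    # Average the plurality of risk scores for one operation.
    return sum(per_parameter_scores.values()) / len(per_parameter_scores)

averages = {op: average_risk(s) for op, s in risk_scores.items()}
# The "most relevant" score: the operation with the highest average risk.
most_relevant = max(averages, key=averages.get)
print(averages, most_relevant)
```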

In one or more example methods, providing S108 the output comprises transmitting S108B the output to an external device. In one or more examples, providing the recommendation as the output to an external device comprises providing the recommendation to inform a user (e.g., a developer) about one or more of: one or more operations where the user performs a change, one or more performance parameters affected by the change, and corresponding risk scores. In other words, providing the recommendation comprises, for example, informing the user about which of the one or more performance parameters are impacted, and how (e.g., by generating the risk score), by a change about to be carried out on the one or more operations, as illustrated in Fig. 2.

In one or more example methods, the performance data comprise the one or more performance parameters. In one or more examples, the performance data obtained from the second electronic device by the first electronic device includes the one or more performance parameters associated with the program under test (e.g., the monitored program). In one or more examples, a change in the one or more operations comprising the program can impact the performance of the program in test. In one or more examples, the performance of the program in test can be analysed based on the one or more performance parameters.

In one or more example methods, the performance data comprise real time performance data. For example, the first electronic device obtains the performance data from the second electronic device periodically and/or continuously and/or when necessary. In some examples, the performance data comprise “quasi” real-time performance data and/or “near” real-time performance data (e.g., the first electronic device obtains the performance data periodically and/or when needed from the second electronic device). In some examples, the performance data comprise real-time performance data (e.g., the first electronic device obtains the performance data continuously from the second electronic device).

Fig. 4 shows a block diagram of an exemplary first electronic device 300 according to the disclosure. The first electronic device 300 comprises a memory 301, a processor 302, and an interface 303. The first electronic device 300 may be configured to perform any of the methods disclosed in Fig. 3. In other words, the first electronic device 300 may be configured for monitoring a program in a development environment. The electronic device disclosed herein is the first electronic device 300. The terms “electronic device” and “first electronic device” are interchangeable in some embodiments.

The first electronic device 300 is configured to monitor (e.g., using the processor 302) a program in a developer environment of the first electronic device. The first electronic device 300 is configured to monitor by obtaining (e.g., via the interface 303 and/or using the memory 301), from a second electronic device configured for performance monitoring, performance data. The performance data is indicative of one or more performance parameters associated with the monitored program.

The first electronic device 300 is configured to provide (e.g., using the processor 302 and/or the interface 303), based on the performance data, an output.

The first electronic device 300 is optionally configured to perform any of the operations disclosed in Fig. 3 (such as any one or more of S102, S104, S104A, S104AA, S104B, S106, S106A, S108, S108A, S108B). The operations of the first electronic device 300 may be embodied in the form of executable logic operations (for example, lines of code, software programs, etc.) that are stored on a non-transitory computer readable medium (for example, the memory 301) and are executed by the processor 302.

Furthermore, the operations of the first electronic device 300 may be considered a method that the first electronic device 300 is configured to carry out. Also, while the described functions and operations may be implemented in software, such functionality may also be carried out via dedicated hardware or firmware, or some combination of hardware, firmware and/or software. The memory 301 may be one or more of: a buffer, a flash memory, a hard drive, a removable media, a volatile memory, a non-volatile memory, a random access memory (RAM), and any other suitable device. In a typical arrangement, the memory 301 may include a non-volatile memory for long term data storage and a volatile memory that functions as system memory for the processor 302. The memory 301 may exchange data with the processor 302 over a data bus. Control lines and an address bus between the memory 301 and the processor 302 may also be present (not shown in Fig. 4). The memory 301 is considered a non-transitory computer readable medium.

The memory 301 may be configured to store, in a part of the memory, the performance data indicative of the one or more performance parameters and the risk score.

Examples of methods and products (an electronic device, such as a first electronic device) according to the disclosure are set out in the following items:

Item 1. A method, performed by a first electronic device, the method comprising: monitoring (S104) a program in a developer environment of the first electronic device; wherein monitoring (S104) the program comprises obtaining (S104A), from a second electronic device configured for performance monitoring, performance data indicative of one or more performance parameters associated with the monitored program; and providing (S108), based on the performance data, an output.

Item 2. The method according to item 1, wherein monitoring (S104) the program comprises: monitoring (S104B) one or more operations of the program; wherein the performance data is indicative of one or more performance parameters associated with the one or more operations of the monitored program.

Item 3. The method according to item 2, wherein the method comprises generating (S106), based on the performance data, a risk score associated with the one or more operations, wherein the risk score is indicative of a risk of impacting a performance parameter of the one or more operations upon a change in the one or more operations.

Item 4. The method according to any of items 2-3, wherein the output comprises the risk score associated with the one or more operations.

Item 5. The method according to any of the previous items, wherein obtaining (S104A) the performance data from the second electronic device comprises generating (S104AA) the performance data by querying a repository.

Item 6. The method according to item 5, wherein the repository comprises one or more of: a library of tools for deriving one or more performance parameters, one or more historical performance parameters, one or more learned performance parameters, and a repository risk score associated with a corresponding operation.

Item 7. The method according to items 3 and 6, wherein the one or more learned performance parameters are based on the one or more historical performance parameters and/or the repository risk score.

Item 8. The method of any of items 6-7, wherein generating (S106) the risk score comprises generating (S106A) the risk score associated with the one or more operations, based on the one or more historical performance parameters and/or the one or more learned performance parameters.

Item 9. The method according to any of the previous items, the method comprising: performing (S102) a handshake between the first and the second electronic device.

Item 10. The method according to any of the previous items, wherein the first electronic device comprises a display device; wherein providing (S108) the output comprises displaying (S108A), on the display device, a user interface object representative of the output.

Item 11. The method according to items 3 and 10, wherein the user interface object indicates, to a user, a recommendation based on the risk score upon the change in the one or more operations.

Item 12. The method according to any of the previous items, wherein providing (S108) the output comprises transmitting (S108B) the output to an external device.

Item 13. The method according to any of the previous items, wherein the one or more performance parameters comprise one or more of: a response time, a throughput, and a resource utilisation parameter.

Item 14. The method according to any of the previous items, wherein the performance data comprise the one or more performance parameters.

Item 15. The method according to any of the previous items, wherein the performance data comprise real time performance data.

Item 16. An electronic device comprising a memory, a processor, and an interface, wherein the electronic device is configured to perform any of the methods according to items 1-15.

Item 17. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device cause the electronic device to perform any of the methods of items 1-15.

The use of the terms “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. does not imply any particular order but are included to identify individual elements. Moreover, the use of the terms “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. does not denote any order or importance, but rather the terms “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. are used to distinguish one element from another. Note that the words “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. are used here and elsewhere for labelling purposes only and are not intended to denote any specific spatial or temporal ordering. Furthermore, the labelling of a first element does not imply the presence of a second element and vice versa.

It may be appreciated that the Figures comprise some circuitries or operations which are illustrated with a solid line and some circuitries, components, features, or operations which are illustrated with a dashed line. The circuitries or operations which are comprised in a solid line are circuitries or operations which are comprised in the broadest example embodiment. The circuitries or operations which are comprised in a dashed line are example embodiments which may be comprised in, or a part of, or are further circuitries or operations which may be taken in addition to, the circuitries or operations of the solid line example embodiments. It should be appreciated that these operations need not be performed in the order presented. Furthermore, it should be appreciated that not all of the operations need to be performed. The exemplary operations may be performed in any order and in any combination.

It is to be noted that the word "comprising" does not necessarily exclude the presence of other elements or steps than those listed. It is to be noted that the words "a" or "an" preceding an element do not exclude the presence of a plurality of such elements.

It should further be noted that any reference signs do not limit the scope of the claims, that the examples may be implemented at least in part by means of both hardware and software, and that several "means", "units" or "devices" may be represented by the same item of hardware.

The various example methods, devices, nodes, and systems described herein are described in the general context of method steps or processes, which may be implemented in one aspect by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), etc. Generally, program circuitries may include routines, programs, objects, components, data structures, etc. that perform specified tasks or implement specific abstract data types. Computer-executable instructions, associated data structures, and program circuitries represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.

Although features have been shown and described, it will be understood that they are not intended to limit the claimed disclosure, and it will be made obvious to those skilled in the art that various changes and modifications may be made without departing from the scope of the claimed disclosure. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. The claimed disclosure is intended to cover all alternatives, modifications, and equivalents.