

Title:
SYSTEM AND METHOD FOR MULTIMEDIA-BASED PERFORMANCE MONITORING OF AN EQUIPMENT
Document Type and Number:
WIPO Patent Application WO/2019/097412
Kind Code:
A1
Abstract:
A system for multimedia-based performance monitoring of an equipment is disclosed. The system includes a multimedia-based monitoring device located in proximity to the equipment, which includes a sound detection device configured to collect a plurality of sound signals in real-time from the equipment, a visual detection device configured to capture visual data related to the equipment, and a processor configured to analyse the plurality of sound signals and the visual data based on one or more predefined analytics models and identify performance of the equipment by combining a plurality of analysed sound signals and an analysed visual data. The multimedia-based device includes an analytical subsystem configured to update the one or more predefined analytics models based on a combination of the plurality of sound signals and the visual data, and to learn from the combination of the plurality of sound signals and the visual data by adding an updated analytics model from an external source.

Inventors:
SINGH SANJAY (IN)
DHUMAL RANJEET (IN)
PANT ANIRUDDHA (IN)
DESHPANDE ANAND (IN)
Application Number:
PCT/IB2018/058941
Publication Date:
May 23, 2019
Filing Date:
November 14, 2018
Assignee:
ASQUARED IOT PVT LTD (IN)
International Classes:
G01N29/44
Foreign References:
CA2632490C (2016-01-26)
US20070069943A1 (2007-03-29)
Attorney, Agent or Firm:
AGRAWAL, Dinkar (IN)
Claims:
WE CLAIM:

1. A system (10) for multimedia-based performance monitoring of an equipment comprising: a multimedia-based monitoring device (20) located in proximity to the equipment, wherein the multimedia-based monitoring device (20) comprises: a sound detection device (60) configured to collect a plurality of sound signals in real-time from the equipment; a visual detection device (70) operatively coupled to the sound detection device, and configured to capture visual data related to the equipment; a processor (80) operatively coupled to the sound detection device (60) and the visual detection device (70), wherein the processor (80) is configured to: acquire one or more predefined analytics models associated with the equipment; analyse the plurality of sound signals and the visual data based on the one or more predefined analytics models; identify performance of the equipment by combining a plurality of analysed sound signals and an analysed visual data; an analytical subsystem (90) operatively coupled to the processor (80) and configured to: update the one or more predefined analytics models based on combination of the plurality of sound signals and the visual data; and learn from the combination of the plurality of sound signals and the visual data by adding an updated analytics model from an external source.

2. The system (10) as claimed in claim 1, wherein the sound detection device (60) comprises a microphone.

3. The system (10) as claimed in claim 1, wherein the visual detection device (70) comprises a still camera, a motion picture camera or a video camera.

4. The system (10) as claimed in claim 1, wherein the visual data comprises an image or a video.

5. The system (10) as claimed in claim 1, wherein the processor (80) is further configured to send a real time notification to an external computing device upon detecting one or more faults during the performance monitoring of the equipment.

6. The system (10) as claimed in claim 5, wherein the real time notification comprises a graphic, a vibration, a short message service (SMS) message or an alarm sound.

7. The system (10) as claimed in claim 5, wherein the external computing device comprises a mobile phone, a tablet, a laptop or a computer.

8. The system (10) as claimed in claim 1, wherein the processor (80) is configured to calculate one or more operational patterns to identify one or more operational statistics of the equipment.

9. The system (10) as claimed in claim 1, wherein the processor (80) is hosted on a cloud-based server platform.

10. The system (10) as claimed in claim 1, wherein the processor (80) is configured to identify a plurality of classes of the plurality of sound signals based on the combination of the plurality of sound signals and the visual data and the one or more predefined analytics models.

11. The system (10) as claimed in claim 1, wherein the one or more predefined analytics models comprise one or more sound classification-based models, one or more acoustic feature-based models, one or more artificial neural network-based models, one or more Deep Learning based models, or a combination thereof.

12. The system (10) as claimed in claim 1, further comprising a casing (30) configured to house the multimedia-based monitoring device.

13. The system (10) as claimed in claim 12, wherein the sound detection device (60) is located outside the casing (30).

14. The system (10) as claimed in claim 12, wherein the casing (30) comprises a memory device (40) and a plurality of peripheral devices (50).

15. A method (150) comprising: collecting (160), by a sound detection device, a plurality of sound signals in real-time from the equipment; capturing (170), by a visual detection device, visual data related to the equipment; acquiring (180), by a processor, one or more predefined analytics models associated with the equipment; analysing (190), by the processor, the plurality of sound signals and the visual data based on the one or more predefined analytics models; identifying (200), by the processor, performance of the equipment by combining a plurality of analysed sound signals and an analysed visual data; updating (210), by an analytical subsystem, the one or more predefined analytics models based on combination of the plurality of sound signals and the visual data; and learning (220), by the analytical subsystem, from the combination of the plurality of sound signals and the visual data by adding an updated analytics model from an external source.

16. The method (150) as claimed in claim 15, wherein identifying, by the processor, performance of the equipment by combining the plurality of analysed sound signals and the analysed visual data comprises identifying one or more faults of the equipment.

17. The method (150) as claimed in claim 16, wherein identifying the one or more faults of the equipment comprises identifying one or more classes of the plurality of sound signals based on the combination of the plurality of sound signals and the visual data and the one or more predefined analytics models.

18. The method (150) as claimed in claim 15, wherein updating, by the analytical subsystem, the one or more analytics models based on the plurality of sound signals comprises updating the one or more analytics models upon calculating one or more operational patterns to identify one or more operational statistics of the equipment.

19. The method (150) as claimed in claim 15, further comprising sending a real time notification to an external computing device upon detecting one or more faults during the performance monitoring of the equipment.

Description:
SYSTEM AND METHOD FOR MULTIMEDIA-BASED PERFORMANCE

MONITORING OF AN EQUIPMENT

This International Application claims priority from a provisional patent application filed in India having Patent Application No. 201721040616, filed on November 14, 2017 and titled "SYSTEM AND METHOD FOR DETECTING EQUIPMENT OPERATION AND PERFORMANCE USING SOUND, IMAGE AND VIDEO".

BACKGROUND

Embodiments of the present disclosure relate to analytics and, more particularly, to a system and a method for multimedia-based monitoring of the performance of an equipment. Conventionally, instruments and sensors mounted on equipment in manufacturing processes or assembly lines have operated over wired industrial buses connected to a human-machine interface, sometimes referred to herein as an "HMI". The majority of the collected sensor data is required for effective control of the equipment or the manufacturing process. In many areas of manufacturing or engineering, trouble-free operation of a plant depends on the proper functioning of the equipment used in the plant. To avoid irregular interruptions or damage to the equipment, faults should be detected at as early a stage as possible, or before a failure of the equipment causes a shutdown. In some instances, the failure, or the onset of failure, of any equipment may be accompanied by characteristic sounds. The cause of the undesirable sound needs to be identified for the equipment to continue its serviceable life. Various fault detection techniques, more particularly fault detection techniques based on acoustic monitoring, are available to detect a fault at an early stage or before failure.

Traditional systems utilize a variety of sensors which measure a plurality of parameters such as currents, voltages, temperatures or pressures. Such sensors are fixed to a device to receive the plurality of parameters. However, such systems require extra effort and are costly.

Furthermore, some systems utilize acoustic sensors which use sound inputs. However, such acoustic sensors do not have the capability of analysing complex sounds. Further, such sensors are designed to identify very specific sound signatures, and their functionality is unchangeable over time. Furthermore, the available acoustic sensors only apply threshold checking and give an alarm or indication whenever the threshold is crossed. However, such sensors do not provide an in-depth analysis of sound and sometimes lead to ambiguous results.

Furthermore, some systems use video cameras for monitoring equipment and devices. However, such videos are primarily used for surveillance and monitoring using manual methods, and their output is not correlated with sound signals.

Hence, there is a need for an improved system and method for multimedia-based performance monitoring of an equipment to address the aforementioned issue(s).

BRIEF DESCRIPTION

In accordance with an embodiment of the present disclosure, a system for multimedia-based performance monitoring of an equipment is provided. The system includes a multimedia-based monitoring device located in proximity to the equipment. The multimedia-based monitoring device includes a sound detection device configured to collect a plurality of sound signals in real-time from the equipment. The multimedia-based monitoring device also includes a visual detection device operatively coupled to the sound detection device. The visual detection device is configured to capture visual data related to the equipment. The multimedia-based monitoring device further includes a processor operatively coupled to the sound detection device and the visual detection device. The processor is configured to acquire one or more predefined analytics models associated with the equipment. The processor is also configured to analyse the plurality of sound signals and the visual data based on the one or more predefined analytics models. The processor is further configured to identify performance of the equipment by combining a plurality of analysed sound signals and an analysed visual data. The multimedia-based monitoring device further includes an analytical subsystem operatively coupled to the processor. The analytical subsystem is configured to update the one or more predefined analytics models based on combination of the plurality of sound signals and the visual data. The analytical subsystem is also configured to learn from the combination of the plurality of sound signals and the visual data by adding an updated analytics model from an external source.

In accordance with another embodiment of the present disclosure, a method for monitoring the performance of an equipment is provided. The method includes collecting, by a sound detection device, a plurality of sound signals in real-time from the equipment. The method also includes capturing, by a visual detection device, visual data related to the equipment. The method further includes acquiring, by a processor, one or more predefined analytics models associated with the equipment. The method further includes analysing, by the processor, the plurality of sound signals and the visual data based on the one or more predefined analytics models. The method further includes identifying, by the processor, performance of the equipment by combining a plurality of analysed sound signals and an analysed visual data. The method further includes updating, by an analytical subsystem, the one or more predefined analytics models based on combination of the plurality of sound signals and the visual data. The method further includes learning, by the analytical subsystem, from the combination of the plurality of sound signals and the visual data by adding an updated analytics model from an external source.

To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.

BRIEF DESCRIPTION OF THE DRAWINGS

The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:

FIG. 1 illustrates a block diagram of a system for multimedia-based performance monitoring of an equipment in accordance with an embodiment of the present disclosure;

FIG. 2 illustrates a block diagram of an exemplary system for multimedia-based performance monitoring of an equipment of FIG. 1 in accordance with an embodiment of the present disclosure; and

FIG. 3 illustrates a flow chart representing the steps involved in a method for monitoring the performance of an equipment of FIG. 1 in accordance with an embodiment of the present disclosure.

Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.

DETAILED DESCRIPTION

For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures, and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art, are to be construed as being within the scope of the present disclosure.

The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.

In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms "a", "an", and "the" include plural references unless the context clearly dictates otherwise.

Embodiments of the present disclosure relate to a system for multimedia-based performance monitoring of an equipment. The system includes a multimedia-based monitoring device located in proximity to the equipment. The multimedia-based monitoring device includes a sound detection device configured to collect a plurality of sound signals in real-time from the equipment. The multimedia-based monitoring device also includes a visual detection device operatively coupled to the sound detection device. The visual detection device is configured to capture visual data related to the equipment. The multimedia-based monitoring device further includes a processor operatively coupled to the sound detection device and the visual detection device. The processor is configured to acquire one or more predefined analytics models associated with the equipment. The processor is also configured to analyse the plurality of sound signals and the visual data based on the one or more predefined analytics models. The processor is further configured to identify performance of the equipment by combining a plurality of analysed sound signals and an analysed visual data. The multimedia-based monitoring device further includes an analytical subsystem operatively coupled to the processor. The analytical subsystem is configured to update the one or more predefined analytics models based on combination of the plurality of sound signals and the visual data. The analytical subsystem is also configured to learn from the combination of the plurality of sound signals and the visual data by adding an updated analytics model from an external source.

FIG. 1 is a block diagram representation of a system (10) for multimedia-based performance monitoring of an equipment in accordance with an embodiment of the present disclosure. The system (10) includes a multimedia-based monitoring device (20) located in proximity to the equipment. The device (20) is configured to perform edge computing by receiving a plurality of sound signals and visual data in real time from the equipment. As used herein, the term 'edge computing' refers to a capability of the device (20) to perform local computing within the device (20) without requiring connectivity to the servers. As used herein, the term 'sound' includes both audible and inaudible sounds. In other words, 'sounds' includes sounds that are audible to humans, sounds that are below the human audible range (subsonic) and sounds that are above the human audible range (ultrasonic). In one embodiment, the system (10) may include a casing (30) which is configured to house the multimedia-based monitoring device (20). In such embodiment, the casing (30) may include a memory device (40) and a plurality of peripheral devices (50). In one embodiment, the plurality of peripheral devices (50) includes a plurality of input/output devices.
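The paired sound-and-visual acquisition described above can be sketched in code. This is an illustrative-only sketch, not part of the disclosure: the SoundDetector and VisualDetector classes are hypothetical stand-ins for real microphone and camera drivers, and the placeholder data is invented.

```python
# Hypothetical sketch of the device (20) acquiring paired sound/visual data.
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    sound: List[float]        # raw audio samples for one window
    frame: List[List[int]]    # one grayscale image frame (rows of pixels)

class SoundDetector:
    """Stand-in for a microphone driver; returns one window of samples."""
    def collect(self) -> List[float]:
        return [0.0, 0.1, -0.1, 0.05]  # placeholder samples

class VisualDetector:
    """Stand-in for a camera driver; returns one small frame."""
    def capture(self) -> List[List[int]]:
        return [[0, 1], [1, 0]]  # placeholder 2x2 frame

def acquire(n_windows: int) -> List[Observation]:
    """Collect n paired sound/visual observations for local (edge) analysis."""
    mic, cam = SoundDetector(), VisualDetector()
    return [Observation(mic.collect(), cam.capture()) for _ in range(n_windows)]

observations = acquire(3)
```

Pairing each sound window with a concurrent frame is what later allows the two modalities to be analysed jointly rather than in isolation.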

The device (20) includes a sound detection device (60) which is configured to collect the plurality of sound signals in real-time from the equipment. In some embodiments, the sound detection device (60) may include a microphone. In one embodiment, the sound detection device (60) may be located outside the casing (30). In another embodiment, the sound detection device (60) may be located inside the casing (30). The device (20) also includes a visual detection device (70) which is operatively coupled to the sound detection device (60). The visual detection device (70) is configured to capture the visual data related to the equipment. In one embodiment, the visual data may include an image or a video. In a specific embodiment, the visual detection device (70) may include a still camera, a motion picture camera or a video camera.

Furthermore, the device (20) further includes a processor (80) operatively coupled to the sound detection device (60) and the visual detection device (70). In one embodiment, the processor (80) may be hosted on a cloud-based server platform. The processor (80) is configured to receive the plurality of sound signals and the visual data and store the plurality of sound signals and the visual data in the memory device (40). The processor (80) is also configured to acquire one or more predefined analytics models associated with the equipment. In such embodiment, the one or more predefined analytics models may be stored in the memory device (40). In some embodiments, the one or more predefined analytics models may include one or more sound classification-based models, one or more acoustic feature-based models, one or more artificial neural network-based models, or one or more Deep Learning based models, or a combination thereof.

Moreover, the processor (80) is configured to analyse the plurality of sound signals and the visual data based on the one or more predefined analytics models. The processor (80) further combines the plurality of sound signals and the visual data to generate combined intelligence from the plurality of sound signals and the visual data using the one or more predefined analytics models. The processor (80) is further configured to identify performance of the equipment from the combined intelligence of the plurality of analysed sound signals and the analysed visual data.
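The "combined intelligence" step can be illustrated with a toy fusion of two fault scores. Everything in this sketch is invented for illustration: the RMS acoustic score, the bright-pixel visual score, and the weights and threshold are hypothetical stand-ins for the patented analytics models, not the actual models.

```python
# Toy fusion of an acoustic score and a visual score into one verdict.
def sound_fault_score(samples):
    """Toy acoustic model: RMS energy as a proxy for abnormal noise."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def visual_fault_score(frame):
    """Toy visual model: fraction of 'hot' pixels above a brightness cutoff."""
    pixels = [p for row in frame for p in row]
    return sum(p > 200 for p in pixels) / len(pixels)

def identify_performance(samples, frame, w_sound=0.6, w_visual=0.4, threshold=0.5):
    """Fuse both scores; report 'faulty' when the weighted sum crosses the threshold."""
    score = w_sound * sound_fault_score(samples) + w_visual * visual_fault_score(frame)
    return ("faulty" if score >= threshold else "normal", round(score, 3))

status, score = identify_performance([0.9, -0.8, 0.85, -0.95], [[250, 10], [240, 30]])
```

The point of the fusion is that neither modality alone decides: a loud machine with a normal visual appearance scores lower than one that is both loud and visibly hot.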

In addition, the device (20) further includes an analytical subsystem (90) which is operatively coupled to the processor (80). The analytical subsystem (90) is configured to update the one or more predefined analytics models based on the combination of the plurality of sound signals and the visual data. In one embodiment, the analytical subsystem (90) may receive the plurality of sound signals and learn from the plurality of sound signals continuously for detecting one or more faults in the future. The analytical subsystem (90) is also configured to learn from the combination of the plurality of sound signals and the visual data by adding an updated analytics model from an external source. In another embodiment, the analytical subsystem (90) learns from the plurality of sound signals and the visual data by adding an updated model from an external source, where the updated model from the external source is obtained based on a plurality of experiments performed on the equipment in a plurality of situations. The one or more predefined sound analytics models are updated by identifying a plurality of classes of the plurality of sounds based on the analysed result. In a specific embodiment, the one or more predefined analytics models may be trained to identify and classify the plurality of sound signals in real-time, to infer operational information about the equipment, as well as information about the quality of the operation and the health of the equipment. In one embodiment, the processor may also be configured to calculate one or more operational patterns to identify one or more operational statistics of the equipment. In one embodiment, the processor (80) may be configured to send a real time notification to an external computing device upon detecting one or more faults during the performance monitoring of the equipment. In such embodiment, the real time notification may include a graphic, a vibration, a short message service (SMS) message or an alarm sound. In some embodiments, the external computing device may include a mobile phone, a tablet, a laptop or a computer.
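The operational-pattern calculation mentioned above might, for instance, estimate equipment utilization from sound activity. A minimal sketch, assuming an invented "equipment running" energy threshold (the disclosure does not specify how operational statistics are computed):

```python
# Illustrative-only: estimate utilization from per-window sound energy.
def window_energy(samples):
    """Mean squared amplitude of one observation window."""
    return sum(s * s for s in samples) / len(samples)

def utilization(windows, running_threshold=0.1):
    """Fraction of observation windows in which the equipment was audibly running."""
    running = sum(window_energy(w) > running_threshold for w in windows)
    return running / len(windows)

windows = [
    [0.5, -0.6, 0.4],    # loud: running
    [0.01, 0.0, -0.02],  # quiet: idle
    [0.7, 0.6, -0.5],    # loud: running
    [0.0, 0.01, 0.0],    # quiet: idle
]
stat = utilization(windows)  # the machine ran in half of the windows
```

A statistic like this, accumulated over shifts or days, is one plausible form of the "operational statistics" the processor could report.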

FIG. 2 is a block diagram representation of an exemplary system (10) for multimedia-based performance monitoring of an equipment (100) in accordance with an embodiment of the present disclosure. The system is configured to monitor performance of the equipment (100) using a multimedia-based monitoring device (20). In one embodiment, the equipment (100) may include one or more machines of a manufacturing plant, one or more air or gas pipelines, or a vehicle. The system (10) includes a sound detection device (60), a visual detection device (70), a processor (80), an analytical subsystem (90), a memory device (40) and a plurality of peripheral devices (50) to monitor the performance of the equipment (100). For example, the multimedia-based device (20) is located in proximity to a manufacturing machine of a manufacturing plant. The sound detection device (60) of the multimedia-based device collects a plurality of sound signals in real time from the manufacturing machine.

In some embodiments, the sound detection device (60) may include a microphone. In a specific embodiment, the microphone may include dynamic, condenser, ribbon, crystal, or other types of microphones. The microphone may have various directional properties, such that the microphone may receive sound inputs clearly. For example, the microphone may have omnidirectional, bidirectional, or unidirectional characteristics, where the directionality characteristics indicate the direction in which the microphone may detect sound. In one embodiment, the system (10) includes a casing (30) which is configured to house the multimedia-based monitoring device (20). In a specific embodiment, the sound detection device (60) may be a portable sound detection device located outside the casing (30). In another embodiment, the sound detection device (60) may be located inside the casing (30).

The device (20) further includes a visual detection device (70) which captures visual data related to the manufacturing machine. In one embodiment, the visual detection device (70) may include a still camera, a motion picture camera or a video camera. In another embodiment, the visual detection device (70) may be a digital camera, a digital video camera, a high definition camera, an infrared camera, a night-vision camera, a spectral camera, or a radar imaging device. In a specific embodiment, the visual detection device (70) may include an image sensor device to convert optical images into electronic signals. The visual detection device (70) may be configured to move in various directions, for example, to pan left and right, tilt up and down, or zoom in and out on a particular target. In some embodiments, the visual data may include an image or a video. For example, the still camera captures a plurality of images of the manufacturing machine from one or more angles.

In an exemplary scenario, the visual detection device (70) captures the plurality of images based on sound detected by the sound detection device. Upon determining the location of detected sound, the visual detection device (70) captures an image of the source location of the detected sound. In one embodiment, the visual detection device (70) may zoom in on the source location of the detected sound when the source of the detected sound is determined to be far away.
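The sound-triggered capture behaviour described in this scenario can be sketched as a small planning function. The pan/zoom interface and the 5-metre "far away" cutoff are hypothetical assumptions, since the disclosure names the behaviour but not an API or a distance.

```python
# Hypothetical sketch: point the camera at a localized sound source and
# zoom in when the source is far away. Values are invented for illustration.
FAR_DISTANCE_M = 5.0  # assumed cutoff beyond which the camera zooms in

def plan_capture(source_angle_deg, source_distance_m):
    """Return the pan angle and zoom level used to photograph a sound source."""
    zoom = 2.0 if source_distance_m > FAR_DISTANCE_M else 1.0
    return {"pan_deg": source_angle_deg, "zoom": zoom}

near = plan_capture(30.0, 2.0)    # nearby source: no zoom needed
far = plan_capture(-45.0, 12.0)   # distant source: zoom in
```

A real implementation would derive the angle and distance from microphone-array localization before issuing the camera command.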

The device (20) further includes a processor (80) which is operatively coupled to the sound detection device (60) and the visual detection device (70). The processor (80) is configured to receive the plurality of sound signals from the sound detection device (60) and the visual data from the visual detection device (70). The processor (80) further acquires one or more predefined analytics models associated with the manufacturing machine. In one embodiment, the one or more predefined analytics models may include a plurality of machine learning models. In some embodiments, the one or more predefined analytics models may include one or more sound classification-based models, one or more acoustic feature-based models, one or more artificial neural network-based models, or one or more Deep Learning based models, or a combination thereof. The processor (80) further analyses the plurality of sound signals and the visual data based on the one or more predefined analytics models. The processor (80) further identifies performance of the equipment from the combined intelligence of the plurality of analysed sound signals and the analysed visual data. A plurality of diagnostic algorithms identify and classify the plurality of sound signals using the one or more predefined analytics models to draw conclusions regarding the existence of one or more current or future problem conditions, and also draw statistics about the operating condition of the machine. For example, upon detecting a sound that is classified as an explosion, the processor may trigger a fire alarm.
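The explosion-triggers-fire-alarm example can be illustrated as a class-to-response mapping. The lookup below is a stand-in for the trained analytics models, and the class labels and responses are invented for illustration.

```python
# Illustrative-only mapping from a predicted sound class to a response.
RESPONSES = {
    "explosion": "trigger_fire_alarm",
    "bearing_squeal": "schedule_maintenance",
    "normal_hum": "no_action",
}

def classify(sound_label):
    """Stand-in for the analytics models: assumes the class is already predicted."""
    return sound_label

def respond(sound_label):
    """Look up the response; unrecognized classes are flagged for human review."""
    return RESPONSES.get(classify(sound_label), "flag_for_review")

action = respond("explosion")
```

Keeping the class-to-response table separate from the classifier means new fault classes learned later can be wired to responses without retraining.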

Furthermore, the processor (80) combines the plurality of sound signals and the visual data to generate combined intelligence from the plurality of sound signals and the visual data. In some embodiments, the processor (80) may also be configured to calculate a plurality of operational patterns of the machine and calculate a plurality of operational statistics such as the efficiency and performance of the machine. In an exemplary embodiment, the processor (80) detects leakage of air or gas from a compressed air or gas pipeline or machinery using the leakage sound as the input. In another exemplary embodiment, the processor (80) is also configured to identify the plurality of sounds from a moving or stationary vehicle and generate intelligence about the health of the vehicle and the subsystems therein, as well as about predictive maintenance of the vehicle.

In addition, the device (20) further includes an analytical subsystem (90) which is operatively coupled to the processor (80). The analytical subsystem (90) is configured to update the one or more predefined analytics models based on combination of the plurality of sound signals and the visual data. In one embodiment, the analytical subsystem (90) may receive the plurality of sound signals and learns from the plurality of sound signals continuously for detecting one or more faults in the future. The analytical subsystem (90) further updates the one or more learning models with one or more parameters such as health of the machine, predictive maintenance of the machine and tracking the operations of the machine.

The analytical subsystem (90) is also configured to learn from the combination of the plurality of sound signals and the visual data by adding an updated analytics model from an external source (110). In one embodiment, the external source (110) may include other systems, nodes, or databases. In such embodiment, the device (20) may connect to other systems, nodes, or databases to download and learn from the audio detection and response histories of other systems or nodes. The machine learning models are trained based on the combined intelligence of a plurality of extensive sound signal data and visual data, such as the plurality of images or the plurality of videos collected previously. The machine learning models are trained on the plurality of sound signals about the one or more faults in the manufacturing machine, as well as an operating condition of the manufacturing machine. To this end, a series of numerous experiments or field tests is performed on each device (20) to record the sounds of various fault and predictive fault conditions. The automatic troubleshooting using these one or more predefined sound analytics models may be refined based upon the experience and knowledge of expert technicians. Such one or more predefined sound analytics models may be embedded and stored in the memory device (40).
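Merging an updated model downloaded from another node might look like the following sketch, in which a "model" is reduced to a dict of per-class scores purely for illustration; real models and their merge policy are not specified by the disclosure.

```python
# Illustrative-only merge of a local model with one from an external source.
def merge_models(local_model, external_model):
    """Prefer external entries (newer field experience) for overlapping classes,
    keeping local-only classes intact; returns a new dict."""
    merged = dict(local_model)
    merged.update(external_model)
    return merged

local = {"bearing_squeal": 0.7, "normal_hum": 0.1}
external = {"bearing_squeal": 0.9, "gas_leak_hiss": 0.8}  # from another node
updated = merge_models(local, external)
```

Returning a new dict rather than mutating the local model keeps the previous model available for rollback if the downloaded update proves worse.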

In one embodiment, the processor (80) may be configured to send a real time notification to an external computing device upon detecting one or more faults during the performance monitoring of the manufacturing machine. In such embodiment, the real time notification may include a graphic, a vibration, a short message service (SMS) message or an alarm sound. In some embodiments, the external computing device may include a mobile phone, a tablet, a laptop or a computer.
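The real-time notification step can be sketched as follows. The SMS transport is a placeholder function, since the disclosure names the notification types (graphic, vibration, SMS, alarm sound) but no API.

```python
# Illustrative-only fault notification; send_sms is a placeholder transport.
def send_sms(device_id, message):
    """Placeholder for a real SMS gateway call; returns the dispatched text."""
    return f"SMS to {device_id}: {message}"

def notify_on_fault(faults, device_id="operator-phone"):
    """Send one SMS per detected fault; return the dispatched messages."""
    return [send_sms(device_id, f"Fault detected: {f}") for f in faults]

sent = notify_on_fault(["bearing wear", "belt slip"])
```

A production system would likely also de-duplicate repeated alerts for the same fault to avoid flooding the operator.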

FIG. 3 is a flow chart representing the steps involved in a method for monitoring the performance of an equipment. The method (150) includes collecting, by a sound detection device, a plurality of sound signals in real-time from the equipment in step 160. In one embodiment, the plurality of sound signals may be collected in real-time from the machine by a microphone. The method (150) also includes capturing, by a visual detection device, visual data related to the equipment in step 170. In some embodiments, capturing, by the visual detection device, the visual data related to the equipment may include capturing an image or a video of the equipment.
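
By way of a non-limiting illustration, the acquisition in steps 160 and 170 may be sketched as follows, with stub classes standing in for the hardware interfaces of the sound detection device (60) and the visual detection device (70).

```python
# Illustrative sketch of steps 160-170: collect a block of audio samples
# and grab a visual frame. The stub classes are hypothetical stand-ins
# for real device drivers.

class Microphone:                       # stands in for sound detection device (60)
    def read(self, n):
        return [0.0] * n                # n audio samples

class Camera:                           # stands in for visual detection device (70)
    def grab(self):
        return [[0] * 4 for _ in range(3)]   # a tiny 3x4 grayscale frame

def acquire(mic, cam, n_samples=1024):
    """Step 160: collect sound signals; step 170: capture visual data."""
    return mic.read(n_samples), cam.grab()

sound, frame = acquire(Microphone(), Camera())
print(len(sound), len(frame), len(frame[0]))  # 1024 3 4
```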

The method (150) further includes acquiring, by a processor, one or more predefined analytics models associated with the equipment in step 180. In a specific embodiment, acquiring, by the processor, the one or more predefined analytics models associated with the equipment may include acquiring one or more sound classification-based models, one or more acoustic feature-based models, one or more artificial neural network-based models, one or more deep learning-based models, or a combination thereof.
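
By way of a non-limiting illustration, step 180 may be sketched as a lookup of the models registered for a given equipment type. The registry contents, equipment types and model names here are illustrative assumptions.

```python
# Illustrative sketch of step 180: the processor acquires the predefined
# analytics models registered for the equipment. All names are hypothetical.

MODEL_REGISTRY = {
    "cnc_lathe": ["sound_classifier_v2", "acoustic_feature_model_v1"],
    "press":     ["sound_classifier_v2", "deep_learning_model_v3"],
}

def acquire_models(equipment_type):
    """Return the predefined analytics models for the given equipment."""
    try:
        return MODEL_REGISTRY[equipment_type]
    except KeyError:
        raise ValueError(f"no predefined models for {equipment_type!r}")

print(acquire_models("press"))  # ['sound_classifier_v2', 'deep_learning_model_v3']
```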

The method (150) further includes analysing, by the processor, the plurality of sound signals and the visual data based on the one or more predefined analytics models in step 190. The method (150) further includes identifying, by the processor, performance of the equipment by combining a plurality of analysed sound signals and an analysed visual data in step 200. In one embodiment, identifying, by the processor, the performance of the equipment by combining the plurality of analysed sound signals and the analysed visual data may include identifying one or more faults of the equipment. In some embodiments, identifying the one or more faults of the equipment may include identifying one or more classes of the plurality of sound signals based on the combination of the plurality of sound signals and the visual data and the one or more predefined analytics models.
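
By way of a non-limiting illustration, the combination in steps 190 and 200 may be sketched as a weighted fusion of per-class scores produced independently for each modality. The fusion rule and the weighting are illustrative assumptions.

```python
# Illustrative sketch of steps 190-200: analyse each modality separately,
# then combine the per-class scores to identify the equipment's condition.
# The weighted-sum fusion rule is a hypothetical choice.

def identify_performance(sound_scores, visual_scores, w_sound=0.6):
    """Fuse per-class scores from both modalities; return the best class."""
    classes = set(sound_scores) | set(visual_scores)
    fused = {c: w_sound * sound_scores.get(c, 0.0)
                + (1 - w_sound) * visual_scores.get(c, 0.0)
             for c in classes}
    return max(fused, key=fused.get)

# Scores as might be produced by the sound and visual analytics models:
sound_scores = {"normal": 0.2, "bearing_fault": 0.8}
visual_scores = {"normal": 0.7, "bearing_fault": 0.3}
print(identify_performance(sound_scores, visual_scores))  # bearing_fault
```

Here a strong acoustic indication outweighs an ambiguous visual one, which is one way the combined identification can detect a fault that a single modality might miss.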

The method (150) further includes updating, by an analytical subsystem, the one or more predefined analytics models based on the combination of the plurality of sound signals and the visual data in step 210. In one embodiment, updating, by the analytical subsystem, the one or more analytics models based on the plurality of sound signals may include updating the one or more analytics models upon calculating one or more operational patterns to identify one or more operational statistics of the equipment. The method (150) further includes learning, by the analytical subsystem, from the combination of the plurality of sound signals and the visual data by adding an updated analytics model from an external source in step 220.
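
By way of a non-limiting illustration, the adoption of an updated analytics model from an external source in step 220 may be sketched as a version comparison between local and downloaded models. The version fields and merge policy are illustrative assumptions.

```python
# Illustrative sketch of step 220: adopt a newer model version downloaded
# from an external source (110), keeping local models that are up to date.
# Version numbers and the newest-wins policy are hypothetical.

def merge_models(local, external):
    """Keep whichever model version is newer for each model name."""
    merged = dict(local)
    for name, version in external.items():
        if version > merged.get(name, -1):
            merged[name] = version
    return merged

local_models = {"sound_classifier": 2, "visual_model": 1}
external_models = {"sound_classifier": 3}        # from external source (110)
print(merge_models(local_models, external_models))
# {'sound_classifier': 3, 'visual_model': 1}
```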

In one embodiment, the method (150) may include sending a real time notification to an external computing device upon detecting one or more faults during the performance monitoring of the equipment.

Various embodiments of the system for multimedia-based performance monitoring of an equipment described above enable dynamic, continuously learning, always evolving methods which are better than the fixed-function methods used in existing devices. Such an additional feature allows the system to improve continuously over time. The system includes non-intrusive and non-touch methods for performance monitoring. The system is trained to identify different sound types as well as different images by adjusting the parameters within one or more subsystems, and thus the functionality can be adjusted and changed over time. The device is a standalone, independent, and complete product which listens to the plurality of sounds and generates intelligence from them. The device has the computational resources necessary to carry out the computation; in carrying out these computations, there is no dependency on any external computation resources, unlike conventional devices.

It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof.

While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.

The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and is not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.