
Title:
TARGETED ENERGY USAGE DEVICE PRESENCE DETECTION USING MULTIPLE TRAINED MACHINE LEARNING MODELS
Document Type and Number:
WIPO Patent Application WO/2024/035553
Kind Code:
A1
Abstract:
Embodiments generate machine learning predictions to discover targeted device energy usage. Embodiments train a first machine learning model to predict a presence of a first device, where the training data used to train the first machine learning model is deficient for a second device. Embodiments train a second machine learning model to predict a presence of a second device. Embodiments receive input data of household energy use and weather data and, based on the input data, use the trained first machine learning model to predict the presence of the first device per household. Based on the input data, embodiments use the trained second machine learning model to predict the presence of the second device per household. Embodiments then subtract the households predicted to have the second device from the households predicted to have the first device to generate a prediction of households that have the first device.

Inventors:
LIN JESSICA (US)
PIRUS ANASTASIIA (UA)
KRAVCHENKO SERHII (UA)
ZAGORUIKO ANDRII (CZ)
IELKIN DMYTRO (UA)
BIELOV OLEKSANDR (UA)
ANNENKOV OLEKSANDR (UA)
DEREVIANKO SERHII (UA)
AKOPOV SAAK (UA)
Application Number:
PCT/US2023/028804
Publication Date:
February 15, 2024
Filing Date:
July 27, 2023
Assignee:
ORACLE INT CORP (US)
International Classes:
G06Q10/04
Other References:
CULLER DAVID ET AL: "Neural NILM : Deep Neural Networks Applied to Energy Disaggregation", PROCEEDINGS OF THE 2ND ACM INTERNATIONAL CONFERENCE ON EMBEDDED SYSTEMS FOR ENERGY-EFFICIENT BUILT ENVIRONMENTS, BUILDSYS '15, 4 November 2015 (2015-11-04), New York, New York, USA, pages 55 - 64, XP093098018, ISBN: 978-1-4503-3981-0, DOI: 10.1145/2821650.2821672
CIANCETTA FABRIZIO ET AL: "A New Convolutional Neural Network-Based System for NILM Applications", IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, IEEE, USA, vol. 70, 2 November 2020 (2020-11-02), pages 1 - 12, XP011827311, ISSN: 0018-9456, [retrieved on 20201218], DOI: 10.1109/TIM.2020.3035193
CIMEN HALIL ET AL: "Smart-Building Applications: Deep Learning-Based, Real-Time Load Monitoring", IEEE INDUSTRIAL ELECTRONICS MAGAZINE, IEEE, US, vol. 15, no. 2, 29 December 2020 (2020-12-29), pages 4 - 15, XP011862790, ISSN: 1932-4529, [retrieved on 20210624], DOI: 10.1109/MIE.2020.3023075
Attorney, Agent or Firm:
GOLDSMITH, Barry, S. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method for generating machine learning predictions to discover targeted device energy usage, the method comprising: training a first machine learning model to predict a presence of a first device, wherein a training data used to train the first machine learning model is deficient for a second device; training a second machine learning model to predict a presence of a second device; receiving input data of household energy use and weather data; based on the input data, using the trained first machine learning model, predicting the presence of the first device per household; based on the input data, using the trained second machine learning model, predicting the presence of the second device per household; and subtracting the households predicted to have the second device from the households predicted to have the first device to generate a prediction of households that have the first device.

2. The method of claim 1, wherein the first device comprises an Electric Vehicle Level 1 Charger and the second device comprises a solar power generator.

3. The method of claim 1, wherein the predicting the presence of the first device per household and predicting the presence of the second device per household comprises outputting a Yes, No or Maybe for each household.

4. The method of claim 3, wherein the Yes, No or Maybe are based on predefined thresholds.

5. The method of claim 2, wherein the training data used to train the first machine learning model is deficient for the second device because the training data is generated from a plurality of households that do not have solar power generators.

6. The method of claim 3, wherein the predicting the presence of the first device per household comprises a confidence data value that indicates a presence of target device energy usage within instances of source location energy usage, and an overall prediction comprises an overall confidence data value that indicates the presence of target device energy usage within the given source location’s energy usage over a period of time.

7. The method of claim 6, wherein generating the overall prediction comprises combining the confidence data values corresponding to multiple discovery predictions to generate the overall confidence data value.

8. The method of claim 1, wherein the trained first machine learning model comprises one or more recurrent neural network layers and one or more convolutional neural network layers.

9. A computer readable medium having instructions stored thereon that, when executed by one or more processors, cause the processors to generate machine learning predictions to discover targeted device energy usage, the generating comprising: training a first machine learning model to predict a presence of a first device, wherein a training data used to train the first machine learning model is deficient for a second device; training a second machine learning model to predict a presence of a second device; receiving input data of household energy use and weather data; based on the input data, using the trained first machine learning model, predicting the presence of the first device per household; based on the input data, using the trained second machine learning model, predicting the presence of the second device per household; and subtracting the households predicted to have the second device from the households predicted to have the first device to generate a prediction of households that have the first device.

10. The computer readable medium of claim 9, wherein the first device comprises an Electric Vehicle Level 1 Charger and the second device comprises a solar power generator.

11. The computer readable medium of claim 9, wherein the predicting the presence of the first device per household and predicting the presence of the second device per household comprises outputting a Yes, No or Maybe for each household.

12. The computer readable medium of claim 11, wherein the Yes, No or Maybe are based on predefined thresholds.

13. The computer readable medium of claim 10, wherein the training data used to train the first machine learning model is deficient for the second device because the training data is generated from a plurality of households that do not have solar power generators.

14. The computer readable medium of claim 11, wherein the predicting the presence of the first device per household comprises a confidence data value that indicates a presence of target device energy usage within instances of source location energy usage, and an overall prediction comprises an overall confidence data value that indicates the presence of target device energy usage within the given source location’s energy usage over a period of time.

15. The computer readable medium of claim 14, wherein generating the overall prediction comprises combining the confidence data values corresponding to multiple discovery predictions to generate the overall confidence data value.

16. The computer readable medium of claim 9, wherein the trained first machine learning model comprises one or more recurrent neural network layers and one or more convolutional neural network layers.

17. A system for discovering targeted device energy usage, the system comprising: a first trained machine learning model to predict a presence of a first device, wherein a training data used to train the first machine learning model is deficient for a second device; a second trained machine learning model to predict a presence of a second device; one or more processors configured to: receive input data of household energy use and weather data; based on the input data, use the trained first machine learning model to predict the presence of the first device per household; based on the input data, use the trained second machine learning model to predict the presence of the second device per household; and subtract the households predicted to have the second device from the households predicted to have the first device to generate a prediction of households that have the first device.

18. The system of claim 17, wherein the first device comprises an Electric Vehicle Level 1 Charger and the second device comprises a solar power generator.

19. The system of claim 17, wherein the predicting the presence of the first device per household and predicting the presence of the second device per household comprises outputting a Yes, No or Maybe for each household.

20. The system of claim 19, wherein the Yes, No or Maybe are based on predefined thresholds.

Description:
TARGETED ENERGY USAGE DEVICE PRESENCE DETECTION USING MULTIPLE TRAINED MACHINE LEARNING MODELS

[0001] One embodiment is directed to utility metering devices, and more particularly to machine learning predictions that discover the presence of a target device energy usage within household energy usage derived in part from information received from utility metering devices.

BACKGROUND INFORMATION

[0002] Non-intrusive load monitoring (“NILM”) and disaggregation of various energy usage devices at a given source location has proven challenging. For example, given a household, disaggregating device and/or electric vehicle energy usage from within the household’s general monitored energy usage has been difficult to achieve, in part due to the large variety of household devices and/or electric vehicles (e.g., make, model, year, etc.). Advancements in metering devices have provided some opportunities, yet successful disaggregation has still been elusive. The limited availability of labeled data sets, or source location energy usage values with labeled device energy usage values (e.g., energy usage values for a household labeled with energy usage values for appliance 1, electric vehicle 1, appliance 2, and the like), has further impeded progress. NILM and disaggregation techniques that can learn from these limited data sets to successfully predict energy usage of a target device from the general energy usage at a source location would therefore greatly improve the technological field and benefit users that implement these techniques.

SUMMARY

[0003] Embodiments generate machine learning predictions to discover targeted device energy usage. Embodiments train a first machine learning model to predict a presence of a first device, where the training data used to train the first machine learning model is deficient for a second device. Embodiments train a second machine learning model to predict a presence of a second device. Embodiments receive input data of household energy use and weather data and, based on the input data, use the trained first machine learning model to predict the presence of the first device per household. Based on the input data, embodiments use the trained second machine learning model to predict the presence of the second device per household. Embodiments then subtract the households predicted to have the second device from the households predicted to have the first device to generate a prediction of households that have the first device.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Further embodiments, details, advantages, and modifications will become apparent from the following detailed description of the embodiments, which is to be taken in conjunction with the accompanying drawings.

[0005] Fig. 1 illustrates a system for generating machine learning predictions to discover target device energy usage according to an example embodiment.

[0006] Fig. 2 illustrates a block diagram of a computing device operatively coupled to a system according to an example embodiment.

[0007] Fig. 3 illustrates an architecture for using a machine learning model to discover the presence of target device energy usage within household energy usage according to an example embodiment.

[0008] Figs. 4A-4C illustrate sample neural networks according to example embodiments.

[0009] Figs. 5A-5B illustrate sample graphs that represent device specific energy usage presence prediction results and accuracy according to example embodiments.

[0010] Fig. 6 illustrates an architecture for using multiple machine learning models to discover the presence of target device energy usage within household energy usage according to an example embodiment.

[0011] Fig. 7 illustrates a flow diagram for training a machine learning model to discover the presence of target device energy usage according to an example embodiment.

[0012] Fig. 8 illustrates a flow diagram for generating machine learning predictions to discover target device energy usage according to an example embodiment.

[0013] Fig. 9 illustrates a graphical output generated by the analytics module and/or the output module of Fig. 6 in accordance with embodiments.

[0014] Fig. 10 illustrates a flow diagram for generating machine learning predictions of the presence of a target device using multiple trained machine learning models according to an example embodiment.

DETAILED DESCRIPTION

[0015] One embodiment detects the presence of a targeted device using two different trained machine learning models. The first model detects the presence of the targeted device and has been trained with a dataset generally lacking a first device. The second model detects the presence of the first device. The outputs of the first and second models are combined to provide a more accurate presence detection of the targeted device than that provided by the first model alone.
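As an illustrative, non-limiting sketch, the combination of the two model outputs can be expressed as a set subtraction over households. The household identifiers, device names, and the `combine_predictions` helper below are assumptions for illustration only, not the disclosed implementation.

```python
# Hypothetical sketch: subtract households flagged by the second model
# (e.g., the confounding device) from households flagged by the first
# model (e.g., the targeted device) to refine the targeted-device set.

def combine_predictions(first_model_positives, second_model_positives):
    """Return households predicted to have the targeted device after
    removing households flagged for the confounding device."""
    return set(first_model_positives) - set(second_model_positives)

# Households the first model predicts to have the targeted device:
target_candidates = {"hh_1", "hh_2", "hh_3", "hh_4"}
# Households the second model predicts to have the confounding device:
confound_households = {"hh_2", "hh_4"}

print(sorted(combine_predictions(target_candidates, confound_households)))
# ['hh_1', 'hh_3']
```

In this sketch the second model acts purely as a filter on the first model's positive predictions, which mirrors the subtraction recited in the claims.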

[0016] Embodiments train at least two machine learning models using labeled energy usage data. For example, a machine learning model can be designed/selected, such as a neural network. Energy usage data from multiple source locations (e.g., households) can be obtained, where the energy usage data can be labeled with device specific energy usage. For example, the household energy usage values can cover a period of time, and within that period of time individual device energy usage values (e.g., appliance 1 , electric vehicle 1 , appliance 2, and the like) can be labeled. In some embodiments, this household and device specific energy usage can then be processed to generate training data for the machine learning model.

[0017] In some embodiments, instances of household energy usage can be received over a period of time. For example, household energy usage can be received at a certain granularity level (e.g., every 15 minutes, every 30 minutes, hourly, and the like) over a period of time (e.g., a week, two weeks, a month, and the like). In some embodiments, the trained machine learning model can generate multiple prediction instances for each instance of household energy usage data (e.g., four weekly predictions over a month of time). An overall prediction can then be generated based on the multiple prediction instances. For example, analytics may be performed on the prediction instances to arrive at an overall prediction about the presence of target energy usage within household energy usage over the period of time.
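A minimal sketch of combining multiple prediction instances into an overall prediction is shown below. The patent does not specify the combining function; the arithmetic mean used here is an assumed choice, and the weekly confidence values are illustrative.

```python
# Hypothetical sketch: combine per-window prediction instances (e.g.,
# four weekly presence confidences over a month) into one overall
# confidence. The mean is an assumed combining function.

def overall_confidence(instance_confidences):
    """Average the confidence values of individual prediction instances."""
    return sum(instance_confidences) / len(instance_confidences)

weekly = [0.91, 0.88, 0.95, 0.90]  # four weekly presence confidences
print(round(overall_confidence(weekly), 3))  # 0.91
```

Other combining functions (e.g., a maximum, a weighted mean, or a vote over thresholded instances) would fit the same interface.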

[0018] Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be apparent to one of ordinary skill in the art that the present disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. Wherever possible, like reference numbers will be used for like elements.

[0019] Fig. 1 illustrates a system for generating machine learning predictions to discover target device energy usage according to an example embodiment. System 100 includes source location 102, meter 104, source locations 106, meters 108, devices 110, 112, and 114, and network node 116. Source location 102 can be any suitable location that includes or is otherwise associated with devices that consume or produce energy, such as a household with devices 110, 112, and 114. In some embodiments, devices 110, 112, and 114 can be electrical appliances and/or electric vehicles that use energy, such as a washer, dryer, air conditioner, heater, refrigerator, television, computing device, and the like. For example, source location 102 can be supplied with power (e.g., electricity), and devices 110, 112, and 114 can draw from the power supplied to source location 102. In some embodiments, source location 102 is a household and the power to the household is supplied from an electric power grid, a local power source (e.g., solar panels), a combination of these, or any other suitable source.

[0020] In some embodiments, meter 104 can be used to monitor the energy usage (e.g., electricity usage) at source location 102. For example, meter 104 can be a smart meter, an advanced metering infrastructure (“AMI”) meter, an automatic meter reading (“AMR”) meter, a simple energy usage meter, and the like. In some embodiments, meter 104 can transmit information about the energy usage at source location 102 to a central power system, a supplier, a third party, or any other suitable entity. For example, meter 104 can implement two-way communication with an entity in order to communicate the energy usage at source location 102. In some embodiments, meter 104 may implement one-way communication with an entity, where meter readings are transmitted to the entity.

[0021] In some embodiments, meter 104 can communicate over wired communication links and/or wireless communication links, and can leverage wireless communication protocols (e.g., cellular technology), Wi-Fi, wireless ad hoc networks over Wi-Fi, wireless mesh networks, low power long range wireless (“LoRa”), ZigBee, Wi-SUN, wireless local area networks, wired local area networks, and the like. Devices 110, 112, and 114 (and other devices not depicted) can use energy at source location 102, and meter 104 can monitor the energy usage for the source location and report the corresponding data (e.g., to network node 116).

[0022] In some embodiments, source locations 106 and meters 108 can be similar to source location 102 and meter 104. For example, network node 116 can receive energy usage information about source location 102 and source locations 106 from meter 104 and meters 108. In some embodiments, network node 116 can be part of a central power system, a supplier, a power grid, an analytics service provider, a third-party entity, or any other suitable entity.

[0023] The following description includes recitations of a criterion or criteria. These terms are used interchangeably throughout the disclosure; the scope of criteria is intended to include the scope of criterion, and the scope of criterion is intended to include criteria.

[0024] Embodiments use the total energy usage from a household provided by metering infrastructure (e.g., advanced metering infrastructure (“AMI”), simple metering infrastructure, and the like) to accurately predict the presence of target device energy usage within household energy usage. The domain of non-intrusive load monitoring (“NILM”) and other varieties of energy usage detection have drawn significant interest. Accurate device specific energy usage discovery (e.g., via NILM or NILM like techniques) provides many benefits including energy savings opportunities, personalization, improved electric grid planning, and more.

[0025] Embodiments utilize a deep learning scheme that can, based on limited or deficient training sets (e.g., based on a non-representative or poorly representative sample of households), accurately predict the presence of target device energy usage, such as energy usage from an electric vehicle or a household appliance. As disclosed herein, a “deficient” training set is a training set that is generated from a sample of households that is not fully representative. For example, a set of households used to generate a training dataset may include 0% or 2% of households that have solar power, while a typical sample of households may include 5%-10% that have solar power. The less representative the sample of households is, the more “deficient” the corresponding generated training set is considered. Accurate discovery can be challenging due to the variety of energy consuming devices, such as those in a common household (e.g., large electric appliances and electric vehicles), and their corresponding use conditions. In addition, in the NILM domain, the availability of training data can be limited. Therefore, learning schemes that can maximize the benefits of training datasets can be particularly effective. In embodiments, training data can be used to train learning models designed to effectively learn in these challenging conditions. Input to the learning models can be provided by AMI or non-AMI (e.g., simple infrastructure) metering along with other types of inputs.

[0026] Conventional NILM implementations with existing learning schemes have their own shortcomings. Some of the previously proposed approaches are built on combinatorial optimization, Bayesian methods, Hidden Markov Models, or deep learning. However, many of these models are not useful in real world scenarios because they suffer from various drawbacks. For example, some of these solutions are computationally expensive, and thus impractical. Others require high resolution/granularity input (e.g., AMI data or training data) under specific circumstances that are often unavailable or impractical given deployed metering capabilities.

[0027] Embodiments achieve several benefits over these conventional approaches. For example, embodiments support higher levels of accuracy, which continue to improve over time with newer data. Embodiments also implement machine learning models with improved generalization. For example, some model implementations are trained on a large and diverse set of energy usage data obtained from a variety of different locations, and improved results are obtained across various geographic locations for these implementations.

[0028] Embodiments also improve resource and time efficiency for model training and performance. For example, some deep learning models may have extensive resource requirements for training/performance, and these requirements can equate to hundreds of thousands of dollars, and in some cases millions of dollars. Embodiments achieve efficient resource and compute times for model training and performance. In addition, model scoring is similarly achieved under efficient timing requirements. For example, scoring can be achieved in milliseconds in some embodiments.

[0029] Fig. 2 is a block diagram of a computer server/system 200 in accordance with embodiments. All or portions of system 200 may be used to implement any of the elements shown in Fig. 1. As shown in Fig. 2, system 200 may include a bus device 212 and/or other communication mechanism(s) configured to communicate information between the various components of system 200, such as processor 222 and memory 214. In addition, communication device 220 may enable connectivity between processor 222 and other devices by encoding data to be sent from processor 222 to another device over a network (not shown) and decoding data received from another system over the network for processor 222.

[0030] For example, communication device 220 may include a network interface card that is configured to provide wireless network communications. A variety of wireless communication techniques may be used including infrared, radio, Bluetooth®, Wi-Fi, and/or cellular communications. Alternatively, communication device 220 may be configured to provide wired network connection(s), such as an Ethernet connection.

[0031] Processor 222 may include one or more general or specific purpose processors to perform computation and control functions of system 200. Processor 222 may include a single integrated circuit, such as a micro-processing device, or may include multiple integrated circuit devices and/or circuit boards working in cooperation to accomplish the functions of processor 222. In addition, processor 222 may execute computer programs, such as operating system 215, prediction tool 216, and other applications 218, stored within memory 214.

[0032] System 200 may include memory 214 for storing information and instructions for execution by processor 222. Memory 214 may contain various components for retrieving, presenting, modifying, and storing data. For example, memory 214 may store software modules that provide functionality when executed by processor 222. The modules may include an operating system 215 that provides operating system functionality for system 200, a prediction tool 216 that implements the target device presence prediction functionality disclosed herein, and other application modules 218. In some instances, prediction tool 216 may be implemented as an in-memory configuration. In some implementations, when system 200 executes the functionality of prediction tool 216, it implements a non-conventional specialized computer system that performs the functionality disclosed herein.

[0033] Non-transitory memory 214 may include a variety of computer-readable media that may be accessed by processor 222. For example, memory 214 may include any combination of random access memory (“RAM”), dynamic RAM (“DRAM”), static RAM (“SRAM”), read only memory (“ROM”), flash memory, cache memory, and/or any other type of non-transitory computer-readable medium. Processor 222 is further coupled via bus 212 to a display 224, such as a Liquid Crystal Display (“LCD”). A keyboard 226 and a cursor control device 228, such as a computer mouse, are further coupled to bus 212 to enable a user to interface with system 200.

[0034] In some embodiments, system 200 can be part of a larger system. Therefore, system 200 can include one or more additional functional modules 218 to provide the additional functionality. Other application modules 218 may include various modules of Oracle® Utilities Customer Cloud Service, Oracle® Cloud Infrastructure, Oracle® Cloud Platform, or Oracle® Cloud Applications, for example. Prediction tool 216, other application modules 218, and any other suitable component of system 200 can include various modules of Oracle® Data Science Cloud Service, Oracle® Data Integration Service, or other suitable Oracle® products or services.

[0035] A database 217 is coupled to bus 212 to provide centralized storage for modules 216 and 218 and to store, for example, data received by prediction tool 216 or other data sources. Database 217 can store data in an integrated collection of logically related records or files. Database 217 can be an operational database, an analytical database, a data warehouse, a distributed database, an end-user database, an external database, a navigational database, an in-memory database, a document-oriented database, a real-time database, a relational database, an object-oriented database, a non-relational database, a NoSQL database, Hadoop® distributed file system (“HDFS”), or any other database known in the art.

[0036] Although shown as a single system, the functionality of system 200 may be implemented as a distributed system. For example, memory 214 and processor 222 may be distributed across multiple different computers that collectively represent system 200. In one embodiment, system 200 may be part of a device (e.g., smartphone, tablet, computer, etc.). In an embodiment, system 200 may be separate from the device, and may remotely provide the disclosed functionality for the device. Further, one or more components of system 200 may not be included. For example, for functionality as a user or consumer device, system 200 may be a smartphone or other wireless device that includes a processor, memory, and a display, does not include one or more of the other components shown in Fig. 2, and includes additional components not shown in Fig. 2, such as an antenna, transceiver, or any other suitable wireless device component.

[0037] Fig. 3 illustrates an architecture for using a machine learning model to discover the presence of target device energy usage within household energy usage according to an example embodiment. System 300 includes input data 302, processing module 304, prediction module 306, training data 308, analytics module 310, and output data 312. In some embodiments, input data 302 can include energy usage from a source location, and the data can be processed by processing module 304. For example, processing module 304 can process input data 302 to generate features based on the input data.

[0038] In some embodiments, prediction module 306 can be a machine learning module (e.g., neural network) that is trained by training data 308. For example, training data 308 can include labeled data, such as energy usage data values from a plurality of source locations (e.g., source locations 102 and 106 from Fig. 1) that include labels that indicate the presence of target device energy usage. In some embodiments, the output from processing module 304, such as the processed input, can be fed as input to prediction module 306. Prediction module 306 can generate instances of target device presence predictions, such as multiple predictions over a period of time about the presence of target device energy usage within the overall source location energy usage.

[0039] In some embodiments, analytics module 310 can analyze the multiple instances of target device presence predictions. For example, a mathematical function can be used to combine the instances of the presence predictions to generate an overall prediction about whether source energy usage data measured over a period of time includes the presence of target device energy usage. In some embodiments, the overall predictions can be output data 312. For example, input data 302 can be source location energy usage data and output data 312 can be a prediction (e.g., confidence value) about the presence of target device energy usage within the source location energy usage data (e.g., over the period of time). As disclosed below, the confidence value can be translated into a “Yes”, “No”, or “Maybe” determination on the presence of a target device.
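A minimal sketch of translating a confidence value into a “Yes”/“No”/“Maybe” determination is shown below. The claims state only that the cutoffs are predefined; the 0.8 and 0.2 thresholds here are assumed values for illustration.

```python
# Hypothetical sketch: map a presence-confidence value to the
# Yes/No/Maybe output described above. Threshold values are assumed,
# not taken from the disclosure.

def presence_label(confidence, yes_threshold=0.8, no_threshold=0.2):
    """Translate a confidence in [0, 1] into a presence determination."""
    if confidence >= yes_threshold:
        return "Yes"
    if confidence <= no_threshold:
        return "No"
    return "Maybe"

for c in (0.93, 0.55, 0.07):
    print(c, presence_label(c))
```

Expected behavior under these assumed thresholds: 0.93 maps to “Yes”, 0.55 to “Maybe”, and 0.07 to “No”.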

[0040] In some embodiments, analytics module 310 can be provided input from two different prediction modules, each from a different trained model, to more accurately predict the presence of a target device, such as EV L1 or an electric heater, via a combination of the two outputs. In some embodiments, the overall prediction for the target device can be output data 312.

[0041] Embodiments use machine learning models, such as neural networks, to predict the presence of target device energy usage. Neural networks can include multiple nodes called neurons that are connected to other neurons via links or synapses. Some implementations of neural networks can be aimed at classification tasks and/or can be trained under supervised learning techniques. In many cases, labeled data can include features that help in achieving a prediction task (e.g., energy usage classification/prediction). In some embodiments, neurons in a trained neural network can perform a small mathematical operation on given input data, where their corresponding weights (or relevance) can be used to produce an operand (e.g., produced in part by applying a non-linearity) to be passed further into the network or given as the output. A synapse can connect two neurons with a corresponding weight/relevance. In some embodiments, prediction module 306 from Fig. 3 can be a neural network.
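The neuron operation described above, a weighted sum of inputs passed through a non-linearity, can be sketched as follows. The sigmoid activation and the input/weight values are assumed choices for illustration; the disclosure does not fix a particular activation function.

```python
import math

# Minimal sketch of a single neuron: weighted sum of inputs plus a
# bias, passed through a sigmoid non-linearity (an assumed choice).

def neuron(inputs, weights, bias):
    """Compute one neuron's output from inputs, synapse weights, and bias."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

out = neuron([0.5, 1.2], [0.4, -0.3], 0.1)
print(round(out, 3))
```

Each synapse contributes one `x * w` term to the weighted sum, matching the weight/relevance description above.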

[0042] In some embodiments, a neural network can be used to learn trends within labeled (or surveyed) energy usage data (e.g., household energy usage data values, over a period of time, labeled with either device specific energy usage or a general device label). For example, the training data can include features and these features can be used by a neural network (or other learning model) to identify trends and predict the presence of target device energy usage from overall source location energy usage. In some embodiments, once a model is trained/ready it can be deployed. Embodiments can be implemented with a number of products or services (e.g., Oracle® products or services).

[0043] In some embodiments, the design of prediction module 306 can include any suitable machine learning model components (e.g., a neural network, support vector machine, specialized regression model, and the like). For example, a neural network can be implemented along with a given cost function (e.g., for training/gradient calculation). The neural network can include any number of hidden layers (e.g., 0, 1, 2, 3, or many more), and can include feed forward neural networks, recurrent neural networks, convolutional neural networks, modular neural networks, and any other suitable type.

[0044] Figs. 4A-4C illustrate sample neural networks according to example embodiments. Neural network 400 of Fig. 4A includes layers 402, 404, 406, 408, and 410. In some embodiments, neural network 400 can be a convolutional neural network with one or more of kernels 412, 414, 416, 418, 420, and 422. For example, at a given layer of a convolutional neural network, one or more filters or kernels can be applied to the input data of the layer. Kernels 412, 414, and 416 are illustrated as 1-dimensional kernels (e.g., 1 x n) and kernels 418, 420, and 422 are illustrated as 2-dimensional kernels (e.g., n x m), but any other suitable shape can be implemented.

[0045] In some embodiments, layers 402, 404, and 406 are convolutional layers, kernel 412 is applied at layer 402, kernel 414 is applied at layer 404, and kernel 416 is applied at layer 406. In some embodiments, layers 402, 404, and 406 are convolutional layers, kernel 418 is applied at layer 402, kernel 420 is applied at layer 404, and kernel 422 is applied at layer 406. The shape of the data and the underlying data values can be changed from input to output depending on the shape of the applied filter or kernel (e.g., 1 x 1, 1 x 2, 1 x 3, 1 x 4, 2 x 1, 2 x 2, 2 x 3, 3 x 2, and the like), the manner in which the filter or kernel is applied (e.g., mathematical application), and other parameters (e.g., stride). In embodiments, kernels 412, 414, and 416 can have one consistent shape among them, two different shapes, or three different shapes (e.g., all the kernels are different sizes) and/or kernels 418, 420, and 422 can have one consistent shape among them, two different shapes, or three different shapes.
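To illustrate how the kernel shape and the stride parameter change the output shape, a minimal 1-dimensional "valid" convolution can be sketched as follows (an illustrative example; the function and variable names are assumptions, not part of the disclosure):

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Apply a 1 x n kernel to a 1-dimensional input with a given stride
    (valid convolution, no padding). Illustrates how kernel shape and
    stride determine the output shape."""
    n = len(kernel)
    out_len = (len(signal) - n) // stride + 1
    return np.array([np.dot(signal[i * stride : i * stride + n], kernel)
                     for i in range(out_len)])

usage = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # e.g., hourly usage values
print(conv1d(usage, np.array([0.5, 0.5])))          # 1 x 2 kernel -> length 5
print(conv1d(usage, np.array([0.5, 0.5]), stride=2))  # stride 2 -> length 3
```

Note how a larger kernel or stride shrinks the output, consistent with the shape changes described above.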

[0046] In some instances, the layers of a convolutional neural network can be heterogeneous and can include different mixes/sequences of convolution layers, pooling layers, fully connected layers (e.g., akin to applying a 1 x 1 filter), and the like. In some embodiments, layers 408 and 410 can be fully connected layers. Accordingly, embodiments of neural network 400 illustrate a feed forward convolutional neural network with a number of convolution layers (e.g., implementing one or more dimensions of filters or kernels) followed by fully connected layers. Embodiments can implement any other suitable convolutional neural networks.

[0047] Neural network 430 of Fig. 4B includes layers 432, 434, 436, 438, 440, and 442, as well as kernels 444, 446, 448, 450, 452, and 454. Neural network 430 can be similar to neural network 400 of Fig. 4A, however layers 432, 434, and 436 can be convolutional layers with a parallel orientation in some embodiments, and layer 438 can be a concatenation layer that concatenates the output of layers 432, 434, and 436. For example, the input from an input layer can be fed into each of layers 432, 434, and 436, where the output from these layers is concatenated at layer 438.

[0048] In some embodiments, kernels 444, 446, and 448 can be similar to kernels 412, 414, and 416 of Fig. 4A and kernels 450, 452, and 454 can be similar to kernels 418, 420, and 422 of Fig. 4A. For example, kernels 444, 446, and 448 are illustrated as 1-dimensional kernels and kernels 450, 452, and 454 are illustrated as 2-dimensional kernels, but any other suitable shape can be implemented. In embodiments, kernels 444, 446, and 448 can have one consistent shape among them, two different shapes, or three different shapes (e.g., all the kernels are different sizes) and/or kernels 450, 452, and 454 can have one consistent shape among them, two different shapes, or three different shapes.

[0049] In some instances, the layers of a convolutional neural network can be heterogeneous and can include different mixes/sequences of convolution layers, pooling layers, fully connected, parallel layers, concatenation layers, and the like. For example, layers 432, 434, and 436 can represent three parallel layers, however a larger or smaller number of parallel layers can be implemented. Similarly, the output from each of layers 432, 434, and 436 is depicted as input to layer 438, which is a concatenation layer in some embodiments, however one or more of layers 432, 434, and 436 can include additional convolutional or other layers prior to a concatenation layer. For example, one or more convolutional or other layers can be present between layer 432 (e.g., a convolutional layer) and layer 438 (e.g., a concatenation layer). In some embodiments, another convolutional layer (with another kernel) can be implemented between layer 432 and 438 while no such intervening layer is implemented for layer 434. In other words, in this example an input to layer 432 can pass through another convolutional layer prior to being input to layer 438 (e.g., a concatenation layer) while an input to layer 434 is output directly to layer 438 (without another convolutional layer).

[0050] In some embodiments, layers 432, 434, 436, and 438 (e.g., three parallel convolutional layers and a concatenation layer) can represent a block within neural network 430, and one or more additional blocks can be implemented before or after the depicted block. For example, a block can be characterized by at least two parallel convolutional layers followed by a concatenation layer. In some embodiments, a number of additional convolutional layers (e.g., more than two) with various parallel structures can be implemented as a block.
Neural network 430 illustrates an embodiment of a feed forward convolutional neural network with a number of convolution layers (e.g., implementing 1 -dimensional and/or 2-dimensional filters or kernels) with a parallel orientation followed by fully connected layers. Embodiments can implement any other suitable convolutional neural networks.
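A block of parallel convolutional layers followed by a concatenation layer, as described above, can be sketched minimally as follows (an illustrative example using 1-dimensional valid convolutions; the kernels, shapes, and function names are assumptions for illustration):

```python
import numpy as np

def conv_valid(x, kernel):
    """Valid 1-dimensional convolution (cross-correlation) with stride 1."""
    n = len(kernel)
    return np.array([np.dot(x[i:i + n], kernel) for i in range(len(x) - n + 1)])

def parallel_block(x, kernels):
    """Sketch of a block with parallel convolutional layers followed by a
    concatenation layer: each kernel is applied to the same input, and the
    parallel outputs are concatenated into one feature vector."""
    return np.concatenate([conv_valid(x, k) for k in kernels])

x = np.arange(8, dtype=float)                 # e.g., eight usage readings
out = parallel_block(x, [np.array([1.0, -1.0]),          # 1 x 2 kernel
                         np.array([1/3, 1/3, 1/3])])     # 1 x 3 kernel
print(out.shape)   # lengths 7 and 6 concatenated -> (13,)
```

Kernels of different shapes produce differently sized outputs, which the concatenation layer joins before the fully connected layers that follow.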

[0051] In some embodiments, the layers of neural networks 400 and 430 may be any other suitable neural network layers (e.g., layers that do not implement a convolutional kernel). For example, a mix of layers can be implemented in a neural network in some embodiments, where convolutional layers and/or convolutional neutral networks and/or convolutional blocks are implemented with other layers, networks, or blocks.

[0052] Fig. 4C depicts neural network 460, which can include blocks 462, 464, 466, and 468. In some embodiments, neural network 460 can be a recurrent neural network (“RNN”). For example, inputs 470, 472, 474, and 476 can represent an input sequence into neural network 460 (e.g., energy usage data as a sequence over time) and outputs 478, 480, 482, and 484 can represent an output sequence. In some embodiments, blocks 462, 464, 466, and 468 can be any suitable recurrent neural network block, such as a Long Short-Term Memory (“LSTM”) block, a Gated Recurrent Unit (“GRU”) block, a block of a second order RNN, a simple recurrent network block, a block with suitable weight matrices (e.g., weight matrix for input, weight matrix for calculation of hidden state, and weight matrix for calculation of output), a block with any suitable combination of weight matrices, activation functions, and/or gates, and the like.

[0053] In some embodiments, the blocks of neural network 460 pass a hidden state (or multiple hidden state values) when processing a sequence of input. For example, block 462 can calculate output 478 and a hidden state that is passed to block 464. Similarly, block 464 can calculate output 480 and a hidden state that is passed to block 466, and so on. In some embodiments, neural network 460 can include several more blocks (e.g., more than five) and include several more layers (e.g., more than one layer of blocks). In some embodiments, neural network 460 can be a bi-directional neural network (e.g., a hidden state is passed forward and backward among the blocks).
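The hidden-state passing described above can be sketched with a simple recurrent block using the three weight matrices mentioned in paragraph [0052] (a minimal illustration with a tanh activation; the matrix shapes and function name are assumptions, not part of the disclosure):

```python
import numpy as np

def simple_rnn(inputs, W_in, W_h, W_out):
    """Minimal recurrent network: each step combines the current input with
    the hidden state passed from the previous block, emits a per-step
    output, and forwards the updated hidden state to the next block."""
    h = np.zeros(W_h.shape[0])
    outputs = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_h @ h)   # hidden state passed to next block
        outputs.append(W_out @ h)          # per-step output
    return outputs, h

rng = np.random.default_rng(0)
seq = [rng.random(3) for _ in range(4)]    # e.g., 4 time steps of usage features
outs, final_h = simple_rnn(seq,
                           rng.random((5, 3)),   # input weights
                           rng.random((5, 5)),   # hidden-state weights
                           rng.random((1, 5)))   # output weights
```

A bi-directional variant would run a second pass over the sequence in reverse and combine the two hidden states per step.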

[0054] Embodiments implement a neural network that includes a heterogeneous mix of architectures. For example, an implemented neural network may include layers of a CNN combined with layers of an RNN. In some embodiments, a CNN architecture (e.g., serial convolutional layers, parallel convolutional layers and/or concatenation layers, pooling layers, fully connected layers, and the like) is combined with an RNN architecture (e.g., blocks of an RNN neural network) such that the output from the CNN architecture serves as input to the RNN architecture. For example, input data (e.g., energy usage data as a sequence over time) may be processed by the CNN architecture, output from the CNN architecture can be input to the RNN architecture, and the RNN architecture may process its input data to generate a prediction result. In some embodiments, one or more fully connected layers (with a suitable activation function) can follow the CNN architecture, RNN architecture, or both. Other implementations can include other stacks of neural network architectures, layers, and/or other suitable orientation of neural network components.

[0055] In some embodiments, the neural network can be configured for deep learning, for example based on the number of neural network layers implemented. In some examples, a Bayesian network can be similarly implemented, or other types of supervised learning models. For example, a support vector machine can be implemented, in some instances along with one or more kernels (e.g., gaussian kernel, linear kernel, and the like).

[0056] In some embodiments, testing instances can be given to the model to calculate its accuracy. For example, referring back to Fig. 3, a portion (e.g., validation data) of training data 308/labeled or surveyed energy usage data can be reserved for testing the trained model (e.g., rather than training the model). The accuracy measurement can be used to tune prediction module 306. In some embodiments, accuracy assessment can be based on a subset of the training data/processed data. For example, a subset of the data can be used to assess the accuracy of a trained model (e.g., a 75% to 25% ratio for training to testing/validation, and the like). In some embodiments, the data can be randomly selected for the testing and training segments over various iterations of the testing.

[0057] In some embodiments, when testing, the trained model can output a prediction (e.g., confidence value, such as a number between 0 and 1, or “Yes”, “No”, or “Maybe”) that indicates the presence of target device energy usage within the given input (e.g., instance of testing data). For example, an instance of testing data can be energy usage data for a given source location (e.g., household) over a period of time that includes a label that indicates whether the target device energy usage is present. Because the presence of target device energy usage is known for the given input/testing instance, the predicted value can be compared to the known value to generate an accuracy metric. Based on testing the trained model using multiple instances of testing data, an accuracy for the trained model can be assessed.
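The train/test split and accuracy assessment described above can be sketched as follows (an illustrative example; the 75%/25% ratio mirrors the example given, and `predict_fn` is a hypothetical stand-in for a trained prediction module):

```python
import random

def split_and_score(instances, labels, predict_fn, test_ratio=0.25, seed=0):
    """Reserve a random portion of labeled data for testing (here 25%,
    mirroring the 75/25 example ratio), then compute accuracy by comparing
    predictions with the known labels of the reserved instances."""
    idx = list(range(len(instances)))
    random.Random(seed).shuffle(idx)      # random selection per iteration
    n_test = int(len(idx) * test_ratio)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    correct = sum(predict_fn(instances[i]) == labels[i] for i in test_idx)
    return train_idx, test_idx, correct / max(n_test, 1)
```

Varying the seed over iterations yields different random training/testing segments, as described above.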

[0058] In some embodiments, the design of prediction module 306 can be tuned based on accuracy calculations during training, retraining, and/or updated training. For example, tuning can include adjusting a number of hidden layers in a neural network, adjusting a kernel calculation (e.g., used to implement a support vector machine or neural network), and the like. This tuning can also include adjusting/selecting features used by the machine learning model, adjustments to the processing of input data, and the like. Embodiments include implementing various tuning configurations (e.g., different versions of the machine learning model and features) while training/calculating accuracy in order to arrive at a configuration for prediction module 306 that, when trained, achieves desired performance (e.g., performs predictions at a desired level of accuracy, runs according to desired resource utilization/time metrics, and the like). In some embodiments, the trained model can be saved or stored for further use and for preserving its state. For example, the training of prediction module 306 can be performed “off-line” and the trained model can then be stored and used as needed to achieve time and resource efficient data prediction.

[0059] Embodiments of prediction module 306 are trained to identify the presence of target device energy usage within overall source location (e.g., household) energy usage data based on processed training data. An example of energy usage data that can be processed to generate training data 308 includes:

Table 1: Pre-processed Source Location Energy Usage Data

[0060] A sample row of this example data includes the columns: identifier, timestamp, total (energy usage), and labeled device specific energy usage (e.g., air conditioner, electric vehicle, refrigerator, and the like). This example includes a granularity of 15 minutes, but other suitable granularities can similarly be implemented (e.g., 1 min, 5 mins, 30 mins, 1 hour, hours, a day, days, a week, weeks, a month, and the like). In some embodiments, processing the energy usage data (e.g., to generate training data 308) can include reducing a granularity of the data, for example so that it can be used to generate a training corpus with a consistent granularity (e.g., 1 hour, 1 day, 1 week, 1 month, and the like). Such granularity reduction can be achieved by summing the data usage values over the components that make up a unit of time (e.g., summing the data usage values over the four 15-minute intervals that make up an hour).

[0061] Embodiments include a target device (e.g., electric vehicle) and/or a set of target devices to be included within training data 308. For example, the training of prediction module 306 can be configured to identify the presence of target device energy usage within overall energy usage, and the training data can include labeled data usage for a sole target device or multiple target devices.
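The granularity reduction described above (summing the intervals that make up a larger unit of time) can be sketched as follows (an illustrative example; the function name and the handling of trailing partial intervals are assumptions):

```python
def reduce_granularity(readings, factor=4):
    """Reduce granularity by summing consecutive usage values, e.g., the
    four 15-minute readings that make up an hour (factor=4). Trailing
    readings that do not fill a complete interval are dropped in this
    sketch; a real pipeline would define its own policy for them."""
    return [sum(readings[i:i + factor])
            for i in range(0, len(readings) - factor + 1, factor)]

quarter_hourly = [0.25, 0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5]  # two hours, kWh
print(reduce_granularity(quarter_hourly))  # -> [1.0, 2.0]
```

The same function with a different `factor` would roll hourly values up to daily totals, and so on.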

[0062] Embodiments include a set of devices to be included within training data 308. For example, the training of prediction module 306 can be configured to generate target device discovery predictions. However, the training can utilize labeled data usage for a set of other devices in addition to the target device. In some embodiments, the set of other devices can be based on the energy usage data and/or device specific labeled data values available for training purposes. Training data is often limited, and therefore training techniques that leverage the available training data are often beneficial. In some embodiments, the set of other devices used within the training techniques can be based on device diversity within available training data, the different combinations of devices at given source locations, and/or the frequency of appearance for different combinations of devices within available training data.

[0063] In some embodiments, pre-processing data can include selecting, from Table 1 , a subset of columns, a subset of rows, an aggregation (or some other mathematical/combinatorial function) of data, and other suitable processing. For example, data cleaning, normalization, scaling, or other processing used to render the data suitable for machine learning can be performed. In some embodiments, training data can include labeled data (e.g., target device labeled data with metered energy usage) and surveyed data (e.g., target device labeled data without metered energy usage). For example, labeled data can include total energy usage data at a source location as well as labeled energy usage data for the target device.

[0064] Surveyed data can include an indication of the presence of the target device (e.g., at the source location), such as indicated through a survey at a particular date. In some embodiments, survey responses about the target device can be associated with a temporal property (e.g., response date to the survey) and with a given source location (e.g., household) such that the surveyed data is represented by total energy usage at the given source location proximate to the temporal property with a general label that the data includes target device energy usage (rather than with specific target device energy usage values). In certain implementations, labeled energy usage data is highly useful but scarce. Accordingly, surveyed energy usage data can be combined with labeled energy usage data to bolster the performance of the machine learning model being trained. For example, bolstering labeled energy usage data with surveyed energy usage data can improve general performance of the model and lead to more accurate results.

[0065] In some embodiments, input 302 and/or training data 308 can include information other than energy usage information. For example, weather information relative to the energy usage data (e.g., the weather at the time the energy usage was measured, such as precipitation, temperature, and the like), calendar information relative to the energy usage data (e.g., calendar information at the time the energy usage was measured, such as month, date, day of week, and the like), a time stamp relative to the energy usage data, and other relevant information can be included in input 302 and/or training data 308. For example, the other relevant information associated with an instance of source location energy usage data (e.g., over a day, a week, a month, or any other predetermined duration of energy usage data) can include average temperature, minimum temperature, maximum temperature, average dew point, minimum dew point, and/or maximum dew point.

[0066] Embodiments process energy usage data from source locations (e.g., households) to generate the training data 308 that is used to train prediction module 306. For example, the overall source location energy usage data values can be combined with labeled energy usage data values for one or more target devices (and in some implementations, surveyed energy usage data), and this resultant combination can be processed to arrive at training data 308. In some embodiments, the energy usage data for source locations can be obtained via measurement (e.g., metering). In addition, measurement, metering, or some other technique for receiving/monitoring energy usage for specific devices within the source location can be implemented to generate the device specific labeled energy usage data for training. In other examples, energy usage data that includes source location energy usage and disaggregated target device specific energy within the source location can be obtained from a third party. For example, training data can be obtained in any suitable manner, such as by monitoring source locations (e.g., households) in known circumstances, obtaining data sets publicly (or otherwise) available, developing a joint venture or partnership that results in the training data, and through any other suitable means.

[0067] In some embodiments, multiple instances of source location energy usage data (e.g., that each cover a predetermined duration of time) can be input to trained prediction module 306 such that multiple predictions (e.g., a prediction for each instance) can be generated.
For example, an instance of input data fed into trained prediction module 306 can be a week (or a month) of source location energy usage data (e.g., along with other relevant associated information, such as weather, calendar, and the like), and a prediction (e.g., confidence data/interval value between 0 and 1, a determination of “Yes”, “No”, or “Maybe”, or any other suitable numeric range) can be generated that indicates the presence of target device energy usage within the source location energy usage (e.g., over the week, or other suitable duration of time). In some embodiments, an instance of source location energy usage data can be broken into a given granularity (e.g., 15 minutes, 30 minutes, hourly, multiple hours, daily, multiple days, weekly, monthly, and the like) over the duration of time (e.g., week or month). In other embodiments, the instance of source location energy usage data can have a single data value (or few data values) for the entire duration of time (e.g., energy usage data value that corresponds to a monthly bill).

[0068] In some embodiments, prediction module 306 can be fed these multiple instances of input data (e.g., multiple weeks or months) such that multiple instances of presence predictions (confidence data values) are generated. For example, analytics module 310 can combine the instances of the presence predictions to generate an overall prediction about whether source energy usage data measured over a period of time (e.g., the period of time that spans, or is otherwise associated with, the multiple instances of input data used to generate the instances of presence predictions) includes the presence of target device energy usage. In some embodiments, a mathematical function can be used to combine the instances of the presence predictions to generate an overall prediction about whether source location energy usage data measured over the period of time includes the presence of target device energy usage. For example, the instances of presence predictions can be data values between a given range (e.g., between 0 and 1) and the combining can include an average, minimum, maximum, k-max average, a combination of these, and any other suitable combination technique.
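The combination techniques listed above can be sketched in a few lines (an illustrative example; the function name, method labels, and default `k` are assumptions for illustration):

```python
def combine_predictions(confidences, method="k_max_average", k=3):
    """Combine per-instance presence predictions (values between 0 and 1)
    into one overall prediction. The k-max average takes the mean of the
    k largest confidence values, emphasizing the strongest detections."""
    if method == "average":
        return sum(confidences) / len(confidences)
    if method == "max":
        return max(confidences)
    if method == "min":
        return min(confidences)
    top_k = sorted(confidences, reverse=True)[:k]   # k-max average
    return sum(top_k) / len(top_k)

weekly = [0.1, 0.2, 0.9, 0.8, 0.95, 0.3]   # six weekly instance predictions
print(combine_predictions(weekly))          # mean of the three largest values
```

A k-max average is less sensitive than a plain maximum to a single spurious high-confidence instance, while still rewarding repeated detections.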

[0069] In some embodiments, the combined data value can represent the overall prediction for the presence of target device energy usage over the period of time (e.g., using a value within a predetermined range, such as 0 to 1 ), and can be output as output data 312. For example, input data 302 can be multiple instances of source location energy usage data that span a period of time (e.g., multiple weeks of data that span a few or several months) and output data 312 can be an overall prediction (e.g., confidence data value) that combines multiple instances of presence predictions, where the overall prediction indicates the presence of target device energy usage within the source location energy usage data over the period of time.

[0070] In some embodiments, the output data 312 can be compared to a criteria or threshold (e.g., configurable threshold) to determine whether the overall prediction is a positive (e.g., that the source location energy usage data includes target device energy usage) or a negative (e.g., that the source location energy usage data does not include target device energy usage). Figs. 5A-5B illustrate sample graphs that represent device specific energy usage presence prediction results according to example embodiments. The data depicted in the sample graphs represents tested embodiments disclosed herein for predicting the presence of target device energy from total energy usage at an unseen source location (e.g., household). Fig. 5A depicts a graphical representation of total energy usage data and a predicted confidence data value of the presence of target device energy usage according to some embodiments. In graph 502, time is represented on the x-axis while energy usage (in kWh) is represented on the y-axis.

[0071] With reference to Fig. 5A, the trained prediction model can receive (as input) the total energy usage data values (e.g., processed input data) and generate the graphically represented predictions (e.g., confidence value that the energy usage data includes target device energy usage). The total energy usage data values that serve as input data can include energy usage by a plurality of devices, which can sometimes include the target device(s). In some embodiments, the presence predictions depicted in graph 502 can be instances of presence predictions based on instances of input data (e.g., energy usage data over a predetermined duration of time, such as a week).

[0072] Graph 502 depicts that the trained prediction model begins to predict a high confidence value around June 2019. In other words, around June 2019 (and thereafter) the trained prediction model positively discovers the presence of target device energy usage within the overall source location energy usage. Some embodiments of the predicted confidence values for the target device achieve a high degree of accuracy over multiple weeks of time and other embodiments achieve a high degree of accuracy over multiple months of time, for example depending on the granularity of the input/training data. Any other suitable data granularities, periods of time, or other suitable parameters can be implemented.

[0073] Embodiments implement a threshold confidence data value that indicates the presence of target device energy usage. For example, Fig. 5A demonstrates first output confidence data values prior to June 2019 and second output confidence data values after June 2019, where the first output confidence data values do not indicate the presence of target device energy usage and the second output confidence data values indicate the presence of target device energy usage. Some embodiments utilize a criteria or threshold, where any confidence value greater than the threshold is indicative of a positive prediction for presence of target device energy usage.

[0074] Fig. 5B illustrates graphical output data generated by a trained prediction model according to some embodiments. For example, graph 504 is a precision-recall curve for the output from a trained prediction model and graph 506 is a receiver operator characteristic (“ROC”) curve for the output from a trained prediction model. Graphs 504 and 506 illustrate the accuracy of these outputs based on true positives (“TP”), false positives (“FP”), true negatives (“TN”), and/or false negatives (“FN”).

Graph 504 plots recall (e.g., TP / (TP + FN)) on the x-axis and precision (e.g., TP / (TP + FP)) on the y-axis while graph 506 plots False Positive Rate (e.g., FP / (FP + TN)) on the x-axis and True Positive Rate (e.g., TP / (TP + FN)) on the y-axis. In some embodiments, a threshold can be configured based on the accuracy relationships depicted in graphs 504 and/or 506.

[0075] Some embodiments implement a flexible criteria that can include multiple thresholds based on objective and use case. For example, a “high precision” use case has a relatively higher threshold that is computed empirically (e.g., based on the output data from a trained model achieving high precision) while a “high reach” use case has a relatively lower threshold that is computed empirically (e.g., based on the output data from a trained model balancing precision with a number of source locations captured by the threshold). Other objectives and/or use cases and corresponding thresholds that aim to optimize these objectives/use cases can similarly be implemented. In some embodiments, a threshold value can be derived using precision-recall values for a trained model, ROC values for a trained model, a combination of these, or any other suitable technique can be used to calculate a threshold value (or threshold values).
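The accuracy relationships underlying precision-recall and ROC curves follow directly from the TP/FP/TN/FN counts; a minimal sketch (the function name is an assumption for illustration):

```python
def classification_rates(tp, fp, tn, fn):
    """Compute the quantities plotted in precision-recall and ROC curves
    from true/false positive/negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)    # recall is also the true positive rate
    fpr = fp / (fp + tn)       # false positive rate (ROC x-axis)
    return precision, recall, fpr
```

Sweeping the decision threshold over a trained model's confidence outputs and computing these rates at each threshold traces out curves like graphs 504 and 506, from which a "high precision" or "high reach" threshold can be read off.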

[0076] In some embodiments, multiple machine learning models can be trained, and the outputs of these models can be combined to achieve target device presence prediction. Fig. 6 illustrates an architecture for using multiple machine learning models to discover the presence of target device energy usage within household energy usage according to an example embodiment.

[0077] System 600 includes input data 602, processing module 604, prediction modules 606 and 610, training data 608 and 612, analytics module 614, and output 616. In some embodiments, input data 602 can include energy usage from a source location, and the data can be processed by processing module 604. For example, processing module 604 can process input data 602 to generate features based on the input data. In some embodiments, input data 602 and processing module 604 can be similar to input data 302 and processing module 304 of Fig. 3.

[0078] In some embodiments, prediction modules 606 and 610 can be machine learning modules (e.g., neural networks) that are trained by training data 608 and 612, respectively. For example, training data 608 and 612 can include labeled data, such as energy usage data values from a plurality of source locations (e.g., source locations 102 and 106 from Fig. 1) that include labeled target device specific energy usage data values and/or target device surveyed energy usage data. The output from processing module 604, such as the processed input, can be fed as input to prediction modules 606 and 610. Embodiments of prediction modules 606 and 610 can be similar to prediction module 306 of Fig. 3.

[0079] In some embodiments, training data 608 and 612 can train prediction modules 606 and 610 to predict the presence of target device energy usage from overall source location energy usage. Once trained, prediction modules 606 and 610 may be configured for different varieties of target device presence prediction. For example, training data 608 can train prediction module 606 to discover the presence of a target device given a first set of input data, such as source location energy usage data with a first granularity (e.g., hourly granularity in weekly chunks) while training data 612 can train prediction module 610 to discover the presence of a target device given a second set of input data, such as source location energy usage data with a second granularity (e.g., monthly granularity with a single or few data values). In some embodiments, prediction module 606 can be configured to discover a first variety of target device energy usage (e.g., one or a set of target devices, a particular type of energy usage of a target device, and the like) and prediction module 610 can be configured to discover a second variety of target device energy usage.

[0080] In some embodiments, prediction module 606 can generate multiple instances of first target device presence predictions based on input data 602 and/or prediction module 610 can generate multiple instances of second target device presence predictions based on input data 602. These predictions from prediction modules 606 and 610 can be input to analytics module 614, which can generate one or more target device prediction(s) as output data 616.

[0081] In some embodiments, analytics module 614 can be similar to analytics module 310. For example, analytics module 614 can combine multiple instances of presence predictions from either or both of prediction modules 606 and 610 to generate an overall presence prediction. In some embodiments, prediction module 606 is trained to predict the presence of a first variety of target devices, prediction module 610 is trained to predict the presence of a second variety of target devices, and analytics module 614 combines these predictions from each module to arrive at an overall presence prediction. For example, input data 602 can be overall energy usage data from a source location that is processed by processing module 604 and fed into trained prediction module 606 and trained prediction module 610.

[0082] In some embodiments, each prediction module can generate multiple instances of presence predictions (e.g., confidence data values) that are combined by analytics module 614. Output 616 can be a single confidence data value that indicates the presence of a target device or multiple confidence data values that indicate the presence of different target devices. For example, the instances of presence predictions from prediction module 606 can be combined by analytics module 614 to arrive at a confidence data value for a first target device and the instances of presence predictions from prediction module 610 can be combined by analytics module 614 to arrive at a confidence data value for a second target device. In some embodiments, the instances of presence predictions from prediction modules 606 and 610 can be both combined by analytics module 614 to arrive at a confidence data value for a target device.
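As an illustrative sketch only (not a prescribed implementation), the combination performed by an analytics module such as 614 might resemble the following, where the device labels and confidence values are hypothetical:

```python
from statistics import mean

def combine_instance_predictions(instances_by_device):
    """Combine per-instance confidence values (each between 0 and 1)
    into one overall confidence data value per target device."""
    return {device: mean(confidences)
            for device, confidences in instances_by_device.items()
            if confidences}

# Hypothetical weekly confidence values from two prediction modules
overall = combine_instance_predictions({
    "first_target_device": [0.82, 0.75, 0.91, 0.68],
    "second_target_device": [0.12, 0.05, 0.20, 0.15],
})
```

A simple mean is only one option; paragraph [00105] also mentions minimum, maximum, and k-max average as combination techniques.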

[0083] Embodiments realize several additional advantages that further enhance target device presence predictions. For example, embodiments that implement a flexible threshold can be optimized for precision, reach, a combination of these, or any other suitable use case. Some embodiments can accept a variety of different types of input data/training data. For example, a given trained prediction model can be configured to utilize energy usage data at a first granularity (e.g., hourly granularity using weekly chunks) while another trained prediction model can be configured to utilize energy usage data at a second granularity (e.g., monthly chunks of data with a monthly or weekly granularity). These embodiments are configured to utilize high resolution data when available, thus optimizing predictions using the best available data, and to utilize lower resolution data when high resolution data is not available, thus providing a robust solution that works under a variety of conditions.

[0084] Embodiments can be configured to discover the presence of energy usage from a variety of target devices. For example, using one or more trained prediction models, embodiments can discover the presence of energy usage (within overall source location energy usage) for the following target devices:

• Battery Electric Vehicles (L1 standard and L2 fast charging: 120-V and 240-V)

• Plug-in Hybrid Electric Vehicles (L1 standard and L2 fast charging: 120-V and 240-V)

• Water Heaters

• Washers

• Dryers

• Pool Pumps

• Electric Heating

• Electric Cooling

• Heating, Ventilation, and Air Conditioning (HVAC) Devices

• Photovoltaic/Solar Panels

[0085] Embodiments are also highly extensible in that a variety of additional target devices can be supported with the relevant training data and are highly scalable in modern cloud computing environments. Implemented embodiments have been used to predict target device presence at millions of source locations with promising results.

[0086] Embodiments also support improvements to electrical grid infrastructure. In practice, the demand for electricity experiences peak loads that can cause high costs to utilities that implement electric grids, as well as high environmental costs. To address this issue, utilities aim to better distribute electricity demand and alleviate the risks of peak loads. Because embodiments can identify target device energy consumption from household energy consumption, utilities can target customers with certain target devices in order to accomplish this load distribution. For example, electric vehicle charging may be identified as a factor that exacerbates or mitigates peak load. Embodiments support the discovery of utility customers with electric vehicles, and utilities can then target these customers with load distribution campaigns.

[0087] For example, to serve customers better while balancing loads on the electric grid, time of use (“TOU”) rates can be used that ease the demand on the grid by shifting some of the load to off-peak times. TOU rates can include incentives for customers to charge electric vehicles off peak, such as a “Residential EV Rate” which is priced to induce customers to practice more efficient energy consumption behavior. Further, the discovery of electric vehicle presence at utility customer premises can be a factor in grid planning. For example, the burden that electric vehicle charging places on the electrical grid can be better understood when the number of utility customers with electric vehicles can be reliably estimated. In turn, when planning the electrical grid infrastructure of the future, utilities can better account for the anticipated rise in the number of electric vehicles (and other high capacity battery devices).

[0088] Embodiments can also support utilities’ efforts to migrate customers from natural resource (e.g., gas or fuel) appliances to electrical appliances. For example, because embodiments can discover the presence of target devices, such as electrical appliances, embodiments can also reliably discover the lack of such an electrical device. Once it is understood that a customer lacks an electrical appliance, utilities can target these customers with campaigns to transition them from a gas appliance (e.g., a gas clothes dryer or water heater) to an electric one. Often, natural resource appliances (especially those that are aging) are less energy efficient than their electrical counterparts, and thus such campaigns can improve overall energy efficiency across a region.

[0089] Fig. 7 illustrates a flow diagram for training a machine learning model to discover the presence of target device energy usage (and consequently the presence of that targeted device) according to an example embodiment. In embodiments, for each specific device whose presence is to be detected/predicted, a separate machine learning model is trained in accordance with the functionality of Fig. 7. For example, a separate trained machine learning model is used to detect the presence of electric heating, a different trained machine learning model is used to detect the presence of a Level 1 EV charger, and so on. The presence model predictions are based on confidence interval data indicating whether or not a device is present. In some embodiments, the functionality of Fig. 7 (and of Figs. 8 and 10 below) can be implemented by software stored in memory or another computer-readable or tangible medium, and executed by a processor. In other embodiments, each functionality may be performed by hardware (e.g., through the use of an application specific integrated circuit (“ASIC”), a programmable gate array (“PGA”), a field programmable gate array (“FPGA”), etc.), or any combination of hardware and software. In embodiments, the functionality of Figs. 7, 8 and 10 can be performed by one or more elements of system 200 of Fig. 2.

[0090] At 702, energy usage data from a plurality of source locations can be received, the energy usage data including energy usage by a target device and one or more other devices. For example, the energy usage data can be similar to the data illustrated in Table 1 above, or can include a subset of columns from Table 1. In some embodiments, the received data can include a timestamp, overall energy usage at a source location (e.g., household), and labeled energy usage data values for a target device or a surveyed indication of target device energy usage (e.g., a general label without target device energy usage specific data values). The energy usage data can be received by monitoring energy usage, from a third party, based on a joint venture, or through any other suitable channel or entity.

[0091] At 704, a machine learning model can be configured. For example, a machine learning model, such as a neural network, CNN, RNN, Bayesian network, support vector machine, a combination of any of these, or any other suitable machine learning model, can be configured. Parameters such as a number of layers (e.g., a number of hidden layers), an input shape, an output shape, a breadth, a depth, a direction (e.g., feed forward or bi-directional), activation functions, a type of layer or unit (e.g., gated recurrent unit, long short-term memory, and the like), or other suitable parameters for the machine learning model can be selected. In some embodiments, these configured parameters can be tuned (e.g., adjusted, wholly changed, added, or removed) when training the model.

[0092] In some embodiments, the machine learning model can include a CNN. In this case, parameters such as the types of layers (e.g., convolutional, pooling, fully connected, and the like), kernel size and type, stride, and other parameters can also be configured. In some embodiments, the machine learning model can include an RNN. In this case, parameters such as the type of unit (e.g., simple RNN, GRU, LSTM, and the like), direction, sequence size, depth, and other parameters can also be configured. In some embodiments, the machine learning model can include one or more CNN layers and one or more RNN layers. The CNN and/or RNN configured parameters can also be tuned when training the model.
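For illustration only, the kind of parameter selection described above can be sketched as a plain configuration structure; all names and default values below are hypothetical, not a prescribed implementation:

```python
# Hypothetical hyperparameter configuration for a hybrid CNN/RNN model;
# layer types, names, and defaults are illustrative only.
def configure_model(**overrides):
    config = {
        "conv_layers": [   # CNN front end: kernel size, filters, stride
            {"type": "conv1d", "filters": 16, "kernel_size": 4, "stride": 1},
            {"type": "maxpool1d", "pool_size": 2},
        ],
        "rnn_layers": [    # RNN back end: unit type, width, direction
            {"type": "lstm", "units": 64, "bidirectional": False},
        ],
        "dense_units": 32,
        "activation": "relu",
        "output_activation": "sigmoid",  # confidence value between 0 and 1
        "sequence_length": 168,          # e.g., one week of hourly readings
    }
    config.update(overrides)             # tuning = adjusting parameters
    return config

# Tuning example: swap the LSTM for a bi-directional GRU
cfg = configure_model(rnn_layers=[{"type": "gru", "units": 128,
                                   "bidirectional": True}])
```

Such a structure would then be handed to whatever model-building framework the implementation uses; the tuning step in [0091] corresponds to passing overrides.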

[0093] At 706, the energy usage data can be processed to generate training data. For example, energy usage data from source locations can be cleaned, normalized, and otherwise processed to generate consistent training data (e.g., energy usage data at a given granularity, such as hourly). In some embodiments, chunks of training data can be generated that cover a predetermined duration of time (e.g., a week or a month), where the chunks of training data include source location energy usage data at the given granularity over the duration of time. In some embodiments, the target device can be at least one of an electric vehicle, a water heater, a washer, a dryer, a pool pump, an electric heating device, an electric cooling device, a heating, ventilation, and air conditioning (“HVAC”) device, and a photovoltaic device, and the training data includes features of the target device energy usage.
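A minimal sketch of the chunk generation described above, assuming hourly readings grouped into week-long chunks with min-max normalization (the helper name and labeling scheme are illustrative):

```python
def make_weekly_chunks(hourly_usage, label):
    """Split hourly source-location usage into week-long (168-value)
    chunks and min-max normalize each chunk; 'label' marks whether
    target device energy usage is present in the source data."""
    chunk_len = 24 * 7  # hourly granularity over a week of time
    chunks = []
    for start in range(0, len(hourly_usage) - chunk_len + 1, chunk_len):
        chunk = hourly_usage[start:start + chunk_len]
        lo, hi = min(chunk), max(chunk)
        span = (hi - lo) or 1.0  # guard against perfectly flat usage
        chunks.append(([(v - lo) / span for v in chunk], label))
    return chunks

# Two weeks of synthetic hourly readings, labeled "device present"
usage = [1.0 + (h % 24) / 24 for h in range(24 * 14)]
training = make_weekly_chunks(usage, label=1)
```

Monthly chunks would follow the same pattern with a different chunk length and a coarser granularity.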

[0094] At 708, the machine learning model can be trained using the generated training data to predict the presence of target device energy usage. The training can include generation of predictions, loss calculation (e.g., based on a loss function), and gradient propagation (e.g., through layers/neurons of the machine learning model). As discussed herein, labeled energy usage for the target device and/or surveyed target device energy usage can be used to train the machine learning model.
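The training elements mentioned above (prediction generation, loss calculation, gradient propagation) can be illustrated on a single-neuron model with binary cross-entropy loss; this is a didactic sketch, not the embodiments' actual network:

```python
import math

def train_step(weights, features, label, lr=0.1):
    """One illustrative training step: forward prediction (sigmoid),
    binary cross-entropy loss calculation, and gradient propagation
    back to the weights."""
    z = sum(w * x for w, x in zip(weights, features))
    pred = 1.0 / (1.0 + math.exp(-z))        # presence confidence in (0, 1)
    loss = -(label * math.log(pred) + (1 - label) * math.log(1.0 - pred))
    grad = pred - label                       # dLoss/dz for sigmoid + BCE
    new_weights = [w - lr * grad * x for w, x in zip(weights, features)]
    return new_weights, loss

weights = [0.0, 0.0, 0.0]
features = [0.4, 0.9, 0.1]  # e.g., normalized usage summary features
for _ in range(50):
    weights, loss = train_step(weights, features, label=1)
```

In a deep model the same loop applies, with the gradient propagated through each layer's neurons rather than a single weight vector.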

[0095] In some embodiments, the trained machine learning model is trained using energy usage values for a plurality of source locations (e.g., households), and the training can optimize for the presence predictions of target device energy usage. In some embodiments, the training data, in some cases including labeled energy usage values from the target device(s), has a substantially hourly granularity. Other suitable granularities (e.g., 1-minute, 15-minute, 30-minute, 45-minute, and the like) can similarly be implemented.

[0096] In some embodiments, the training data can have a low resolution granularity (e.g., monthly or billing granularity) and one or more models can be trained using this low resolution granularity data. For example, a first machine learning model can be trained using a first granularity (e.g., hourly granularity and weekly chunks) while a second machine learning model can be trained using a second granularity (e.g., monthly chunks with a single or few energy usage data values). In some embodiments, a single machine learning model can be trained for the first granularity, the second granularity, or a combination of these.

[0097] In some embodiments, the one or more trained machine learning models include one or more recurrent neural network layers and one or more convolutional neural network layers. Embodiments of the machine learning model(s) and/or utilized training data can be tuned based on the results of training. For example, testing of a trained model can indicate an accuracy of the trained model, and based on the testing accuracy various tuning adjustments can be made.

[0098] At 710, the trained machine learning model(s) can be stored. For example, one or more trained learning models that generate predictions meeting a criteria (e.g., accuracy criteria or threshold) can be stored such that the stored model can be used to predict the presence of target device energy usage.

[0099] Fig. 8 illustrates a flow diagram for generating machine learning predictions to discover target device energy usage according to an example embodiment. For example, one or more machine learning models that are trained based on the functionality of Fig. 7 can be used to perform the functionality of Fig. 8.

[00100] At 802, multiple instances of source location energy usage can be received, for a given source location, over a period of time. For example, the source location (e.g., household) energy usage data can be broken up into time intervals based on timestamps (e.g., at a substantially hourly granularity) over a period of time, such as a day, a week, a month, or the like. Other suitable granularities can be implemented. In some embodiments, low resolution energy usage data can be received for the source location. For example, energy usage data over a month can be received, and the granularity for the data can include one or a few values (e.g., daily, monthly, or weekly).

[00101] In some embodiments, the source location energy usage data can be processed. For example, the processing can be similar to the processing of the training data (described with reference to Fig. 7). In such an example, the processing can alter the household energy usage input data to be similar to the training data, and thus the trained machine learning model can achieve enhanced prediction results. The processing can include achieving a given granularity for the energy usage data (e.g., hourly), normalization, other forms of scaling, segmentation (e.g., into weekly or monthly chunks), and any other suitable processing.

[00102] At 804, using the trained machine learning model, multiple discovery predictions for the received instances of source location energy usage can be generated, the discovery predictions representing a prediction about a presence of target device energy usage within the instances of source location energy usage. For example, processed data (e.g., multiple instances of source location energy usage at a given granularity) can be provided as input data to the trained machine learning model, and the model can generate instances of target device discovery predictions that correspond to each instance of source location energy data.

[00103] In some embodiments, the source locations are households and the received source location energy usage has at least an hourly granularity. For example, the instances of source location energy data can be household energy usage data at an hourly granularity over a week of time, and the period of time can be at least four weeks. In some embodiments, the received source location energy usage data is at one of a monthly granularity, weekly granularity, and bi-monthly granularity over a month of time. For example, the period of time can be at least eight months and the instances of source location energy data can be household energy usage data at a monthly, weekly, or bi-monthly granularity over a month of time (e.g., each instance includes one or a few data values). In other examples, the period of time can be any of four months, three months, two months, six weeks, and the like, and any suitable granularity can be implemented.

[00104] In some embodiments, each instance of a discovery prediction generated by the trained machine learning model is based on a corresponding instance of source location energy usage (e.g., hourly granularity over a week of time) and weather conditions associated with the corresponding instance of source location energy usage (e.g., weather conditions over the week). For example, the weather conditions are relative to the source location (e.g., household) and the weather conditions include one or more of an average temperature, a maximum temperature, and a minimum temperature, where the weather conditions relate to the duration of time associated with the corresponding instance of source location energy usage (e.g., calendar data). In some embodiments, weather conditions can include dew point, humidity, relative humidity, solar irradiance, and other suitable weather data.

[00105] At 806, the multiple discovery predictions can be analyzed and at 808, based on the multiple discovery predictions, an overall prediction about a presence of target device energy usage within the given source location’s energy usage over the period of time can be generated. For example, generating the overall prediction can include combining the confidence data values corresponding to the multiple discovery predictions to generate the overall confidence data value. In some embodiments, the instances of presence predictions can be data values between a given range (e.g., between 0 and 1) and the combining can include an average, minimum, maximum, k-max average, a combination of these, and any other suitable combination technique.
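One of the mentioned combination techniques, the k-max average, can be sketched as follows (the value of k and the instance confidences below are illustrative):

```python
def k_max_average(confidences, k=3):
    """Average the k largest instance confidence values -- one of the
    combination techniques mentioned above, alongside plain average,
    minimum, and maximum."""
    top = sorted(confidences, reverse=True)[:max(1, k)]
    return sum(top) / len(top)

# Hypothetical weekly discovery predictions for one source location
weekly = [0.15, 0.80, 0.75, 0.10, 0.90]
overall = k_max_average(weekly, k=3)
```

A k-max average emphasizes the strongest detections, which can help when a target device (e.g., an EV charger) is only used during some of the observed weeks.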

[00106] In some embodiments, the discovery predictions are confidence data values that indicate the presence of target device energy usage within the instances of source location energy usage, and the overall prediction is an overall confidence data value that indicates the presence of target device energy usage within the given source location’s energy usage over the period of time. For example, generating the overall prediction can include combining the confidence data values corresponding to the multiple discovery predictions to generate the overall confidence data value. In some embodiments, the overall confidence data value is compared to a criteria or threshold, and the presence of target device energy usage is positively discovered when the overall confidence data value meets or exceeds the criteria or threshold.

[00107] Once one or more models are trained via the functionality of Fig. 7, then “live” input data can be provided to the model in order to predict the presence of a targeted device.

[00108] In embodiments, each prediction module (e.g., 306), corresponding to each separately trained model, outputs a prediction of the presence of a targeted device in terms of an overall confidence value as disclosed above. The confidence value may be converted to a prediction of “Yes” (or “Very Likely”), “No”, or “Maybe” (or “Somewhat Likely”). For example, if the confidence value is a number between 0 and 1, a “Yes” or “Very Likely” may be greater than .7, a “No” may be less than .2, and a “Maybe” or “Somewhat Likely” may be between .2 and .7. Table 2 below provides some example confidence intervals:

Table 2
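The conversion from an overall confidence value to the categorical labels of [00108] can be sketched as follows (the .7 and .2 thresholds are taken from the example above; the function name is illustrative):

```python
def classify(confidence, yes_threshold=0.7, no_threshold=0.2):
    """Map an overall confidence value (between 0 and 1) to a
    categorical presence prediction; the thresholds are flexible
    and can be tuned per use case (e.g., precision vs. reach)."""
    if confidence > yes_threshold:
        return "Yes"    # or "Very Likely"
    if confidence < no_threshold:
        return "No"
    return "Maybe"      # or "Somewhat Likely"
```

Raising `yes_threshold` optimizes for precision; lowering it optimizes for reach, consistent with the flexible-threshold embodiments of [0083].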

[00109] Embodiments, via for example analytics module 614 and/or output module 616 of Fig. 6, may also generate a graphical output of the presence detection for multiple targeted devices. Fig. 9 illustrates a graphical output 900 generated by analytics module 614 and/or output module 616 of Fig. 6 in accordance with embodiments. Fig. 9 is a bar chart showing the number of predicted households that have electric heating, a Level 1 EV charger, or a Level 2 EV charger. The predictions are categorized as “Very Likely” (bars 901-903), “Somewhat Likely” (bars 905-908), or “No” (the absence of bars).

[00110] The availability of complete and robust training data 608, 612 used to train each of the models shown in Fig. 6 (e.g., prediction modules 606, 610) may be limited. Sets of training data are typically received using AMI generated data from households or a utility. Sometimes, these households are not representative or robust for predicting the presence of a particular device. For example, if the source of the training data is from households in an area of the country that includes very little sunlight, then generated corresponding training data likely includes data for none or very few households that have solar panels for generating electricity (i.e., solar powered). A trained model that was trained using that training data can be used to predict the presence of electric heat. However, if the input/live data comes from a subset of households whose homes are powered by solar power, the resulting prediction of the presence of electric heat will likely be inaccurate due to the deficient training data that was used to train the model.

[00111] Fig. 10 illustrates a flow diagram for generating machine learning predictions of the presence of a target device using multiple trained machine learning models according to an example embodiment.

[00112] Specifically, in embodiments, at 1002 a first model is trained via the functionality of Fig. 7 to predict the presence of an EV Level 1 Charger (“L1 charger”, i.e., a “first device”). However, the first trained model was trained using training data from a set of households that do not have, or disproportionately do not have, solar power (i.e., “deficient” training data for a “second device”). Therefore, input/live data for a set of households for which a subset do have solar power would not generate an accurate prediction of the presence of an L1 charger using the first model.

[00113] At 1004, a second model is trained via the functionality of Fig. 7 to predict the presence of a solar power generator (e.g., solar panels). The second model is properly trained using representative training data.

[00114] At 1006, the input/live data for both the first and second models includes AMI data for a service territory of households, for example, AMI data for households in Southern California. In embodiments, the input data further includes weather data/conditions. For example, the weather conditions are relative to the source location (e.g., household) and the weather conditions include one or more of an average temperature, a maximum temperature, and a minimum temperature, where the weather conditions relate to the duration of time associated with the corresponding instance of source location energy usage (e.g., calendar data). In some embodiments, weather conditions can include dew point, humidity, relative humidity, solar irradiance, and other suitable weather data. The input data can include household energy usage data over time.

[00115] At 1008, in response to the input data, the first trained model generates predictions for each household of the presence of an L1 charger. Each prediction can be classified as a Yes, No, or Maybe using a confidence data value as disclosed above. The prediction can be in the form of a database table with a listing of each household, by ID, and the corresponding prediction.

[00116] At 1010, in response to the input data, the second trained model generates predictions for each household of the presence of solar power. Each prediction can be classified as a Yes, No, or Maybe. Similar to the first trained model, the prediction can be in the form of a database table with a listing of each household, by ID, and the corresponding prediction.

[00117] At 1012, embodiments combine the two sets of predictions by subtracting the households that are predicted by the second trained model to have solar power (i.e., a Yes prediction) out of all of the households from the first trained model. Those households are therefore removed from the predictions of the first model, and predictions for those removed households are not used. The result is a more accurate/improved prediction of the presence of an L1 charger from the remaining households than the prediction from the first trained model alone.

[00118] In another example embodiment, for electric heating, the model was not scored/trained on solar, and the model usually gives many more “Yes” values than EV L1 or EV L2. As a result, embodiments then score the solar model and subtract both the “Yes” and the “Maybe” values out. Because this model tends to score more households with electric heating, in general, compared with electric vehicles, improved results are derived from filtering both “Yes” and “Maybe” from the solar results.
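The combining functionality of 1012 (and the broader filtering for electric heating) can be sketched as a simple set subtraction over per-household prediction tables; the household IDs and labels below are hypothetical:

```python
def filter_predictions(device_preds, solar_preds, remove=("Yes",)):
    """Drop households predicted to have solar power from the target
    device predictions. For EV L1, only 'Yes' solar households are
    removed; for electric heating, pass remove=("Yes", "Maybe")."""
    solar_ids = {hid for hid, p in solar_preds.items() if p in remove}
    return {hid: p for hid, p in device_preds.items()
            if hid not in solar_ids}

# Hypothetical per-household predictions keyed by household ID
ev_l1 = {"h1": "Yes", "h2": "Maybe", "h3": "Yes"}
solar = {"h1": "No", "h2": "Maybe", "h3": "Yes"}

ev_filtered = filter_predictions(ev_l1, solar)
heating_filtered = filter_predictions(ev_l1, solar,
                                      remove=("Yes", "Maybe"))
```

In practice each prediction table could be a database table keyed by household ID, as described at 1008 and 1010.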

[00119] Table 3 below provides an example of the number of Yes, No and Maybe results for the “Solar” prediction model (“second model”), “EV L1 before filtering” (“first model”), and “EV L1 after filtering” (i.e., results after the combining functionality of 1012). For EV L1, the “Yes” Solar customers are subtracted out. Table 3 provides another example of the number of Yes, No and Maybe results for “Elec Heating before filtering” (“first model”) and “Elec Heating after filtering” (i.e., results after the combining functionality of 1012). For Elec Heating, the “Yes” and “Maybe” Solar customers are subtracted out. The numbers shown are not the number of unique customers, but the number of presence discovery occurrences over time. Each example is shown for two utilities: “Utility1” and “Utility2”.

Table 3

[00120] The information in Table 3 reflects embodiments in which, if the presence discovery model shows Solar=Yes for a household, that household is filtered out from EV L1.

[00121] Similarly, if the presence discovery model shows Solar=Yes or Solar=Maybe for a household, that household is filtered out from Elec Heating.

[00122] For Utility1, there is exactly one customer for which EV L1=Yes was detected but also Solar=Yes, and that customer was filtered out from the resulting dataset.

[00123] The number of Solar customers is shown only for reference, as it does not by itself quantify how many customers will actually be filtered out from EV L1 and Elec Heating.

[00124] Table 4 below shows one measure of improvement using embodiments of the invention, which is the percentage of filtered customers:

Table 4

[00125] As disclosed, embodiments detect the presence of a targeted device using two different trained machine learning models. The first model detects the presence of the targeted device and has been trained with a dataset generally lacking a second device. The second model detects the presence of the second device. The outputs of the first and second models are combined to provide a more accurate presence detection of the targeted device than what is provided by the first model alone.

[00126] The features, structures, or characteristics of the disclosure described throughout this specification may be combined in any suitable manner in one or more embodiments. For example, the usage of “one embodiment,” “some embodiments,” “certain embodiment,” “certain embodiments,” or other similar language, throughout this specification refers to the fact that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present disclosure. Thus, appearances of the phrases “one embodiment,” “some embodiments,” “a certain embodiment,” “certain embodiments,” or other similar language, throughout this specification do not necessarily all refer to the same group of embodiments, and the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

[00127] One having ordinary skill in the art will readily understand that the embodiments as discussed above may be practiced with steps in a different order, and/or with elements in configurations that are different than those which are disclosed. Therefore, although this disclosure considers the outlined embodiments, it would be apparent to those of skill in the art that certain modifications, variations, and alternative constructions would be apparent, while remaining within the spirit and scope of this disclosure. In order to determine the metes and bounds of the disclosure, therefore, reference should be made to the appended claims.