Title:
MULTI CAMERA LOAD ESTIMATION
Document Type and Number:
WIPO Patent Application WO/2016/139203
Kind Code:
A1
Abstract:
Imaging data captured by 3D depth cameras and thermal cameras can be combined to identify objects and determine whether they are human or non-human. The total weight of human and non-human objects can be estimated based upon volume analysis and reported to an elevator dispatch controller to allow for more efficient dispatch of elevator cars.

Inventors:
PALAZZOLA MICHAEL (US)
XU JIE (US)
ALLEN STEPHEN (US)
FELDHUSEN PETER (US)
DUDDE FRANK (US)
PARKER ALAN (US)
Application Number:
PCT/EP2016/054319
Publication Date:
September 09, 2016
Filing Date:
March 01, 2016
Assignee:
THYSSENKRUPP ELEVATOR AG (DE)
International Classes:
B66B1/34
Foreign References:
US20060037818A12006-02-23
US20120326959A12012-12-27
Other References:
None
Attorney, Agent or Firm:
THYSSENKRUPP INTELLECTUAL PROPERTY GMBH (Essen, DE)
Claims:
CLAIMS

1. A system comprising: an elevator car; a depth camera situated at a first camera location; a thermal camera situated at a second camera location; an image processor communicatively coupled with the depth camera and the thermal camera; and an elevator controller communicatively coupled with the image processor; wherein the image processor is configured to: receive a set of image data from the depth camera and the thermal camera, wherein the set of image data comprises a set of thermal data and a set of spatial data; identify a set of discrete objects based upon the set of image data; determine a classification for each object of the set of discrete objects based upon the set of image data; estimate a weight for each object of the set of discrete objects based upon the classification; and provide a total weight estimate, based upon the weight for each object of the set of discrete objects, to the elevator controller; and wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.

2. The system of claim 1, wherein the elevator controller and the image processor share a processor and a memory.

3. The system of claim 1 or 2, further comprising a passenger waiting area, wherein the elevator car is configured to travel to the passenger waiting area, wherein the first camera location and the second camera location are located at the passenger waiting area.

4. The system of claim 3, wherein the elevator controller is configured to: determine an additional occupancy weight based upon a maximum occupancy weight that is configured for the elevator car and a current occupancy weight that is provided by a sensor separate from the depth camera and the thermal camera; when the total weight estimate does not exceed the additional occupancy weight, prioritize sending the elevator car to the passenger waiting area; and when the total weight estimate is greater than about 75% of the maximum occupancy weight, and when the current occupancy weight indicates that the elevator car is empty, prioritize sending the elevator car to the passenger waiting area.

5. The system of claim 4, wherein the sensor separate from the depth camera and the thermal camera is a floor sensor in the elevator car.

6. The system of claim 1 or 2, wherein the first camera location and the second camera location are within the elevator car, and wherein the elevator controller is configured to: when the total weight estimate indicates that there are no passengers in the elevator car, cancel any floor stops that have been requested from a keypad of the elevator car; and when the total weight estimate indicates that the elevator car cannot accept any additional passengers, place the elevator car into a floor bypass mode.

7. The system according to any of the preceding claims, wherein the image processor is further configured to: map the set of thermal data to the set of spatial data to create a thermal spatial overlay; and for each object of the set of discrete objects, identify a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data.

8. The system of claim 7, wherein the image processor is configured to determine the classification for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects: determine a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determine a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; and determine a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score.

9. The system of claim 8, wherein the image processor is configured to determine the classification for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects: determine a tertiary classification confidence score based upon one or more of: information from a sound detecting device indicating the presence of passengers; information from an RFID reader indicating the presence of passenger key cards; information from a motion sensing door counter indicating the number of passengers that entered the elevator car; and information from the elevator controller indicating the number of floors selected for disembarkation; and determine the final classification confidence score based upon the thermal classification confidence score, the spatial classification confidence score, and the tertiary classification score.

10. The system of claim 7 or 8, wherein the image processor is configured to estimate the weight for each object of the set of discrete objects based on execution of a set of instructions that, when executed, causes the image processor to, for each object from the set of discrete objects that is classified as a human: determine a volume of the object based upon the subset of the set of spatial data associated with the object; determine a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reduce the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; and determine the weight based upon the volume of the object.

11. The system of claim 7, 8 or 9, wherein the image processor is configured to estimate the weight for each object of the set of discrete objects based on execution of a set of instructions that, when executed, causes the image processor to, for each object from the set of discrete objects that is classified as a non-human: determine a volume of the non-human object based upon the subset of the set of spatial data associated with the non-human object; determine if the non-human object is being carried by a human object based upon the subset of the set of spatial data associated with the non-human object and the subset of the set of thermal data associated with the non-human object; if the non-human object is being carried by a human object, determine the weight for the non-human object based upon the volume of the non-human object and a carried object weight calculation; and if the non-human object is not being carried by a human object, determine the weight for the non-human object based upon the volume of the non-human object and a heavy object weight calculation.

12. A method comprising the steps: at an image processor, receiving a set of image data from a depth camera at a first camera location and a thermal camera at a second camera location, wherein the set of image data comprises a set of thermal data and a set of spatial data; identifying a set of discrete objects based upon the set of image data; determining a classification for each object of the set of discrete objects based upon the set of image data; estimating a weight for each object of the set of discrete objects based upon the classification; and providing a total weight estimate to an elevator controller, the total weight estimate based upon the weight for each object of the set of discrete objects; wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.

13. The method of claim 12, wherein the one or more elevator cars are configured to travel to a passenger waiting area, wherein the first camera location and the second camera location are located at the passenger waiting area.

14. The method of claim 13, further comprising the steps: determining an additional occupancy weight based upon a maximum occupancy weight that is configured for an elevator car of the one or more elevator cars and a current occupancy weight that is provided by a sensor separate from the depth camera and the thermal camera; when the total weight estimate does not exceed the additional occupancy weight, sending the elevator car to the passenger waiting area; and when the total weight estimate is greater than about 75% of the maximum occupancy weight, sending the elevator car to the passenger waiting area.

15. The method of claim 14, wherein the sensor separate from the depth camera and the thermal camera is a floor sensor in the elevator car.

16. The method according to one of the claims 12-15, wherein the first camera location and the second camera location are within the elevator car, further comprising the steps: when the total weight estimate indicates that there are no passengers in the elevator car, canceling any floor stops that have been requested from a keypad of the elevator car; and when the total weight estimate indicates that the elevator car cannot accept any additional passengers, placing the elevator car into a floor bypass mode.

17. The method according to one of the claims 12-16, further comprising the steps: at the image processor, mapping the set of thermal data to the set of spatial data to create a thermal spatial overlay; for each object of the set of discrete objects, identifying a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data.

18. The method according to one of the claims 12-17, wherein the step of determining the classification for each object of the set of discrete objects comprises the steps, for each object from the set of discrete objects: at the image processor, determining a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determining a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; and determining a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score.

19. The method according to one of the claims 12-18, wherein the step of estimating the weight for each object of the set of discrete objects comprises the steps, for each object from the set of discrete objects: at the image processor, determining a volume of the object based upon the subset of the set of spatial data associated with the object; determining a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reducing the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; and determining the weight based upon the volume of the object.

20. A system comprising: an elevator car; a depth camera situated at a first camera location; a thermal camera situated at a second camera location; an image processor communicatively coupled with the depth camera and the thermal camera; and an elevator controller communicatively coupled with the image processor; wherein a depth camera field of view and a thermal camera field of view overlap; wherein the image processor is configured to: receive a set of image data from the depth camera and the thermal camera, wherein the set of image data comprises a set of thermal data and a set of spatial data; identify a set of discrete objects based upon the set of image data; map the set of thermal data to the set of spatial data to create a thermal spatial overlay; for each object of the set of discrete objects, identify a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data; determine a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determine a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; determine a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score; when the final classification confidence score indicates that the object is a human, determine a volume of the object based upon the subset of the set of spatial data associated with the object; determine a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reduce the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; determine the weight based upon the volume of the object; and provide a total weight estimate, based upon the weight for each object of the set of discrete objects, to the elevator controller; and wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.

Description:
MULTI CAMERA LOAD ESTIMATION

PRIORITY

[0001] This application claims priority to U.S. Provisional Patent Application Serial No. 62/128,187, filed March 4, 2015, entitled "Multi Camera Load Estimation," the disclosure of which is incorporated by reference herein.

FIELD

[0002] The disclosed technology pertains to a system for estimating the weight of objects and passengers occupying or entering an elevator car based upon a combination of depth and thermal imaging.

BACKGROUND

[0003] Determining the weight of occupants in an elevator car is important for the efficient and safe operation of an elevator system. When load weight is known, elevator cars can be directed to offload passengers before accepting more passengers when operating at or near load weight limits. By ensuring that elevators are not overloaded, the safety and comfort of passengers can be protected and the longevity of the elevator system's mechanical components can be increased.

[0004] Load weight can be determined in a variety of ways. Strain gauges can be placed on the elevator car itself or on structures related to the elevator car in order to measure forces applied to the car by occupants. Strain gauges can also be installed on ropes supporting the car in order to measure the forces carried by the ropes. However, strain gauges need to be carefully calibrated and maintained in order to provide an accurate indication of load weight. Installation and maintenance of strain gauges can be difficult due to the lack of space in and around an elevator car within a hoistway and to the inconvenience of taking elevator cars offline in order to perform maintenance and installation. Even when correctly calibrated, strain gauges placed on structural portions of the elevator can provide inaccurate measurements when occupants within the elevator are in motion or unevenly distributed within the car. Similarly, strain gauges installed on a rope can provide inaccurate weight measurements due to rope vibration and sway.

[0005] What is needed, therefore, is an improved system for determining the weight of passengers and objects within an elevator car and their relative position in the elevator car.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The drawings and detailed description that follow are intended to be merely illustrative and are not intended to limit the scope of the invention as contemplated by the inventors.

[0007] FIG. 1 is a flowchart of a set of high-level steps that a system could perform to determine load weight using cameras.

[0008] FIG. 2 is a front perspective view of an exemplary camera placement within an elevator car.

[0009] FIG. 3 is a flowchart of a set of steps that a system could perform to classify objects based upon depth imaging.

[0010] FIG. 4 is a flowchart of a set of steps that a system could perform to classify objects based upon thermal imaging.

[0011] FIG. 5 is a flowchart of a set of steps that a system could perform to determine a final classification for objects.

[0012] FIG. 6 is a flowchart of a set of steps that a system could perform to determine the weight of human objects.

[0013] FIG. 7 is a flowchart of a set of steps that a system could perform to determine the weight of non-human objects.

[0014] FIG. 8 is a top-down perspective view illustrating a common field of view between two nearby cameras.

DETAILED DESCRIPTION

[0015] The inventors have conceived of novel technology that, for the purpose of illustration, is disclosed herein as applied in the context of elevator load weight determination and determination of placement or position in the elevator car. While the disclosed applications of the inventors' technology satisfy a long-felt but unmet need in the art of elevator load weight determination, it should be understood that the inventors' technology is not limited to being implemented in the precise manners set forth herein, but could be implemented in other manners without undue experimentation by those of ordinary skill in the art in light of this disclosure. Accordingly, the examples set forth herein should be understood as being illustrative only, and should not be treated as limiting.

[0016] Turning now to the figures, FIG. 1 shows a flowchart of a set of high-level steps that a system could perform to determine load weight using cameras. One or more cameras are installed and initialized (100) in or near an elevator car. Initialization could include, for example, the powering on and automatic or manual calibration of cameras to account for background noise such as light sources, reflective surfaces, or to account for the size and other characteristics of the space within which they are installed. FIG. 2 shows one example of a camera installation within an elevator car (200). In this embodiment, a thermal camera (202) is placed near the ceiling of an elevator car (200) so that it can capture a thermal image of the interior of the elevator car (200). The thermal camera (202) may be any device that can capture a representation of temperature variations of objects within its field of view, for example, a Grid-Eye Array Sensor, MLX90621, HTPA32x31, or similar device. A depth camera (204) is placed near the thermal camera (202) such that the cameras (202, 204) share a similar field of view of the interior of the elevator car (200). The depth camera (204) may be any device that can capture a depth field or 3D representation of objects within its field of view, for example, an Asus Xtion, Microsoft Kinect, PrimeSense Carmine, or similar device. The cameras (202, 204) may be communicatively coupled with processing devices, such as an elevator controller or image processor, via a local area network, wireless area network, or data cable so that acquired image data can be stored, analyzed, or manipulated by other components of the elevator system.

[0017] While the example shown in FIG. 2 shows the cameras (202, 204) placed within the elevator car, it is also possible that the disclosed technology could be implemented in a configuration in which cameras (202, 204) would be placed in a lobby area outside of an elevator hoistway and positioned so that their fields of view capture occupants waiting for an elevator car, or occupants who recently entered an elevator car. Alternatively, the disclosed technology could be implemented in configurations in which cameras (202, 204) would be placed in a space between the lobby and the elevator car, such as the hoistway door jamb. The placement of cameras (202, 204) is flexible and will vary by embodiment to fit the particular elevator system with which they are used. Additionally, while the example shown in FIG. 2 shows a single thermal camera (202) and a single depth camera (204), it is also possible that the disclosed technology could be implemented in a manner in which multiple cameras of each type, or multiple cameras of one type and a single camera of the other, could be used to gather data for use in load weight determinations.

[0018] Once the cameras (202, 204) are ready for use (100), the system may capture a depth image and perform a depth classification (102). The depth classification (102) may be performed by analyzing captured depth image data provided by the depth camera (204) in order to identify discrete objects within the elevator car and provide a provisional depth classification as to whether each identified object is a human or non-human object. Similarly, the thermal camera (202) may be used to capture thermal image data that can be used to provide a provisional thermal classification (104) as to whether each discrete object is a human or non-human object. The determination of the depth classification (102) and the thermal classification (104) can occur in parallel in some embodiments, such as those where the classification for each is self-contained and not dependent upon the other. In other embodiments, such as that shown in FIG. 1, the depth classification (102) may be used as a factor in determining the thermal classification (104). For example, the number of discrete objects identified within a depth image could be used to identify discrete objects within a thermal image having the same or similar field of view. Similarly, the thermal classification (104) may be used as a factor in determining the depth classification. For example, if a thermal image indicates that an object with a humanlike heat signature is standing behind or near an object without a heat signature, this information could be used with a depth image to separate what initially appears to be a single object into two discrete objects. The depth classification (102) and thermal classification (104) can be combined, along with any other indicators, to produce a final classification (106) that identifies and classifies each discrete object within the elevator car as being human or non-human. The depth data, thermal data, and final classifications (106) can be used to allow the elevator controller, or another processing device, to calculate the weight of all identified objects classified as human (108) and calculate the weight of all identified objects classified as non-human (110). Calculated weights (108, 110) may be used, for example, to prioritize dispatch (112) of elevator cars to optimize the safety, comfort, and efficiency of the elevator system.

[0019] Turning now to FIG. 3, that figure shows a flowchart of a set of steps that a system could perform to classify objects based upon depth imaging. In this embodiment, a depth image is acquired (300) from the depth camera (204) and made available to the image processor. The image processor uses object recognition software to identify the total number of separate and distinct objects (302) within the elevator car. Once separate objects have been identified (302), the collection of separate objects can be iterated through and classified until each has been classified (304). The collection of separate objects may be iterated through in the order that they were identified as separate objects, in reverse order, or in any other order that may best take advantage of a particular hardware configuration's capabilities. To classify an object, the image processor can apply object recognition software to determine (308) whether a distinct object is a human or not. Object recognition software would accept depth images as input and identify objects, such as human forms. Examples of object recognition software could include OpenCV, OpenNI, RealSense SDK, JavaFX, or similar software. In some embodiments the object recognition software will use particle filtering, a Bayesian numerical approximation method that can assist in human tracking and building articulated human models based upon depth imaging. Particle filtering has four basic steps: resampling from the previous N particles, propagating to apply temporal dynamics, weighting by likelihood, and estimating the posterior. See Table 1 below for an example mathematical representation of the steps of particle filtering.

Table 1: Example mathematical representation of the steps of particle filtering

1. Resample: draw N particles x_{k-1}^(i) from the previous particle set with probability proportional to their weights w_{k-1}^(i).
2. Propagate: sample x_k^(i) ~ p(x_k | x_{k-1}^(i)) to apply temporal dynamics.
3. Weight: set w_k^(i) proportional to p(z_k | x_k^(i)), the likelihood of the current observation.
4. Estimate: approximate the posterior as p(x_k | Z_k) ~ sum_i w_k^(i) * delta(x_k - x_k^(i)).

In this representation, p(x_k | Z_k) represents the posterior probability, x represents the hidden states, Z represents the observable states, and k represents the time step.
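As a concrete illustration of the four steps in Table 1, the following Python sketch runs one cycle of a bootstrap particle filter for a one-dimensional hidden state. The random-walk motion model, the Gaussian observation model, and the noise parameters are illustrative assumptions, not values taken from this disclosure.

import numpy as np

def particle_filter_step(particles, weights, observation,
                         motion_noise=0.1, obs_noise=0.5):
    # Step 1: resample from the previous N particles in proportion to weight.
    n = len(particles)
    particles = particles[np.random.choice(n, size=n, p=weights)]
    # Step 2: propagate to apply temporal dynamics (assumed random walk).
    particles = particles + np.random.normal(0.0, motion_noise, size=n)
    # Step 3: weight each particle by the likelihood of the observation z_k.
    weights = np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights = weights / weights.sum()
    # Step 4: estimate the posterior, summarized here by its weighted mean.
    estimate = float(np.sum(weights * particles))
    return particles, weights, estimate

# Example: track one coordinate of a human form across three depth frames.
particles = np.random.uniform(0.0, 5.0, size=500)
weights = np.full(500, 1.0 / 500)
for z in [2.0, 2.1, 2.3]:
    particles, weights, estimate = particle_filter_step(particles, weights, z)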

[0020] If the object recognition software suggests that a particular object is human, a confidence score can be generated representing the likelihood that the particular object is human. If the object recognition software suggests that a particular object is non-human, a confidence score can be generated representing the likelihood that the particular object is non-human. As each object is classified by the object recognition software as being human or non-human, this information, as well as its related confidence score, is preserved in a data structure. In some embodiments, where an object cannot be classified as human or non-human by the object recognition software, it may be classified as unknown. Once all distinct objects have been classified (304), the identification, classification, and confidence scores can be stored (306) in a database, cache, memory, or other storage medium for further use by the image processor and elevator controller. In some embodiments, no separate storage (306) of data would take place other than what would inherently occur as part of the normal operation of the system.
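A minimal sketch of the bookkeeping described above, assuming a hypothetical recognizer callable (standing in for object recognition software such as OpenCV or OpenNI) that returns a provisional label and a confidence score for each object:

from dataclasses import dataclass

@dataclass
class ClassifiedObject:
    object_id: int
    depth_label: str        # "human", "non-human", or "unknown"
    depth_confidence: float

def classify_depth_objects(depth_objects, recognizer):
    # Iterate the identified objects (304) and record each provisional
    # depth classification with its confidence score (308).
    results = []
    for object_id, obj in enumerate(depth_objects):
        label, confidence = recognizer(obj)  # assumed interface
        results.append(ClassifiedObject(object_id, label or "unknown",
                                        confidence))
    return results  # the caller may store these (306) in a database or cache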

[0021] Turning now to FIG. 4, that figure shows a flowchart of a set of steps that a system could perform to classify objects based upon thermal imaging. In this embodiment, a thermal image is acquired (400) from the thermal camera (202) and made available to the image processor. The image processor will retrieve depth data from the depth camera (204) having a similar field of view and captured at a similar time frame as the recently acquired thermal image (400) and map the thermal image to the depth image (402). By mapping the thermal image (400) to the depth image (402), the image processor can determine the thermal data for a distinct object identified by the depth camera (204). In an alternative embodiment, the depth image can be mapped to the thermal image so that the image processor can determine the depth data for a distinct object identified by the thermal camera (202). For example, if a depth image indicates that a distinct object is located at a particular location (e.g., within a particular cell or cells used by a grid eye infrared array sensor to subdivide its field of view) within the elevator car, the same distinct object will be located in a similar position within the thermal image, since the thermal camera (202) is situated in the elevator car such that it shares a common field of view with the depth camera (204). In some embodiments, where it may be desirable to install the thermal camera (202) and depth camera (204) in a way that does not result in a common field of view, mapping the thermal image to the depth image (402) may be performed using some preparatory image transformation.

[0022] For example, FIG. 8 shows a top-down perspective view of the field of view of two cameras. In FIG. 8, the cameras (800, 802) are installed in such a way that they are pointed in the same direction but have a static distance between the mid-point of each lens. Due to the offset, the cameras (800, 802) each have a unique field of view (804, 806) as well as a common field of view (808). The resultant images of each camera (800, 802) could be mapped to each other by selecting the portion of each image that represents the common field of view (808) and discarding the remainder of each image (804, 806). This image transformation could be manually configured at the time of installation or could be automatically configured by comparing background features of an empty elevator car between the two images. Other image transformations might be needed where cameras are installed on opposite sides of an elevator car, requiring an image to be mirrored in order to arrive at a comparable field of view, or where one camera is placed closer to the target field of view than the other, requiring the far image to be zoomed and cropped in order to arrive at a comparable field of view.
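The cropping transformation described for FIG. 8 can be sketched as follows, assuming both images share the same resolution and the horizontal disparity between the cameras has been measured in pixels at installation time; both assumptions are for illustration only.

import numpy as np

def crop_to_common_view(left_img, right_img, offset_px):
    # Keep only the shared field of view (808): drop the strip seen only
    # by the left camera (804) and the strip seen only by the right
    # camera (806).
    common_left = left_img[:, offset_px:]
    common_right = right_img[:, :right_img.shape[1] - offset_px]
    return common_left, common_right

# Example with toy 4x6 "images" from cameras offset by 2 pixels.
a = np.arange(24).reshape(4, 6)
b = np.arange(24).reshape(4, 6)
common_a, common_b = crop_to_common_view(a, b, offset_px=2)  # both 4x4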

[0023] Once the thermal image is mapped to the depth image (402) so that distinct objects can be identified within the thermal image, any objects that have not been classified based upon the thermal image (404) can be examined. If the thermal image data indicates that an object is human, it can be thermally classified (408) as human and a confidence score can be generated representing the likelihood that the object is human. Thermal data may indicate that an object is human in a variety of ways: for example, an object having a temperature between 90 and 100 degrees Fahrenheit, an object showing thermal patterns that suggest a torso, arms, legs, and head, or an object having higher temperatures close to its center, such as the torso, and lower temperatures at its extremities, such as the arms and legs, could all serve as indicators of a human object and can also be used as factors when calculating a thermal confidence score. If the thermal data classifies (408) an object as non-human, a confidence score can be generated representing the likelihood that the object is non-human.

[0024] Thermal data may indicate that an object is non-human in a variety of ways: for example, an object having the same temperature as the elevator car floor or wall, an object having a temperature below 90 degrees Fahrenheit, and an object having a steady temperature across its entire mass could all serve as indicators of a non-human object and can also be used as factors when calculating a thermal confidence score. These factors could be used in calculating a thermal confidence score by combining a weighted value from each factor to arrive at a probability indicator. For example, an object having a temperature of 98 degrees Fahrenheit could be valued at a 95% confidence in calculating a thermal confidence score indicating a human, while variations above or below 98 degrees Fahrenheit could gradually decrease the confidence rating such that a temperature of either 30 or 140 degrees Fahrenheit could have a 0% confidence that an object is a human. In some embodiments, where an object cannot be classified as human or non-human based upon the thermal data, it may be classified as unknown. Once all distinct objects have received a thermal classification (404), the thermal classifications and confidence scores can be stored (406) in a database, cache, memory, or other storage medium as part of the object data structure for further use by the image processor and elevator controller. In some embodiments, no separate storage (406) of data would take place other than what would inherently occur as part of the normal operation of the system.

[0025] Turning now to FIG. 5, that figure shows a flowchart of a set of steps that a system could perform to determine a final classification for objects. In this embodiment, if there are objects that have not received a final classification (500), the system will check to see if there is a depth confidence score related to the object (502). If a depth confidence score is available (502), the depth confidence score will be selected and retrieved for use (504) as a factor suggesting that the object is human or non-human, and the system will proceed to check for a thermal confidence score (506). If no depth confidence score is available, the system will proceed to check for a thermal confidence score (506). If a thermal confidence score is available (506), the thermal confidence score will be selected and retrieved for use (508) as a factor suggesting that the object is human or non-human, and the system will proceed to check for any other factors influencing confidence (510). If no thermal confidence score is available, the system will proceed to check for other factors influencing confidence (510). If other factors influencing confidence are available (510), the other factors will be selected and retrieved for use (512) as factors suggesting that the object is human or non-human, and a final classification of the object will be determined (514).

[0026] If no other factors influencing confidence are available (510), a final classification of the object will be determined (514). In some embodiments, where no final classification can be determined for an object due to the object being classified as unknown by one or both of the thermal classification and the depth classification, or where confidence scores from various classifications offset and result in an indeterminable final classification, a final classification of unknown may be assigned. In such an embodiment, an object of unknown classification could be assigned a configurable default classification. For example, if a non-human object weighs more than a human object of the same size, a default classification of non-human could be configured so that total loads will be overestimated rather than underestimated. Alternately, an unknown classification could cause an object to have its weight calculated as both a human object and a non-human object, and a final weight determined by an average of the two. Once all objects have been classified (500), the final classifications can be stored (516) as part of the object data structure to a database, cache, memory or other storage medium and made available for further use by the image processor and elevator controller. In some embodiments, no separate storage (516) of data would take place other than what would inherently occur as part of the normal operation of the system.

[0027] Other factors that could influence confidence scores beyond thermal imaging and depth imaging could include, for example, a sound detecting device that can detect human breathing or heart rate, an RFID reader that determines the total number of humans in the elevator by scanning elevator access cards, a motion sensing door counter that counts the number of humans as they enter an elevator, an elevator controller which reports the number of floors selected for disembarkation, or other devices or data which could provide an indication of the number of humans in an elevator car. These factors could influence confidence scores by, for example, lowering the confidence scores of all provisional human classifications in a scenario where the provisional classifications indicate the presence of a number of humans exceeding that reported by an RFID scanner or door counter, which could result in a low confidence human classification becoming a non-human (or undecided) classification.

[0028] Determining a final classification (514) from one or more confidence scores may be performed in a variety of ways. In some embodiments, each confidence score could be equally weighted and combined or compared in order to classify an object. For example, if an object's depth confidence score indicated 50% confidence that it was non-human, a thermal confidence score indicated 55% confidence that it was human, and no other information indicating whether that object was human or non-human was available, the object's final classification could be determined (514) as human based on the higher thermal classification. In another embodiment, a weighted combination of thermal confidence and depth confidence could be configured if the confidence from one device is valued above another. For example, if a depth confidence score indicated 50% confidence that an object was non-human, and a thermal confidence score indicated 30% confidence that the object was human, the thermal confidence might be considered twice as valuable as the depth confidence due to the accuracy and simplicity of the results of the thermal image, meaning that in a final comparison the thermal confidence would be weighted to 60% confidence that the object was human, resulting in a final classification (514) that the object is human.
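The two worked examples above can be captured in a small weighted-vote sketch; the vote-comparison policy and the tie handling are illustrative assumptions:

def final_classification(depth_label, depth_conf, thermal_label, thermal_conf,
                         depth_weight=1.0, thermal_weight=1.0):
    # Each sensor casts a weighted vote for its provisional label; the
    # stronger combined vote wins, and offsetting votes yield "unknown".
    votes = {"human": 0.0, "non-human": 0.0}
    for label, conf, weight in ((depth_label, depth_conf, depth_weight),
                                (thermal_label, thermal_conf, thermal_weight)):
        if label in votes and conf is not None:
            votes[label] += conf * weight
    if votes["human"] == votes["non-human"]:
        return "unknown"
    return "human" if votes["human"] > votes["non-human"] else "non-human"

# Equal weighting: 55% human (thermal) outweighs 50% non-human (depth).
assert final_classification("non-human", 0.50, "human", 0.55) == "human"
# Thermal valued twice as much: 30% human doubles to 0.60 and wins.
assert final_classification("non-human", 0.50, "human", 0.30,
                            thermal_weight=2.0) == "human"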

[0029] In another embodiment, a third factor, such as data derived from a door counter that suggests a maximum number of occupants, could be used in combination with thermal and/or depth confidence to determine a final classification (514). For example, objects could be considered as a group rather than in isolation, and if the total number of objects classified as human exceeds the number of occupants indicated by the door counter, the human confidence scores could be weighted lower to reflect a loss of confidence that they are accurate based upon the door counter data. In some embodiments, a third factor could be provided by one or more of the thermal camera (202) and depth camera (204). For example, while a thermal image and depth image might initially classify an object as being human, a depth image might also indicate that the object is of a height and shape suggesting that it is instead a service animal. In such a scenario, this factor could influence the confidence score such that the object can be more accurately classified as a non-human object. Other methods for determining a final classification based upon one or more confidence factors will be apparent in light of this disclosure.
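The group-level door-counter adjustment described above might look like the following sketch, where the 0.5 threshold for a provisional human call and the down-weighting factor are illustrative assumptions:

def apply_door_counter(human_confidences, door_count, penalty=0.75):
    # If more objects look human than the door counter admitted, lower
    # every human confidence score to reflect the lost confidence.
    provisional_humans = sum(1 for c in human_confidences if c > 0.5)
    if provisional_humans <= door_count:
        return human_confidences
    return [c * penalty for c in human_confidences]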

[0030] Turning now to FIG. 6, that figure shows a flowchart of a set of steps that a system could perform to determine the weight of human objects. In this embodiment, when there are human objects to weigh (600), the depth image for an object could be examined in order to determine the total volume (602) of the object. The thermal image could be compared to the depth image (604) to determine if any reduction of volume may be necessary. For example, if the comparison of the total volume of a human to the thermal volume of the human indicates the presence of a bulky garment, such as a heavy coat, raincoat, or other garment that might make the overall volume of a human appear larger (606) than it actually is, the calculated volume (602) of the human could be reduced (608) by a factor to account for the difference in volume added by the garment. Similarly, if the comparison of the total volume to the thermal volume indicates that a non-human load is being carried by a human, such as a backpack, courier bag, or some other object carried close to the body that might appear to a depth camera to be part of the same object (610), the calculated volume (602) of the human could be reduced (612) by a factor to account for the difference in volume added by the carried object. Carried objects could have a weight determination made as part of the non-human object weight calculation steps shown in FIG. 7 and described below, or could be assigned a static weight, representative of the average weight of carried bags, to be added to the carrier's calculated weight. Once an accurate volume is determined, the weight of the human can be calculated based upon volume (614). Once all human objects have been weighed (600), the weight of the human objects could be totaled and stored in a database, memory, cache, or other storage medium (616).
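A sketch of the volume-correction logic of FIG. 6; the 15% mismatch threshold, the use of the warm (thermal) volume as the corrected figure, and the static carried-bag weight are illustrative assumptions:

def corrected_human_volume(depth_volume_cm3, thermal_volume_cm3,
                           mismatch_threshold=1.15):
    # A depth volume much larger than the warm volume suggests a bulky
    # garment or a carried bag merged into the silhouette (606, 610);
    # reduce the calculated volume accordingly (608, 612).
    if depth_volume_cm3 > mismatch_threshold * thermal_volume_cm3:
        return thermal_volume_cm3
    return depth_volume_cm3

def add_static_bag_weight(human_weight_kg, carries_bag, avg_bag_kg=5.0):
    # Alternative to weighing the bag via FIG. 7: add a representative
    # average carried-bag weight to the carrier's calculated weight.
    return human_weight_kg + (avg_bag_kg if carries_bag else 0.0)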

[0031] Determining weight from volume (614) could be performed in a variety of ways. In some embodiments, a static unit of mass per unit of volume could be provided based upon testing or available data in order to estimate weight. For example, the system could be configured to calculate each cubic centimeter of human volume to weigh 1 gram, such that a human body with total volume of 60,000 cubic centimeters, at 1 gram per cubic centimeter, would be calculated to weigh 60 kilograms, or about 132 pounds. In some embodiments, different values for unit of mass per unit of volume could be provided for different areas of the human body, which could result in more accurate final measurements. In such an embodiment, one cubic centimeter of leg volume could, for example, be calculated as 1.1 grams, since the legs are likely to contain more high density muscle and bone as compared to the arms or torso. The weight for each limb could be separately calculated and added to determine the total weight from volume (614). In other embodiments, a multiple linear regression model for soft biometric estimation could be used. In such an embodiment, a head, torso, leg, and arm volume model could be built based upon depth imaging. Outliers could be filtered out by using a moving median or random sample consensus ("RANSAC") on the point clouds. Length and circumference of head, torso, legs, and arms could be determined from the volume model and used in an equation to determine weight. See Table 2 below for an example of such an equation. Other methods of calculating body weight based upon the depth image will be apparent in light of this disclosure.

weight = -122.27
    + 0.48 * (overall height)
    - 0.17 * (upper leg length)
    + 0.52 * (calf circumference)
    + 0.16 * (upper arm length)
    + 0.77 * (upper arm circumference)
    + 0.49 * (waist circumference)
    + 0.58 * (upper leg circumference)

Table 2: Example equation for determining body weight
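A direct evaluation of the Table 2 regression follows; the disclosure does not state units, so centimeters for the inputs and kilograms for the output are assumptions made here for illustration:

def body_weight_from_model(overall_height, upper_leg_length,
                           calf_circumference, upper_arm_length,
                           upper_arm_circumference, waist_circumference,
                           upper_leg_circumference):
    # Coefficients exactly as given in Table 2.
    return (-122.27
            + 0.48 * overall_height
            - 0.17 * upper_leg_length
            + 0.52 * calf_circumference
            + 0.16 * upper_arm_length
            + 0.77 * upper_arm_circumference
            + 0.49 * waist_circumference
            + 0.58 * upper_leg_circumference)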

[0032] Turning now to FIG. 7, that figure shows a flowchart of a set of steps that a system could perform to determine the weight of non-human objects. If there are non-human objects to weigh (700), the volume of an object is calculated (702) based upon the depth image and using the object recognition software. After volume has been calculated (702), a weight multiplier can be determined for the object that can be used in order to calculate the weight of the object. A weight multiplier may be determined based upon information known about the object, such as whether it is being carried or suspended, whether it is resting on the floor or on a cart, or other factors. For example, depth information for an object could be examined to determine if the object is being carried or suspended above the ground (704). If the object is suspended above the ground by some means, a mass multiplier could be selected (706) representing the likely mass per volume characteristics of carried objects. In this manner, a bag being carried by an occupant could be assigned a low mass per volume, since it is unlikely that an extremely heavy object would be suspended above the ground by an occupant. Alternately, if an object is placed on the ground (708), it may indicate that the object is on a cart, dolly, or other wheeled device, or is too heavy to easily suspend above the ground. In such a case, a ground weight multiplier could be selected (710) for the object, giving it a fairly high mass per volume characteristic.

[0033] If it cannot be determined that an object falls into a type that has a special weight multiplier, a standard weight multiplier could be selected (712), giving the object a moderate mass per volume characteristic representative of the average mass per volume of objects likely to be taken on elevators, such as books, papers, computers, liquids, foods, or other objects, which may vary depending upon the particular location and intended use for an elevator car. Once the mass per volume multiplier is determined, the weight of the object can be determined (714) by using the volume of the object, based upon the depth image and provided by the object recognition software, and the selected mass per volume multiplier for the object. When there are no remaining non-human objects to weigh (700), the non-human weights can be stored to a database, memory, cache, or another storage medium (716).
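The multiplier selection of FIG. 7 reduces to a small decision function; the three density values below are illustrative assumptions about carried, grounded, and typical objects, not figures from the disclosure:

def non_human_weight_kg(volume_cm3, is_carried, is_on_ground,
                        carried_g_per_cm3=0.2, ground_g_per_cm3=2.0,
                        standard_g_per_cm3=0.7):
    if is_carried:           # suspended above the ground (704, 706)
        density = carried_g_per_cm3
    elif is_on_ground:       # resting on the floor, a cart, or a dolly (708, 710)
        density = ground_g_per_cm3
    else:                    # no special type detected (712)
        density = standard_g_per_cm3
    return volume_cm3 * density / 1000.0  # mass multiplier applied (714)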

[0034] In some embodiments, data from the depth (102), thermal (104), and final (106) classifications may be preserved and integrated into future classifications so that the classification process may be adaptively improved over time. Such adaptive improvements could be implemented, for example, by way of an artificial intelligence structure, such as a neural network, by way of a data structure, such as an object comparison and lookup tree, or through similar means. A neural network adaptive classification could track a plurality of inputs and outputs from the classification process and organize them so that future output data can be more efficiently generated by examining future input data and analyzing it based upon similarities to historical input data. A data structure adaptive classification could store a plurality of input data in a manner that allows for rapid lookup of its resultant classification, which could be used to quickly classify objects in the case of an exact match, or which could be used as an additional confidence factor during classification. The exact implementation of adaptive classification may depend upon the desired result, as some implementations may result in increased speed of classification, while others may result in increased accuracy of classification. Such variations in implementation will be apparent in light of this disclosure.
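The data-structure flavor of adaptive classification could be as simple as a quantized lookup table; the choice of features and the quantization steps are illustrative assumptions:

class ClassificationMemory:
    def __init__(self):
        self._seen = {}

    def _key(self, volume_cm3, mean_temp_f):
        # Quantize so near-identical historical observations collide.
        return (round(volume_cm3 / 1000.0), round(mean_temp_f))

    def record(self, volume_cm3, mean_temp_f, final_label):
        self._seen[self._key(volume_cm3, mean_temp_f)] = final_label

    def lookup(self, volume_cm3, mean_temp_f):
        # Exact match: classify immediately; otherwise the caller can fall
        # back to the full pipeline or treat this as a confidence factor.
        return self._seen.get(self._key(volume_cm3, mean_temp_f))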

[0035] The combined weight of the human and non-human objects (108, 110) can be determined and communicated to the elevator dispatch controller (112) for appropriate action. Actions taken based upon reported weight may vary by embodiment. In some embodiments, an elevator car that is operating near its maximum load weight could be placed into a floor bypass mode which ignores further floor calls until the load weight is reduced. In some embodiments, an elevator car that is operating at a low load weight could be prioritized to answer floor calls. In some embodiments, where depth and thermal cameras (202, 204) are placed in a lobby outside an elevator hoistway, and it is determined that the load weight of individuals waiting at a floor stop is near the maximum load weight of an elevator car, an empty elevator car could be prioritized to address that floor call. In some embodiments, where it is determined that the load weight of passengers waiting at a floor stop is less than the available load weight for a partially loaded elevator car, the partially loaded elevator car could be prioritized for dispatch to that floor stop. In some embodiments, an elevator car that is determined to be empty based upon thermal and depth imaging could have all current floor stops canceled, to prevent unnecessary floor stops. Other variations on actions taken by an elevator controller or dispatch controller will be apparent in light of this disclosure.
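The dispatch behaviors above can be combined into one illustrative policy function; the thresholds and the rule ordering are assumptions made for the sketch, not values from the disclosure:

def dispatch_action(car_load_kg, max_load_kg, waiting_load_kg, car_is_empty):
    if car_is_empty and waiting_load_kg == 0.0:
        return "cancel-floor-stops"   # empty car: drop stale keypad stops
    if car_load_kg >= 0.95 * max_load_kg:
        return "floor-bypass"         # near max load: skip further calls
    if car_is_empty and waiting_load_kg >= 0.75 * max_load_kg:
        return "send-empty-car"       # heavy waiting load gets an empty car
    if waiting_load_kg <= max_load_kg - car_load_kg:
        return "send-partial-car"     # waiting load fits the remaining room
    return "no-action"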

[0036] The following embodiments relate to various non-exhaustive ways in which the teachings herein may be combined or applied. It should be understood that the following embodiments are not intended to restrict the coverage of any claims that may be presented at any time in this document or in subsequent filings based on this document. No disclaimer is intended. The following embodiments are being provided for nothing more than merely illustrative purposes. It is contemplated that the various teachings herein may be arranged and applied in numerous other ways. It is also contemplated that some embodiments may omit certain features referred to in the below embodiments. Therefore, none of the aspects or features referred to below should be deemed critical unless otherwise explicitly indicated as such at a later date by the inventors or by a successor in interest to the inventors. If any claims are presented in this document or in subsequent filings related to this document that include additional features beyond those referred to below, those additional features shall not be presumed to have been added for any reason relating to patentability.

[0037] Embodiment 1

A system comprising: an elevator car; a depth camera situated at a first camera location; a thermal camera situated at a second camera location; an image processor communicatively coupled with the depth camera and the thermal camera; and an elevator controller communicatively coupled with the image processor; wherein the image processor is configured to: receive a set of image data from the depth camera and the thermal camera, wherein the set of image data comprises a set of thermal data and a set of spatial data; identify a set of discrete objects based upon the set of image data; determine a classification for each object of the set of discrete objects based upon the set of image data; estimate a weight for each object of the set of discrete objects based upon the classification; and provide a total weight estimate, based upon the weight for each object of the set of discrete objects, to the elevator controller; and wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.

Embodiment 2

The system of Embodiment 1, wherein the elevator controller and the image processor share a processor and a memory.

Embodiment 3

The system of any of Embodiments 1-2, further comprising a passenger waiting area, wherein the elevator car is configured to travel to the passenger waiting area, wherein the first camera location and the second camera location are located at the passenger waiting area.

Embodiment 4

The system of Embodiment 3, wherein the elevator controller is configured to: determine an additional occupancy weight based upon a maximum occupancy weight that is configured for the elevator car and a current occupancy weight that is provided by a sensor separate from the depth camera and the thermal camera; when the total weight estimate does not exceed the additional occupancy weight, prioritize sending the elevator car to the passenger waiting area; and when the total weight estimate is greater than about 75% of the maximum occupancy weight, and when the current occupancy weight indicates that the elevator car is empty, prioritize sending the elevator car to the passenger waiting area.

Embodiment 5

The system of Embodiment 4, wherein the sensor separate from the depth camera and the thermal camera is a floor sensor in the elevator car.

Embodiment 6

The system of Embodiment 1, wherein the first camera location and the second camera location are within the elevator car, and wherein the elevator controller is configured to: when the total weight estimate indicates that there are no passengers in the elevator car, cancel any floor stops that have been requested from a keypad of the elevator car; and when the total weight estimate indicates that the elevator car cannot accept any additional passengers, place the elevator car into a floor bypass mode.

Embodiment 7

The system of any of Embodiments 1-6, wherein the image processor is further configured to: map the set of thermal data to the set of spatial data to create a thermal spatial overlay; for each object of the set of discrete objects, identify a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data.

Embodiment 8

The system of Embodiment 7, wherein the image processor is configured to determine the classification for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects: determine a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determine a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; and determine a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score.

[0053] Embodiment 9

[0054] The system of Embodiment 8, wherein the image processor is configured to determine the classification for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects: determine a tertiary classification confidence score based upon one or more of: information from a sound detecting device indicating the presence of passengers; information from an RFID reader indicating the presence of passenger key cards; information from a motion sensing door counter indicating the number of passengers that entered the elevator car; and information from the elevator controller indicating the number of floors selected for disembarkation; and determine the final classification confidence score based upon the thermal classification confidence score, the spatial classification confidence score, and the tertiary classification score.

[0055] Embodiment 10

[0056] The system of any of Embodiments 7-9, wherein the image processor is configured to estimate the weight for each object of the set of discrete objects based on execution of a set of instructions that, when executed, causes the image processor to, for each object from the set of discrete objects that is classified as a human: determine a volume of the object based upon the subset of the set of spatial data associated with the object; determine a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reduce the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; and determine the weight based upon the volume of the object.

[0057] Embodiment 11

[0058] The system of any of Embodiments 7-10, wherein the image processor is configured to estimate the weight for each object of the set of discrete objects based on execution of a set of instructions that, when executed, causes the image processor to, for each object from the set of discrete objects that is classified as a non-human: determine a volume of the non-human object based upon the subset of the set of spatial data associated with the non-human object; determine if the non-human object is being carried by a human object based upon the subset of the set of spatial data associated with the non-human object and the subset of the set of thermal data associated with the non-human object; if the non-human object is being carried by a human object, determine the weight for the non-human object based upon the volume of the non-human object and a carried object weight calculation; and if the non-human object is not being carried by a human object, determine the weight for the non-human object based upon the volume of the non-human object and a heavy object weight calculation.

[0059] Embodiment 12

[0060] A method comprising the steps: at an image processor, receiving a set of image data from a depth camera at a first camera location and a thermal camera at a second camera location, wherein the set of image data comprises a set of thermal data and a set of spatial data; identifying a set of discrete objects based upon the set of image data; determining a classification for each object of the set of discrete objects based upon the set of image data; estimating a weight for each object of the set of discrete objects based upon the classification; and providing a total weight estimate to an elevator controller, the total weight estimate based upon the weight for each object of the set of discrete objects; wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.

[0061] Embodiment 13

[0062] The method of Embodiment 12, wherein the one or more elevator cars are configured to travel to a passenger waiting area, wherein the first camera location and the second camera location are located at the passenger waiting area.

[0063] Embodiment 14

[0064] The method of Embodiment 13, further comprising the steps: determining an additional occupancy weight based upon a maximum occupancy weight that is configured for an elevator car of the one or more elevator cars and a current occupancy weight that is provided by a sensor separate from the depth camera and the thermal camera; when the total weight estimate does not exceed the additional occupancy weight, prioritizing sending the elevator car to the passenger waiting area; and when the total weight estimate is greater than about 75% of the maximum occupancy weight, and when the current occupancy weight indicates that the elevator car is empty, prioritizing sending the elevator car to the passenger waiting area.
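
By way of non-limiting illustration only, the following sketch expresses the dispatch rules of this embodiment. The 0.75 threshold mirrors the "about 75%" figure above; the priority encoding and all identifiers are hypothetical.

    # Hypothetical sketch: prioritize a car when the waiting load fits, or
    # when a heavy waiting load justifies sending an empty car.

    def dispatch_priority(total_weight_estimate, max_occupancy_weight,
                          current_occupancy_weight, threshold=0.75):
        additional_capacity = max_occupancy_weight - current_occupancy_weight
        if total_weight_estimate <= additional_capacity:
            return "prioritize"  # the waiting passengers fit in this car
        if (total_weight_estimate > threshold * max_occupancy_weight
                and current_occupancy_weight == 0):
            return "prioritize"  # heavy crowd: send the empty car anyway
        return "normal"

    print(dispatch_priority(300.0, 1000.0, 500.0))   # fits: prioritize
    print(dispatch_priority(1100.0, 1000.0, 0.0))    # heavy crowd, empty car: prioritize
    print(dispatch_priority(800.0, 1000.0, 300.0))   # would overload, car occupied: normal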

[0065] Embodiment 15

[0066] The method of Embodiment 14, wherein the sensor separate from the depth camera and the thermal camera is a floor sensor in the elevator car.

[0067] Embodiment 16

[0068] The method of Embodiment 12, wherein the first camera location and the second camera location are within an elevator car of the one or more elevator cars, further comprising the steps: when the total weight estimate indicates that there are no passengers in the elevator car, canceling any floor stops that have been requested from a keypad of the elevator car; and when the total weight estimate indicates that the elevator car cannot accept any additional passengers, placing the elevator car into a floor bypass mode.
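
By way of non-limiting illustration only, the following sketch applies the two in-car rules of this embodiment against a hypothetical controller interface; the capacity margin and all identifiers are assumptions.

    # Hypothetical sketch: an empty car cancels keypad-requested stops; a
    # full car enters floor bypass mode and skips further hall calls.

    class CarController:
        def __init__(self, max_occupancy_weight):
            self.max_occupancy_weight = max_occupancy_weight
            self.requested_stops = set()
            self.bypass_mode = False

        def apply_load_estimate(self, total_weight_estimate, margin=75.0):
            if total_weight_estimate == 0:
                self.requested_stops.clear()  # no passengers: cancel floor stops
            if total_weight_estimate > self.max_occupancy_weight - margin:
                self.bypass_mode = True       # car cannot accept more passengers

    ctrl = CarController(max_occupancy_weight=1000.0)
    ctrl.requested_stops.update({3, 7})
    ctrl.apply_load_estimate(0.0)
    print(ctrl.requested_stops, ctrl.bypass_mode)  # set() False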

[0069] Embodiment 17

[0070] The method of any of Embodiments 12-16, further comprising the steps: at the image processor, mapping the set of thermal data to the set of spatial data to create a thermal spatial overlay; and, for each object of the set of discrete objects, identifying a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data.
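
By way of non-limiting illustration only, the following sketch builds the thermal spatial overlay of this embodiment by pairing co-located depth and thermal readings, then extracts the per-object subset. Equal frame resolutions and a pre-registered shared field of view are assumed; real cameras would require calibration.

    # Hypothetical sketch: pair each depth reading (m) with its temperature
    # (deg C), then pull out the pixels belonging to one segmented object.

    def thermal_spatial_overlay(depth_frame, thermal_frame):
        """Both frames are equally sized 2D lists of readings."""
        return [[(d, t) for d, t in zip(d_row, t_row)]
                for d_row, t_row in zip(depth_frame, thermal_frame)]

    def object_subset(overlay, mask):
        """Overlay pixels belonging to one object, per a boolean mask
        produced by the object identification step."""
        return [overlay[r][c] for r, row in enumerate(mask)
                for c, inside in enumerate(row) if inside]

    depth = [[2.1, 2.0], [2.0, 3.5]]
    thermal = [[34.0, 33.5], [33.8, 21.0]]
    mask = [[True, True], [True, False]]
    print(object_subset(thermal_spatial_overlay(depth, thermal), mask))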

[0071] Embodiment 18

[0072] The method of Embodiment 17, wherein the step of determining the classification for each object of the set of discrete objects comprises the steps, for each object from the set of discrete objects: at the image processor, determining a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determining a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; and determining a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score.
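
By way of non-limiting illustration only, the following sketch shows one way the thermal and spatial classification confidence scores of this embodiment could be computed and combined. The configured temperature range, the height and aspect-ratio heuristic for spatial similarity to a human, and the averaging rule are hypothetical.

    # Hypothetical sketch: binary confidence scores from simple heuristics,
    # averaged into the final classification confidence score.

    def thermal_confidence(mean_temp_c, configured_range=(30.0, 38.0)):
        lo, hi = configured_range
        return 1.0 if lo <= mean_temp_c <= hi else 0.0

    def spatial_confidence(height_m, width_m):
        """Crude human-shape test: plausible standing height and slenderness."""
        plausible_height = 1.0 <= height_m <= 2.2
        plausible_ratio = height_m / max(width_m, 0.01) >= 2.0
        return 1.0 if plausible_height and plausible_ratio else 0.0

    def final_confidence(thermal, spatial):
        return (thermal + spatial) / 2.0

    # A 1.7 m tall, 0.5 m wide object at 34.2 deg C scores as human.
    print(final_confidence(thermal_confidence(34.2), spatial_confidence(1.7, 0.5)))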

[0073] Embodiment 19

[0074] The method of any of Embodiments 17-18, wherein the step of estimating the weight for each object of the set of discrete objects comprises the steps, for each object from the set of discrete objects: at the image processor, determining a volume of the object based upon the subset of the set of spatial data associated with the object; determining a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reducing the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; and determining the weight based upon the volume of the object.

[0075] Embodiment 20

[0076] A system comprising: an elevator car; a depth camera situated at a first camera location; a thermal camera situated at a second camera location; an image processor communicatively coupled with the depth camera and the thermal camera; and an elevator controller communicatively coupled with the image processor; wherein a depth camera field of view and a thermal camera field of view overlap; wherein the image processor is configured to: receive a set of image data from the depth camera and the thermal camera, wherein the set of image data comprises a set of thermal data and a set of spatial data; identify a set of discrete objects based upon the set of image data; map the set of thermal data to the set of spatial data to create a thermal spatial overlay; for each object of the set of discrete objects, identify a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data; determine a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determine a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; determine a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score; when the final classification confidence score indicates that the object is a human, determine a volume of the object based upon the subset of the set of spatial data associated with the object; determine a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reduce the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; determine a weight for the object based upon the volume of the object; and provide a total weight estimate, based upon the weight for each object of the set of discrete objects, to the elevator controller; and wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.

Further variations on, features for, and applications of the inventors' technology will be apparent to, and could be practiced without undue experimentation by, those of ordinary skill in the art in light of this disclosure. Consequently, the protection accorded by this document, or by any related document, should not be limited to the material explicitly disclosed herein. Accordingly, we claim: