Title:
METHOD AND SYSTEM FOR IMAGE BASED OCCUPANCY DETECTION
Document Type and Number:
WIPO Patent Application WO/2020/040890
Kind Code:
A1
Abstract:
An image sensor includes an active pixel array including a number of pixels and image sensor control circuitry configured to perform a read operation only on a subset of the pixels of the active pixel array such that pixels not in the subset remain inactive. By reading out only the subset of pixels in the active pixel array and keeping the remaining pixels inactive, the temperature of the active pixel array may be reduced compared to a conventional read out process, thereby reducing thermal noise in the resulting pixel data.

Inventors:
ROBERTS JOHN (US)
BESSEMS RONALD (US)
BOWSER ROBERT (US)
Application Number:
PCT/US2019/040713
Publication Date:
February 27, 2020
Filing Date:
July 05, 2019
Assignee:
IDEAL IND LIGHTING LLC (US)
International Classes:
H04N5/345; F21V23/04; H04N5/232; H04N5/353; H04N5/378
Foreign References:
JP2015186155A 2015-10-22
US20170135179A1 2017-05-11
JP2015061109A 2015-03-30
US201615191753A 2016-06-24
US201815887096A 2018-02-02
US201715681941A 2017-08-21
Attorney, Agent or Firm:
WITHROW, Benjamin, S. (US)
Claims:

What is claimed is:

1. An image sensor comprising:

• an active pixel array comprising a plurality of pixels; and

• image sensor control circuitry configured to perform a read operation only on a subset of the plurality of pixels such that the plurality of pixels that are not in the subset of the plurality of pixels remain inactive.

2. The image sensor of claim 1 wherein the subset of the plurality of pixels comprises a first region of interest and a second region of interest that is noncontiguous with the first region of interest.

3. The image sensor of claim 2 wherein the image sensor is integrated into an intelligent lighting fixture comprising a solid-state light source and driver circuitry, wherein the driver circuitry is configured to:

• control one or more light output parameters of the solid-state light source;

• obtain pixel data for the subset of the plurality of pixels from the image sensor; and

• analyze the pixel data to determine if a person has entered a field of view of the image sensor.

4. The image sensor of claim 1 wherein the subset of the plurality of pixels is defined by a polygon having at least 5 sides.

5. The image sensor of claim 4 wherein the image sensor is integrated into an intelligent lighting fixture comprising a solid-state light source and driver circuitry, wherein the driver circuitry is configured to:

• control one or more light output parameters of the solid-state light source;

• obtain pixel data for the subset of the plurality of pixels from the image sensor; and

• analyze the pixel data to determine if a person has entered a field of view of the image sensor.

6. The image sensor of claim 1 wherein the subset of the plurality of pixels is defined by a rectangular area along one or more outside edges of the active pixel array.

7. The image sensor of claim 6 wherein the subset of the plurality of pixels forms a frame around the one or more outside edges of the active pixel array such that the plurality of pixels that are not in the subset of the plurality of pixels form a rectangle that is inset within the subset of the plurality of pixels.

8. The image sensor of claim 1 wherein each one of the plurality of pixels comprises:

• a light detecting element configured to transform light into an analog signal; and

• supporting circuitry configured to:

• during a read operation, process the analog signal from the light detecting element to provide a processed analog signal and provide the processed analog signal to downstream circuitry in the image sensor; and

• remain inactive when a read operation is not occurring.

9. The image sensor of claim 1 wherein each one of the plurality of pixels generates more heat during a read operation than when a read operation is not occurring.

10. The image sensor of claim 1 wherein each one of the plurality of pixels consumes more power during a read operation than when a read operation is not occurring.

11. The image sensor of claim 1 wherein the image sensor control circuitry is configured to:

• operate in a first mode of operation wherein the image sensor control circuitry is configured to perform the read operation on only the subset of the plurality of pixels such that the plurality of pixels that are not in the subset of the plurality of pixels remain inactive; and

• operate in a second mode of operation wherein the image sensor control circuitry is configured to perform a read operation on all of the plurality of pixels.

12. The image sensor of claim 11 wherein noise within the subset of the plurality of pixels is lower in the first mode of operation than in the second mode of operation.

13. The image sensor of claim 1 wherein the image sensor control circuitry is configured to capture and store pixel data from the subset of the plurality of pixels in a sparse data structure such that only the pixel data from the subset of the plurality of pixels is included in the sparse data structure.

14. The image sensor of claim 13 wherein the image sensor control circuitry is configured to facilitate a transfer of the sparse data structure to a remote device.

15. A method for detecting occupancy from an image sensor comprising:

• obtaining pixel data from the image sensor, wherein the pixel data includes pixel values for a subset of pixels in an active pixel array of the image sensor; and

• analyzing the pixel data to determine if a person has entered a field of view of the image sensor.

16. The method of claim 15 wherein obtaining the pixel data from the image sensor comprises performing a read operation only on the subset of pixels such that pixels in the active pixel array that are not in the subset of pixels remain inactive.

17. The method of claim 16 wherein each one of the pixels generates more heat during a read operation than when a read operation is not occurring.

18. The method of claim 16 wherein each one of the pixels consumes more power during a read operation than when a read operation is not occurring.

19. The method of claim 15 wherein the subset of pixels is defined by a rectangular area along one or more outside edges of the active pixel array.

20. The method of claim 19 wherein the subset of pixels in the pixel array forms a frame around the one or more outside edges of the pixel array such that the pixels in the active pixel array that are not in the subset of pixels form a rectangle that is inset within the subset of pixels.

21. The method of claim 15 wherein:

• the field of view of the image sensor includes one of an ingress point and an egress point to a space in which the image sensor is located; and

• the subset of pixels is located in the active pixel array such that a person entering or leaving the space via one of the ingress point and egress point will be detected by the subset of pixels.

22. The method of claim 15 further comprising, upon determining that a person has entered the field of view of the image sensor, obtaining additional pixel data from the image sensor, wherein the additional pixel data includes pixel values for all of the pixels in the active pixel array of the image sensor.

23. The method of claim 22 further comprising analyzing the additional pixel data to verify that the area within the field of view of the image sensor is occupied.

24. The method of claim 22 further comprising analyzing the additional pixel data to determine if the area within the field of view of the image sensor remains occupied.

Description:
METHOD AND SYSTEM FOR IMAGE BASED OCCUPANCY DETECTION

Field of the Disclosure

[0001] The present disclosure relates to methods and systems for detecting occupancy using images. In particular, the present disclosure relates to methods and systems for detecting occupancy that increase the efficiency and accuracy of occupancy detection via an image sensor.

Background

[0002] Modern lighting fixtures often include additional features above and beyond their ability to provide light. For example, many lighting fixtures now include communications circuitry for sending and receiving commands to and from other devices, control circuitry for setting the light output thereof, and sensor circuitry for measuring one or more environmental parameters. Recently, lighting fixtures have begun to incorporate image sensors. Image sensors in lighting fixtures are generally expected to detect occupancy (i.e., the presence of a person) in the area within the field of view of the image sensor. While there are several well-known methods for determining occupancy using an image sensor, these methods are complex and computationally expensive. As a result, lighting fixtures utilizing an image sensor to detect occupancy must include relatively powerful processing circuitry, which consumes additional power and drives up the cost of the lighting fixture. Accordingly, there is a need for systems and methods for detecting occupancy using an image sensor with reduced complexity and computational expense.

[0003] In one embodiment, an image sensor includes an active pixel array including a number of pixels and image sensor control circuitry configured to perform a read operation only on a subset of the pixels of the active pixel array such that pixels not in the subset remain inactive. By reading out only the subset of pixels in the active pixel array and keeping the remaining pixels inactive, the temperature of the active pixel array may be reduced compared to a conventional read out process, thereby reducing thermal noise in the resulting pixel data.

[0004] In one embodiment, a method for detecting occupancy from an image sensor includes obtaining pixel data from the image sensor and analyzing the pixel data to determine if a person has entered the field of view of the image sensor. Notably, the pixel data includes pixel values only for a subset of pixels in an active pixel array of the image sensor. By obtaining and analyzing pixel data only for a subset of pixels, the computational expense of determining if a person has entered the field of view of the image sensor may be significantly reduced.

[0005] Those skilled in the art will appreciate the scope of the present disclosure and realize additional aspects thereof after reading the following detailed description of the preferred embodiments in association with the accompanying drawing figures.

Brief Description of the Drawing Figures

[0006] The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the disclosure, and together with the description serve to explain the principles of the disclosure.

[0007] Figure 1 illustrates an image sensor according to one embodiment of the present disclosure.

[0008] Figure 2 illustrates a pixel of an active pixel array according to one embodiment of the present disclosure.

[0009] Figure 3 is a flow diagram illustrating a method for detecting occupancy using an image sensor according to one embodiment of the present disclosure.

[0010] Figure 4 illustrates a read out pattern for an active pixel array according to one embodiment of the present disclosure.

[0011] Figure 5 is a flow diagram illustrating a method for detecting occupancy using an image sensor according to one embodiment of the present disclosure.

[0012] Figures 6A through 6C illustrate read out patterns for an active pixel array according to various embodiments of the present disclosure.

[0013] Figure 7 illustrates an intelligent lighting fixture according to one embodiment of the present disclosure.

[0014] Figure 8 illustrates an intelligent lighting network according to one embodiment of the present disclosure.

Detailed Description

[0015] The embodiments set forth below represent the necessary information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims.

[0016] It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

[0017] It will be understood that when an element such as a layer, region, or substrate is referred to as being "on" or extending "onto" another element, it can be directly on or extend directly onto the other element or intervening elements may also be present. In contrast, when an element is referred to as being "directly on" or extending "directly onto" another element, there are no intervening elements present. Likewise, it will be understood that when an element such as a layer, region, or substrate is referred to as being "over" or extending "over" another element, it can be directly over or extend directly over the other element or intervening elements may also be present. In contrast, when an element is referred to as being "directly over" or extending "directly over" another element, there are no intervening elements present. It will also be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present.

[0018] Relative terms such as "below" or "above" or "upper" or "lower" or "horizontal" or "vertical" may be used herein to describe a relationship of one element, layer, or region to another element, layer, or region as illustrated in the Figures. It will be understood that these terms and those discussed above are intended to encompass different orientations of the device in addition to the orientation depicted in the Figures.

[0019] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" when used herein specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0020] Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

[0021] Figure 1 shows an image sensor 10 according to one embodiment of the present disclosure. The image sensor 10 includes an active pixel array 12, control circuitry 14, pixel selection circuitry 16, sampling circuitry 18, analog-to-digital converter circuitry 20, an output register 22, and an output 24. The control circuitry 14 is coupled to each one of the pixel selection circuitry 16, the sampling circuitry 18, the analog-to-digital converter circuitry 20, and the output register 22. The pixel selection circuitry 16 is coupled to the active pixel array 12. The sampling circuitry 18 is coupled between the active pixel array 12 and the analog-to-digital converter circuitry 20. The analog-to-digital converter circuitry 20 is coupled to the output register 22, which is in turn coupled to the output 24.

[0022] In operation, the control circuitry 14 provides control signals to each one of the pixel selection circuitry 16, the sampling circuitry 18, the analog-to-digital converter circuitry 20, and the output register 22 to facilitate capturing an image frame and providing a digitized version thereof at the output 24 of the image sensor 10. The pixel selection circuitry 16 selects one or more pixels in the active pixel array 12 to be reset and/or read out. In a conventional rolling shutter read process, the pixel selection circuitry 16 serially selects rows of pixels in the active pixel array 12 to be reset and subsequently read out one after the other. Selected pixels provide analog signals proportional to an amount of light detected thereby to the sampling circuitry 18. The analog-to-digital converter circuitry 20 digitizes the analog signals from the sampling circuitry 18 into pixel data and provides the pixel data to the output register 22, where it can be retrieved via the output 24.

[0023] Figure 2 shows details of a pixel 26 in the active pixel array 12 according to one embodiment of the present disclosure. The pixel 26 includes a light detecting element 28 and support circuitry 30. The light detecting element 28 may be a photodiode, photogate, or the like. The support circuitry 30 generally includes one or more switching devices such as transistors that reset and facilitate read out of the pixel 26. One or more select signals provided to a select signal input 32 (from the pixel selection circuitry 16) initiate reset and read out of the pixel 26. During a read operation, analog signals indicative of the amount of light detected by the pixel 26 are provided to a column bus output 34, which is coupled to the sampling circuitry 18.

[0024] For purposes of discussion herein, the pixel 26 generally operates in one of three states: idle, reset, and read out. In an idle state, photons that collide with the light detecting element 28 dislodge electrons that accumulate in a potential well of the light detecting element 28. The number of electrons that accumulate in the potential well of the light detecting element 28 is proportional to the number of photons that contact the light detecting element 28. In the idle state, the components of the support circuitry 30 remain off. Accordingly, the pixel 26 consumes minimal if any power and dissipates little if any heat in the idle state. The idle state is also referred to as an inactive state herein. During a reset operation, one or more reset switching components (e.g., transistors) in the support circuitry 30 flush out the electrons accumulated in the potential well of the light detecting element 28. Some power is consumed by the one or more reset switching components and thus some heat is generated by the pixel 26 during the reset operation. During a read operation, one or more read out switching elements in the support circuitry are turned on to process and transfer the charge stored in the potential well of the light detecting element 28 (e.g., as a voltage or current) to the column bus output 34. Some power is consumed by the one or more read out switching components and thus some heat is generated by the pixel 26 during the read operation.
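The three pixel states described above can be sketched as a small model. All names and the unit energy costs below are illustrative assumptions, not taken from the disclosure; the point is simply that charge accumulates for free in the idle state, while power is consumed (and heat generated) only during reset and read operations.

```python
# Toy model of the idle / reset / read out states of a pixel.
# Energy figures are arbitrary units for illustration only.

class Pixel:
    def __init__(self):
        self.charge = 0        # electrons accumulated in the potential well
        self.energy_used = 0   # consumed only while switching devices are on

    def idle(self, photons):
        # Idle (inactive) state: support circuitry is off, so charge
        # accumulates in proportion to incident photons at no power cost.
        self.charge += photons

    def reset(self):
        # Reset: switching components flush the potential well,
        # consuming some power and dissipating some heat.
        self.charge = 0
        self.energy_used += 1

    def read(self):
        # Read out: switching components transfer the stored charge
        # to the column bus, again consuming power.
        self.energy_used += 1
        return self.charge

p = Pixel()
p.reset()
p.idle(photons=50)   # integration time: accumulation costs nothing
value = p.read()     # value == 50; energy spent only on reset + read
```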

[0025] Conventionally, all of the pixels in the active pixel array 12 are read out to provide a single image frame from the image sensor 10. Generally, this is done as part of a rolling shutter readout, wherein every pixel in a row of pixels is reset, allowed to remain in an idle state for some amount of time (i.e., the integration time), then read out. This process repeats for each row of pixels until all of the pixels in the active pixel array 12 have been reset and subsequently read out. As the number of rows in an active pixel array 12 increases, the time to capture and read out a single image frame also increases. This may limit the number of image frames that can be captured in a given period of time, known as the frame rate of the image sensor. A limited frame rate may be problematic in some applications. Additionally, the resulting digitized image frame including pixel data for all of the pixels in the active pixel array 12 may be quite large. This may result in increased transfer time of the digitized image frame between the image sensor 10 and external processing circuitry (not shown), as such a transfer is often performed serially. Further, this may result in increased analysis time of the digitized image frame by said external processing circuitry, for example, to detect occupancy in the image frame or a set of image frames. Finally, as discussed above, every reset and read out of a pixel in the active pixel array 12 consumes power and dissipates heat. Over time, continually resetting and reading out every pixel in the active pixel array 12 may raise the temperature of the active pixel array 12. As the temperature of the active pixel array 12 increases, the signal to noise ratio of each one of the pixels therein decreases due to thermal noise. This may make it difficult to analyze the resulting image frame or set of image frames, for example, to detect occupancy.
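The scaling argument above can be illustrated with simple arithmetic. The row count and per-row read time below are made-up numbers, not figures from the disclosure; they only show how reading a small band of rows shortens frame time relative to a full readout.

```python
# Hypothetical numbers: a 1080-row array read at 15 microseconds per
# row (both values are assumptions for illustration only).
ROWS = 1080
ROW_READ_TIME_US = 15

# Conventional rolling-shutter readout: every row is read in sequence.
full_frame_us = ROWS * ROW_READ_TIME_US

# Subset readout: e.g. only a 10-row band at the top and bottom edges.
border_rows = 2 * 10
subset_frame_us = border_rows * ROW_READ_TIME_US

# Ratio of the two read times.
speedup = full_frame_us / subset_frame_us
```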

[0026] The inventors of the present disclosure discovered that it is highly inefficient and unnecessary to analyze the entirety of an image frame or set of image frames to detect a transition from an unoccupied state to an occupied state. This is because persons entering the field of view of an image sensor necessarily must first pass through one or more areas within the field of view before being present in other parts of the field of view. For example, for an image sensor in the middle of a room, a person must necessarily pass through an outer edge of the field of view before being present in the center of the field of view. As another example, for an image sensor located in a hallway where the field of view includes the entirety of the area between the two enclosing walls of the hallway, a person must necessarily pass through either the top or the bottom of the field of view before being present in the center of the field of view. As yet another example, for an image sensor with a field of view including the only door to a room and it is known that the room is empty (e.g., due to the absence of occupancy for a given period of time), a person must necessarily pass through the area of the field of view near the door before being present in any other part of the field of view.

[0027] Accordingly, Figure 3 is a flow diagram illustrating a method for detecting occupancy using an image sensor according to one embodiment of the present disclosure. The method starts by obtaining pixel data for a subset of pixels in an active pixel array of an image sensor such that the pixels not in the subset remain inactive (step 100). As discussed herein, when a pixel is inactive or idle, the supporting circuitry therein is off and thus the pixel is consuming minimal if any power and producing minimal if any heat. Accordingly, obtaining the pixel data from the subset of pixels in the active pixel array involves reading out only the subset of pixels while allowing the remaining pixels to remain inactive. Next, the pixel data is analyzed to determine if a person has entered the field of view of the image sensor (step 102). Details regarding analyzing the pixel data to determine occupancy therefrom can be found in U.S. Patent Application Numbers 15/191,753, 15/887,096, and 15/681,941, the contents of which are hereby incorporated by reference in their entirety. It is then determined if a person has entered the field of view (step 104). If a person has entered the field of view, additional pixel data may optionally be obtained (step 106), where the additional pixel data contains pixel data for a larger portion of pixels in the active pixel array than the subset of pixels. For example, the additional pixel data may contain pixel data for all of the pixels in the active pixel array. Finally, the additional pixel data may be analyzed to determine if an area within the field of view of the image sensor is occupied (step 108). Step 108 may be used as a verification of step 102, or may be used to verify the continuing occupancy of the area within the field of view of the image sensor. Once again, details regarding analyzing the pixel data to determine occupancy therefrom can be found in the above-mentioned patent applications.
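The flow of Figure 3 (steps 100 through 108) can be sketched as follows. Here `read_pixels` and `detect_person` are hypothetical stand-ins: the actual analysis is described in the applications incorporated by reference and is not specified in this document.

```python
# Sketch of the occupancy-detection method of Figure 3.

def detect_occupancy(read_pixels, detect_person, subset, full_array):
    # Step 100: read only the subset; all other pixels stay inactive.
    pixel_data = read_pixels(subset)
    # Steps 102/104: analyze the subset to decide whether a person
    # has entered the field of view.
    if not detect_person(pixel_data):
        return False
    # Step 106 (optional): read a larger portion of the array, here
    # modeled as the full array.
    additional_data = read_pixels(full_array)
    # Step 108: verify occupancy using the additional pixel data.
    return detect_person(additional_data)

# Toy usage: the "image" is a dict of pixel values, and a person is
# "detected" whenever any read value exceeds a threshold.
image = {(0, 0): 5, (0, 1): 200, (1, 1): 3}
read_pixels = lambda coords: [image.get(c, 0) for c in coords]
detect_person = lambda data: any(v > 100 for v in data)

occupied = detect_occupancy(read_pixels, detect_person,
                            subset=[(0, 0), (0, 1)],
                            full_array=[(0, 0), (0, 1), (1, 1)])
```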

[0028] By obtaining pixel data for only the subset of pixels in the active pixel array such that the pixels not in the subset remain inactive, the temperature of the active pixel array can be kept much lower than if all of the pixels in the active pixel array were read out. This results in significant improvements in the signal to noise ratio of the pixels within the subset due to a reduction in thermal noise. Such improvements may be especially evident in environments that are hot and dark, since signal to noise ratios in these environments tend to be highly unfavorable. Further, analyzing the pixel data to determine if a person has entered the field of view of the image sensor is far less computationally expensive due to the reduction in the total amount of pixel data to analyze. Optionally obtaining and analyzing the additional pixel data may improve the reliability of detecting occupancy in this manner.

[0029] By choosing the subset of pixels in the active pixel array wisely, the efficacy of detection of a person entering the field of view of the image sensor can be very high. Notably, the subset of pixels may be chosen such that all of the pixels within the subset reside in a single contiguous area or such that the pixels within the subset are located in separate, discrete areas. In embodiments in which the pixels within the subset reside in a single contiguous area, the contiguous area may be defined by a polygon having any number of sides; in some embodiments, the polygon has at least five sides (i.e., a non-rectangular shape).

[0030] In one embodiment, the subset of pixels is chosen such that the pixels reside along an outer border of the active pixel array as illustrated in Figure 4. Specifically, Figure 4 illustrates an exemplary readout pattern 36 for an active pixel array in which only the pixels along the outer edges of the active pixel array (illustrated by a group of shaded pixels) are read out, while the remaining pixels along the interior of the active pixel array remain inactive or idle. A person entering the field of view of an image sensor will necessarily first pass through an outer edge of the field of view before arriving in any other portion thereof. Accordingly, by analyzing pixel data from only the pixels in an area along the outer edges of the active pixel array, one can easily detect persons entering the field of view of the image sensor using only a subset of the pixels therein.
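The border pattern of Figure 4 amounts to selecting every pixel within a fixed distance of an outside edge of the array. A minimal sketch, with the array size and border width as illustrative assumptions:

```python
# Build the "frame" readout subset of Figure 4: pixels within `width`
# rows/columns of any outside edge of a rows x cols array.

def border_subset(rows, cols, width):
    """Return the set of (row, col) coordinates forming a frame of the
    given width around the outside edges of the array."""
    return {
        (r, c)
        for r in range(rows)
        for c in range(cols)
        if r < width or r >= rows - width or c < width or c >= cols - width
    }

# Example: an 8x8 array with a 1-pixel-wide frame. Interior pixels
# (the inset rectangle) are excluded and would remain inactive.
subset = border_subset(rows=8, cols=8, width=1)
```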

[0031] In some embodiments, it may be desirable to choose the subset of pixels such that it resides around or near one or more ingress and/or egress points within the field of view of the image sensor. Accordingly, Figure 5 is a flow diagram illustrating a method for detecting occupancy using an image sensor according to an additional embodiment of the present disclosure. The method starts by determining a portion of a field of view of an image sensor that is near an ingress and/or egress point (step 200). The ingress and/or egress point may be a door, a hallway, or the like. In general, the ingress and/or egress point is one that a person must travel through in order to gain access to the remaining portion of the field of view. Next, a subset of pixels in an active pixel array that detect light in the area near the ingress and/or egress point is determined (step 202). This may involve a simple mapping of an area of the field of view to corresponding pixels in the active pixel array that detect light within this area. Next, pixel data is obtained from the subset of pixels in the active pixel array such that the pixels not in the subset remain inactive (step 204). As discussed herein, when a pixel is inactive, the supporting circuitry therein is off and thus the pixel is consuming minimal if any power and producing minimal if any heat. Accordingly, obtaining the pixel data from the subset of pixels in the active pixel array involves reading out only the subset of pixels while allowing the remaining pixels to remain inactive. The pixel data is then analyzed to determine if a person has entered the field of view of the image sensor (step 206). Once again, details regarding analyzing the pixel data to determine occupancy therefrom can be found in the above-mentioned patent applications. If a person has entered the field of view, additional pixel data may optionally be obtained (step 208), where the additional pixel data contains pixel data for a larger portion of pixels in the active pixel array than in the subset of pixels. For example, the additional pixel data may contain pixel data for all of the pixels in the active pixel array. Finally, the additional pixel data may be analyzed to determine if an area within the field of view of the image sensor is occupied (step 210). Step 210 may be used as a verification of step 206, or may be used to verify the continuing occupancy of the area within the field of view of the image sensor.
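The mapping of step 202, from an area of the field of view to the pixels that image it, might look like the following. It assumes a simple linear relationship between normalized field-of-view coordinates and pixel indices; a real mapping would depend on the sensor's optics and calibration.

```python
# Map a normalized field-of-view rectangle (0..1 in each axis) to the
# set of pixel coordinates that observe it, assuming a linear mapping.

def fov_region_to_pixels(rows, cols, x0, y0, x1, y1):
    """Return pixel (row, col) coordinates covering the normalized
    field-of-view rectangle [x0, x1) x [y0, y1)."""
    r0, r1 = int(y0 * rows), int(y1 * rows)
    c0, c1 = int(x0 * cols), int(x1 * cols)
    return {(r, c) for r in range(r0, r1) for c in range(c0, c1)}

# Example: pixels covering the lower-left quarter of the field of view
# of a 10x10 array (e.g., a doorway located in that corner).
subset = fov_region_to_pixels(10, 10, 0.0, 0.5, 0.5, 1.0)
```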

[0032] Figures 6A through 6C illustrate exemplary readout patterns 36 for an active pixel array according to various embodiments of the present disclosure. With respect to Figure 6A, only those pixels in the lower left corner of the active pixel array (illustrated by a group of shaded pixels) are read out, while the remaining pixels are inactive or idle. Such a pattern may be effective, for example, when there is only one ingress and/or egress point to the field of view of the image sensor (e.g., a door leading to a room in which the image sensor is located) and it is in the lower left corner thereof. Notably, the subset of pixels forms a polygon including six sides. As discussed above, the subset of pixels may be chosen to occupy an arbitrary area defined by a polygon having any number of sides.

[0033] With respect to Figure 6B, only those pixels along a lower left edge, lower bottom edge, and lower right edge of the active pixel array (forming a "U" shape) are read out, while the remaining pixels are inactive or idle. Such a pattern may be effective, for example, when it is known that a person must pass through the lower outside edges of the image frame before being present in any other portion of the field of view.

[0034] With respect to Figure 6C, the subset of pixels includes a first region of interest 38A in the lower left corner of the active pixel array and a second region of interest 38B on the right side of the active pixel array. Notably, the first region of interest 38A and the second region of interest 38B are not contiguous. While not shown, the first region of interest 38A and the second region of interest 38B may also be polygons having any number of sides. Further, while only the first region of interest 38A and the second region of interest 38B are shown, the subset of pixels may include any number of separate regions of interest that are either discrete or semi-contiguous without departing from the principles of the present disclosure. The pattern shown in Figure 6C may be effective, for example, when a person must pass through the lower left side of the field of view or the right side of the field of view before being present in any other portion of the field of view.
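The noncontiguous subset of Figure 6C can be represented as a union of rectangular regions of interest. The coordinates below are illustrative; each region is given as a half-open `(row_start, row_stop, col_start, col_stop)` tuple.

```python
# Union of pixel coordinates over a list of rectangular regions of
# interest, allowing noncontiguous subsets like Figure 6C.

def roi_subset(regions):
    """Return the set of (row, col) coordinates covered by any of the
    given half-open rectangular regions."""
    subset = set()
    for r0, r1, c0, c1 in regions:
        subset |= {(r, c) for r in range(r0, r1) for c in range(c0, c1)}
    return subset

# Example on an 8x8 array: a lower-left corner region (first region of
# interest) and a right-edge region (second region of interest).
subset = roi_subset([(5, 8, 0, 3), (2, 6, 6, 8)])
```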

[0035] Conventional image sensors are not able to read out pixels in an active pixel array in an arbitrary pattern. The image sensor 10 discussed herein may include modifications thereto such that the pixel selection circuitry 16 is capable of selecting pixels in an arbitrary fashion in order to read out only a subset of the pixels in the active pixel array 12 such that the subset includes polygons having any number of sides and/or noncontiguous regions of interest.

[0036] The image sensor 10 may further be configured to read out the pixels in the subset of pixels in a continuous fashion such that there are no pauses for pixels that are not being read out. This may require the control circuitry 14 to compute and implement specialized timing for the various parts of the image sensor 10 specific to the subset of pixels such that the pixel data can be properly sampled. Accordingly, the control circuitry 14 may be configured to alter the timing of pixel selection by the pixel selection circuitry 16 in order to provide a proper read out of the subset of pixels. Further, the control circuitry 14 may be configured to change operating parameters of the sampling circuitry 18 and the analog-to-digital converter circuitry in order to properly digitize the pixel data from the subset of pixels. Continuously reading out the subset of pixels without pausing for those pixels that are not being read out may significantly lower read times and thus allow for increases in frame rate above and beyond that which is achievable by a conventional image sensor.
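The frame-rate benefit of a pause-free read out can be illustrated with a simple timing model. The model below is an assumption for illustration only; real sensors add row-select and analog-to-digital conversion overheads that the disclosure does not quantify:

```python
def max_frame_rate(active_pixels, pixel_read_time_s, overhead_s=0.0):
    """Upper bound on frame rate when only active pixels consume read
    slots (simplified model: total read time scales with the number of
    pixels actually read, plus a fixed per-frame overhead)."""
    return 1.0 / (active_pixels * pixel_read_time_s + overhead_s)


# Reading a 10,000-pixel subset instead of a full 640 x 480 array
# shortens the read time by a factor of ~30 under this model.
full_rate = max_frame_rate(640 * 480, 100e-9)
subset_rate = max_frame_rate(10_000, 100e-9)
```

This is the intuition behind the frame-rate increase: skipping inactive pixels removes their read slots entirely rather than merely discarding their data.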

[0037] Finally, the control circuitry 14 may be configured to change operating parameters of the output register 22 such that the pixel data for the subset of pixels is properly arranged and thus communicated to external circuitry for analysis. In particular, the image sensor 10 may be configured to capture, store, and facilitate transfer of the pixel data for the subset of pixels as a sparse data structure that does not include reserved spots (i.e., blank spaces) for pixels that are not in the subset of pixels. This may allow for a much smaller data structure to be provided via the output 24, improving transmit times when provided to another device.
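The sparse data structure described above can be sketched as a list of (row, column, value) triples with no reserved slots for inactive pixels. The helper names below are hypothetical, not from the disclosure:

```python
def pack_sparse(mask, frame):
    """Pack only active-pixel values into a compact structure with no
    blank spaces reserved for pixels outside the subset."""
    return [(r, c, frame[r][c])
            for r, line in enumerate(mask)
            for c, active in enumerate(line) if active]


def unpack_sparse(packed, width, height, fill=0):
    """Rebuild a full frame from the sparse structure for analysis by
    external circuitry, filling inactive pixels with a default value."""
    frame = [[fill] * width for _ in range(height)]
    for r, c, value in packed:
        frame[r][c] = value
    return frame
```

When the subset is small relative to the full array, the packed structure is correspondingly small, which is the source of the improved transmit times noted above.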

[0038] The image sensor 10 discussed herein may be incorporated into an intelligent lighting fixture 40 as shown in Figure 7. The intelligent lighting fixture 40 includes the image sensor 10, driver circuitry 42, communications circuitry 44, and a solid-state light source 46. The driver circuitry 42 is coupled to the image sensor 10, the communications circuitry 44, and the solid-state light source 46. The driver circuitry 42 may use the communications circuitry 44 to communicate with other devices such as other lighting fixtures within a distributed lighting network. Further, the driver circuitry 42 may control one or more light output parameters (e.g., brightness, color temperature, color rendering index, and the like) of the solid-state light source by providing one or more driver signals thereto. Finally, the driver circuitry 42 may obtain and analyze pixel data from the image sensor according to the methods discussed above with respect to Figure 3 and Figure 5 to determine and react to occupancy. By using pixel data for only a subset of pixels in the active pixel array 12 of the image sensor 10, the processing resources of the driver circuitry 42 may be significantly conserved. This may in turn lead to reduced power consumption of the lighting fixture 40, reduced cost due to the reduced processing requirements of the driver circuitry 42, and improved longevity of the driver circuitry 42.

[0039] The intelligent lighting fixture 40 may be one of many intelligent lighting fixtures 40 in an intelligent lighting network 48, as shown in Figure 8. The intelligent lighting fixtures 40 may communicate with one another in order to provide certain functionality such as responding to occupancy events. Each one of the intelligent lighting fixtures 40 may be configured to detect occupancy using the image sensor 10 as discussed above. However, each one of the intelligent lighting fixtures 40 may be configured with a different read out pattern for the active pixel array 12 of the image sensor 10, such that the subset of pixels used to determine occupancy is different for different ones of the intelligent lighting fixtures 40.

[0040] In one embodiment, the intelligent lighting fixtures 40 may be configured to save processing resources by only requiring certain ones of the lighting fixtures 40 to detect occupancy when a space is unoccupied. In particular, those intelligent lighting fixtures 40 where a field of view of the image sensor 10 thereof includes an ingress and/or egress point to a space in which the intelligent lighting fixtures 40 are located may be tasked with detecting occupancy when the space is currently unoccupied, while the remaining intelligent lighting fixtures 40 are not required to do so. Those lighting fixtures 40 where a field of view of the image sensor 10 thereof does not include an ingress and/or egress point to the space do not need to detect occupancy when the space is currently unoccupied, because a person cannot enter the space without first passing through an ingress and/or egress point thereto. When occupancy is detected by one of the intelligent lighting fixtures 40 tasked with detecting occupancy, the remaining intelligent lighting fixtures 40 may begin to detect occupancy as well in order to verify occupancy or to determine it more accurately or precisely.
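The handoff scheme described above can be sketched as follows. The class and function names are illustrative only, not from the disclosure:

```python
class Fixture:
    """Minimal model of an intelligent lighting fixture: it either does
    or does not watch an ingress/egress point of the space."""

    def __init__(self, name, watches_ingress):
        self.name = name
        self.watches_ingress = watches_ingress


def fixtures_to_scan(fixtures, space_occupied):
    """When the space is vacant, only fixtures whose field of view covers
    an ingress/egress point scan for occupancy; once occupancy is
    detected, all fixtures scan to verify and refine the determination."""
    if space_occupied:
        return list(fixtures)
    return [f for f in fixtures if f.watches_ingress]
```

For example, with one door-facing fixture and one interior fixture, only the door-facing fixture scans while the space is vacant, and both scan once occupancy is detected.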

[0041] Those intelligent lighting fixtures 40 in the intelligent lighting network 48 that are tasked with detecting occupancy when the space is currently unoccupied may utilize the methods discussed above, wherein only a subset of pixels in the active pixel array 12 of the image sensor 10 are used for doing so. The read out patterns for each one of the lighting fixtures 40 may be configured to accurately detect occupancy with minimal processing overhead as discussed above. The read out patterns may be determined by persons familiar with the space and programmed into the intelligent lighting fixtures 40, or may be determined by the intelligent lighting fixtures 40 themselves or a device in the intelligent lighting network 48 with access to sensor data from the intelligent lighting fixtures 40, for example, using learning algorithms.

[0042] Those skilled in the art will recognize improvements and modifications to the preferred embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein and the claims that follow.