

Title:
CAMERA ASSEMBLY WITH AN INTEGRATED CONTENT ANALYZER
Document Type and Number:
WIPO Patent Application WO/2012/093381
Kind Code:
A1
Abstract:
A programmable integrated circuit for imagery data content analysis, the integrated circuit comprising: a programmable detection rule storage unit configured to store a set of detection rules; a communication interface enabling programming of said detection rule storage unit using an external device; and a content analyzer having circuitry for detecting, based on the set of detection rules, a pattern in imagery data, and for outputting a size-reduced version of the imagery data.

Inventors:
SHERAIZIN VITALY (IL)
TSIPIS FELIX (IL)
Application Number:
PCT/IL2011/000003
Publication Date:
July 12, 2012
Filing Date:
January 03, 2011
Assignee:
SHERAIZIN VITALY (IL)
TSIPIS FELIX (IL)
International Classes:
G06K9/00
Foreign References:
US20100183199A12010-07-22
US20080101700A12008-05-01
Attorney, Agent or Firm:
WEBB, Cynthia et al. (P.O. Box 2189, Rehovot, IL)
Claims:
CLAIMS

What is claimed is:

1. A programmable integrated circuit for imagery data content analysis, the integrated circuit comprising:

a programmable detection rule storage unit configured to store a set of detection rules;

a communication interface enabling programming of said detection rule storage unit using an external device; and

a content analyzer having circuitry for detecting, based on the set of detection rules, a pattern in imagery data, and for outputting a size-reduced version of the imagery data.

2. The integrated circuit according to claim 1, wherein the outputting of the size-reduced version of the imagery data comprises omitting a part of the imagery data which is not part of the detected pattern.

3. The integrated circuit according to claim 1, wherein the outputting of the size-reduced version of the imagery data comprises transmitting a part of the imagery data which is not part of the detected pattern in a quality lower than a quality of imagery data being part of the detected pattern.

4. The integrated circuit according to claim 3, wherein the quality comprises resolution, color depth, bit rate, frame rate, compression, or any combination thereof.

5. An image-sensing integrated circuit configured to reduce its data output, the integrated circuit comprising:

an image sensor configured to capture an image and produce imagery data; and a content analyzer having circuitry for detecting a region of interest in the imagery data and for outputting, based on the detection, a size-reduced version of the imagery data.

6. The integrated circuit according to claim 5, being a monolithic integrated circuit.

7. The integrated circuit according to claim 5, wherein said image sensor and said content analyzer are each incorporated in a separate body, and are both stacked into said integrated circuit.

8. The integrated circuit according to claim 5, wherein the circuitry of said content analyzer is configured to detect the region of interest based on detection of a pre-determined pattern in the imagery data.

9. A method for reducing the size of imagery data output from a camera assembly, the method comprising:

transmitting imagery data from an image sensor of the camera assembly to a content analysis integrated circuit of the camera assembly;

detecting, using the content analysis integrated circuit, a pattern in the imagery data; and

producing, based on the detected pattern and using the content analysis integrated circuit, a size-reduced version of the imagery data.

10. The method according to claim 9, further comprising transmitting the size-reduced version of the imagery data, over an output channel, to a location external to the camera assembly.

11. The method according to claim 9, further comprising transmitting the imagery data from the image sensor to the content analysis integrated circuit over a high-speed data bus.

12. The method according to claim 11, wherein the high-speed data bus is of a transfer speed sufficient to accommodate an entirety of the imagery data in real time.

13. A camera assembly configured to internally reduce its data output, the camera assembly comprising:

an image sensor configured to capture imagery data;

a content analysis integrated circuit connected to said image sensor and being configured to detect a region of interest in the imagery data and to output, based on the detection, a size-reduced version of the imagery data; and a communication interface connected to said content analysis integrated circuit and being configured to transmit the size-reduced version of the imagery data to a location external to the camera assembly.

14. The camera assembly according to claim 13, wherein said content analysis integrated circuit is configured to detect the region of interest based on detection of a pre-determined pattern in the imagery data.

15. The camera assembly according to claim 13, wherein said content analysis integrated circuit is connected to said image sensor using a high-speed data bus.

16. The camera assembly according to claim 15, wherein said high-speed data bus is of a transfer speed sufficient to accommodate an entirety of the imagery data in real time.

17. The camera assembly according to claim 13, wherein the imagery data comprises real-time imagery data.

18. The camera assembly according to claim 13, further comprising a circuit board having thereon at least one component selected from the group consisting of: said image sensor, said content analysis integrated circuit and said communication interface.

19. The camera assembly according to claim 13, wherein said content analysis integrated circuit is a field-programmable gate array (FPGA).

20. The camera assembly according to claim 13, wherein said content analysis integrated circuit is an application-specific integrated circuit (ASIC).

21. The camera assembly according to claim 13, wherein said camera assembly further comprises a second image sensor connected to said content analysis integrated circuit.

Description:
CAMERA ASSEMBLY WITH AN INTEGRATED CONTENT ANALYZER

FIELD OF THE INVENTION

The invention relates to a camera assembly integrally including an image sensor and a content analyzer.

BACKGROUND OF THE INVENTION

Remote visual monitoring systems are in widespread use in many countries, and provide remote supervision of areas where additional observation is needed. Apart from stationary surveillance systems, visual monitoring is utilized in applications such as UAVs (unmanned aerial vehicles), AUVs (autonomous underwater vehicles), land robots and more.

Typically, such remote monitoring systems include a set of dispersed cameras which deliver captured imagery data to a central monitoring station through a dedicated or a public network, wired or wireless. The central monitoring station often includes a storage device for storing the incoming imagery data, as well as a CPU (central processing unit) usually capable of performing a certain level of content analysis on the imagery data.

The CPU may employ advanced imagery analytic techniques, including pattern detection and/or facial recognition, which enable identifying and tracking individuals, vehicles and optionally other objects as they appear, move, or conduct any suspicious activity. In some cases, advanced imagery analytic techniques may enable autonomous operation of robots, such as finding objects, avoiding obstacles and the like.

However, since the majority of the data processing is usually performed at the central monitoring station, remote monitoring systems (especially ones having high-resolution cameras and real-time requirements) may require a large amount of communication bandwidth between the individual cameras and the central monitoring station, as well as intense data processing resources.

SUMMARY OF THE INVENTION

The invention pertains to a camera assembly integrally including an image sensor and a content analysis chip (FPGA, ASIC or the like), connected to each other by a fast data connection. The data connection optionally enables a transfer speed sufficient to support the entire data output from the image sensor.

Optionally, the camera assembly has the content analysis features incorporated onto the same die as the image sensor.

The camera assembly offers a significant reduction in communication bandwidth needs when transmitting the captured data from the camera. This is achieved by real-time pattern detection performed already at the content analysis chip level, which filters out non-significant data. In addition, detecting patterns at the camera assembly itself may obviate the need to do so later using central processing resources external to the camera assembly.

In addition, the camera assembly offers low power consumption. According to power consumption testing, the pattern detector core alone (not including memory, communication interface, and other peripherals) may consume approximately 0.75 W (Xilinx Spartan-3A DSP FPGA chip). The power consumption of the whole camera assembly may be approximately 2.3 W. In comparison, running a similar Adaboost-based pattern detection algorithm on regular hardware (Intel E8600 3.3 GHz CPU providing 1.8 fps at 5 Mpix, 8 bpp, using the Intel OpenCV pattern detector library routine) may incur power consumption of approximately 50 W (CPU only, not including RAM, peripherals, etc.). Hence, performing the pattern detection at the camera assembly level is superior, in terms of power consumption, to doing so using a general-purpose computer.

The content analysis integrated circuit incorporates a pattern detection algorithm specially adapted to leverage the content analysis integrated circuit's hardware capabilities. Optionally, the pattern detection algorithm is optimized for efficient pipeline parallelism, so that all different chip hardware units are efficiently utilized.

The camera assembly, advantageously, may be suitable for both portable and fixed platforms, since the pattern detection algorithm analyzes each frame received from the image sensor separately, so that global motion of the camera does not affect the analysis.

The camera assembly may include an interface module enabling re-programming of the content analysis integrated circuit to detect different object types using the same basic pattern detection algorithm, by an update of its detection rules. Detection rules may be prepared by an external training utility, which is fed a series of images serving as a basis for learning, and derives the detection rules.

There is therefore provided, in accordance with an embodiment, a programmable integrated circuit for imagery data content analysis, the integrated circuit comprising: a programmable detection rule storage unit configured to store a set of detection rules; a communication interface enabling programming of said detection rule storage unit using an external device; and a content analyzer having circuitry for detecting, based on the set of detection rules, a pattern in imagery data, and for outputting a size-reduced version of the imagery data.

In some embodiments, the outputting of the size-reduced version of the imagery data comprises omitting a part of the imagery data which is not part of the detected pattern.

In some embodiments, the outputting of the size-reduced version of the imagery data comprises transmitting a part of the imagery data which is not part of the detected pattern in a quality lower than a quality of imagery data being part of the detected pattern.

In some embodiments, the quality comprises resolution, color depth, bit rate, frame rate, compression, or any combination thereof.
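For illustration only, the size-reduction scheme described in these embodiments may be sketched as follows. The function name, the tuple layout of the region of interest, and the simple subsampling used for the background are illustrative assumptions made for this sketch, not part of the application:

```python
import numpy as np

def size_reduce(image, roi, bg_factor=4):
    """Keep the region of interest at full quality and return the
    background at a coarser resolution, illustrating the transmission
    of non-pattern data 'in a lower quality'.

    image: 2-D numpy array of pixel values.
    roi:   (top, left, height, width) of the detected pattern.
    bg_factor: background resolution-reduction factor.
    """
    top, left, h, w = roi
    roi_pixels = image[top:top + h, left:left + w].copy()
    # Reduce background resolution by simple subsampling.
    background = image[::bg_factor, ::bg_factor].copy()
    return roi_pixels, background

# Example: a 64x64 frame with a 16x16 region of interest.
frame = np.arange(64 * 64, dtype=np.uint16).reshape(64, 64)
roi_px, bg = size_reduce(frame, (8, 8, 16, 16), bg_factor=4)
```

Here the full-quality region and the subsampled background together are far smaller than the original frame, which is the bandwidth saving these embodiments describe.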

There is further provided, in accordance with an embodiment, an image-sensing integrated circuit configured to reduce its data output, the integrated circuit comprising: an image sensor configured to capture an image and produce imagery data; and a content analyzer having circuitry for detecting a region of interest in the imagery data and for outputting, based on the detection, a size-reduced version of the imagery data.

In some embodiments, the integrated circuit is a monolithic integrated circuit.

In some embodiments, said image sensor and said content analyzer are each incorporated in a separate body, and are both stacked into said integrated circuit.

In some embodiments, the circuitry of said content analyzer is configured to detect the region of interest based on detection of a pre-determined pattern in the imagery data.

There is further provided, in accordance with an embodiment, a method for reducing the size of imagery data output from a camera assembly, the method comprising: transmitting imagery data from an image sensor of the camera assembly to a content analysis integrated circuit of the camera assembly; detecting, using the content analysis integrated circuit, a pattern in the imagery data; and producing, based on the detected pattern and using the content analysis integrated circuit, a size-reduced version of the imagery data.

In some embodiments, the method further comprises transmitting the size-reduced version of the imagery data, over an output channel, to a location external to the camera assembly.

In some embodiments, the method further comprises transmitting the imagery data from the image sensor to the content analysis integrated circuit over a high-speed data bus.

In some embodiments, the high-speed data bus is of a transfer speed sufficient to accommodate an entirety of the imagery data in real time.

There is further provided, in accordance with an embodiment, a camera assembly configured to internally reduce its data output, the camera assembly comprising: an image sensor configured to capture imagery data; a content analysis integrated circuit connected to said image sensor and being configured to detect a region of interest in the imagery data and to output, based on the detection, a size-reduced version of the imagery data; and a communication interface connected to said content analysis integrated circuit and being configured to transmit the size-reduced version of the imagery data to a location external to the camera assembly.

In some embodiments, said content analysis integrated circuit is configured to detect the region of interest based on detection of a pre-determined pattern in the imagery data.

In some embodiments, said content analysis integrated circuit is connected to said image sensor using a high-speed data bus.

In some embodiments, said high-speed data bus is of a transfer speed sufficient to accommodate an entirety of the imagery data in real time.

In some embodiments, the imagery data comprises real-time imagery data.

In some embodiments, the camera assembly further comprises a circuit board having thereon at least one component selected from the group consisting of: said image sensor, said content analysis integrated circuit and said communication interface.

In some embodiments, said content analysis integrated circuit is a field-programmable gate array (FPGA).

In some embodiments, said content analysis integrated circuit is an application-specific integrated circuit (ASIC).

In some embodiments, said camera assembly further comprises a second image sensor connected to said content analysis integrated circuit.

BRIEF DESCRIPTION OF THE FIGURES

Exemplary embodiments are illustrated in referenced figures. Dimensions of components and features shown in the figures are generally chosen for convenience and clarity of presentation and are not necessarily shown to scale. The figures are listed below.

Fig. 1 shows a block diagram of a camera assembly;

Fig. 2 shows a semi-pictorial view of an image sensing IC;

Fig. 3 shows a schematic view of a content analyzer;

Fig. 4 shows a schematic view of a pattern detector;

Fig. 5 shows a schematic view of a method for pattern detection;

Fig. 6 shows a schematic view of an in-line accumulative adder;

Fig. 7 shows a schematic view of a line-to-line accumulative adder;

Fig. 8 shows a schematic view of two rectangular regions inside an image; and

Fig. 9 shows a schematic view of a decision making algorithm.

DETAILED DESCRIPTION

An aspect of some embodiments relates to a camera assembly including an image sensor and a content analyzer formed as an integrated circuit (FPGA, ASIC or the like), which advantageously reduces the size of imagery data output from the camera assembly.

Usually, high-resolution surveillance cameras require a fast communication channel, as well as vast computational capabilities at the receiving server, due to the wide bit stream and the real-time processing of high-resolution imagery data.

The present camera assembly, advantageously, offers significant reduction of the communication bandwidth needs when transmitting the captured data from the camera. This is achieved by real-time pattern detection done internally at the content analyzer level, which determines the regions which include significant data. The term "significant data", as referred to herein, relates to imagery data of a region which includes a detected object, while the term "non-significant data" relates to imagery data of a region which does not include a detected object, and therefore may also be referred to as "background". The camera assembly transmits the significant data and optionally a reduced version of the non-significant data to the server, thereby requiring less communication bandwidth.

The reduction of communication bandwidth may also eliminate the need for video compressors and decompressors, which may increase the cost and complexity of the whole system and may reduce the imagery quality.

In addition, the server receives only relevant video data and does not need to perform additional pattern detection by itself. Therefore, the camera assembly may substantially reduce the data processing overhead at the server side and may enable a simple CPU to handle a large number of camera assemblies simultaneously.

The camera assembly is configured to handle real-time pattern detection and transmission of predefined patterns to a central server with minimal loading of the communication channels and of the central server's processing resources.

The content analyzer processes the entire imagery data captured by the image sensor and extracts only significant data for further processing. Alternatively, non-significant data may also be transmitted, but with relatively reduced settings. The camera assembly therefore enables low-capability systems, such as those with low bandwidth and weak CPUs, to process substantially higher imagery resolutions and to reach high-resolution, real-time performance.

The camera assembly, advantageously, may be suitable for both portable and fixed platforms, since the pattern detection algorithm analyzes each frame received from the image sensor separately; the data analysis therefore remains unaffected by the global motion of the camera, rendering any motion-analysis tools redundant. However, some additional processing and/or analysis may still be performed at the server level.

The content analyzer may be incorporated within an integrated circuit (FPGA, ASIC or the like) that performs pattern detection and sends relevant imagery data to the server. The content analyzer may be connected to the image sensor using a high-speed data connection, which may enable a transfer speed sufficient to support the entire real-time data output from the image sensor.

Optionally, the camera assembly may have the content analyzer features incorporated in the same integrated circuit as the image sensor, thus allowing higher performance, lower power consumption and lower manufacture cost.

The content analyzer uses a concurrent pattern detection algorithm designed to exploit the parallelism capabilities of the integrated circuit (FPGA, ASIC or the like).

In addition, the content analyzer may incorporate a pattern detection pre-processing stage for subsampling high-resolution images in order to expedite the pattern detection.

The content analysis integrated circuit incorporates a pattern detection algorithm specially adapted to leverage the content analyzer hardware capabilities. Optionally, the pattern detection algorithm is optimized for efficient pipeline parallelism, so that all different integrated circuit hardware units are efficiently utilized.

In addition, the content analyzer may be configured to support multiple pattern detection processes simultaneously.

In an embodiment, the pattern detection algorithm may be based on Adaboost, a supervised machine learning meta-algorithm (Yoav Freund and Robert E. Schapire, "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting", in Computational Learning Theory: Eurocolt 1995, pages 23-37) and the Haar-like feature pattern characterization methodology (P. Viola, M. Jones, "Robust Real-time Object Detection", IJCV 2001; C. Papageorgiou, M. Oren and T. Poggio, "A General Framework for Object Detection", International Conference on Computer Vision, 1998). Haar-like features are an alternate digital feature set based on Haar wavelets instead of the usual image luminance data.

The pattern detection algorithm may use rectangular-based features based on a Haar-like function. A rectangular-based feature value is determined by two rectangular regions inside an image and calculated as the difference between the sums of pixels in these rectangular regions.

An "integral image", in this regard, may be defined as a two-dimensional matrix having the same size as the original image (see Viola, id). Each element of the integral image contains the sum of all pixel values located on the upper left region of the original image (in relation to the element's position). This allows simple computing of a sum of pixel values inside a rectangular area in the original image, at any position and/or rectangular dimensions, using only four simple arithmetic calculations:

sum = pt4 - pt3 - pt2 + pt1 (1)

where the points ptn belong to the integral image, such that pt1 corresponds to the top left vertex point of the rectangle, pt2 to the top right vertex point, pt3 to the bottom left vertex point and pt4 to the bottom right vertex point.

Accordingly, the integral image II(X) is defined as follows:

II(x, y) = Σ x'≤x, y'≤y X(x', y') (2)

where X(x, y) is a grayscale image and x, y refer to the image column and image row, respectively.
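Equations (1) and (2) can be sketched in a few lines of Python with NumPy. This is a minimal illustration; the function names and the inclusive-rectangle indexing convention are assumptions made for the sketch:

```python
import numpy as np

def integral_image(X):
    """II(x, y) = sum of X(x', y') over all x' <= x, y' <= y (eq. 2).
    X is a 2-D grayscale array indexed [row, column]."""
    return X.cumsum(axis=0).cumsum(axis=1)

def rect_sum(II, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, via the four-point
    rule of eq. (1): pt4 - pt3 - pt2 + pt1."""
    pt4 = II[bottom, right]
    pt3 = II[bottom, left - 1] if left > 0 else 0
    pt2 = II[top - 1, right] if top > 0 else 0
    pt1 = II[top - 1, left - 1] if top > 0 and left > 0 else 0
    return pt4 - pt3 - pt2 + pt1

def haar_feature(II, rect_a, rect_b):
    """Rectangular-based feature value: the difference between the
    pixel sums of two rectangular regions."""
    return rect_sum(II, *rect_a) - rect_sum(II, *rect_b)
```

With the integral image precomputed once per frame, each rectangle sum costs only the four arithmetic operations of eq. (1), regardless of the rectangle's size, which is what makes the feature evaluation cheap enough for real-time hardware.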

Another basic component of the Adaboost algorithm used in the pattern detection algorithm is a weak classifier. The weak classifier has a binary output for determining whether a pattern is found within a given image.

A threshold may be used in order to generate a weak classifier from a given feature.

In addition, a threshold may be determined using image weights, which are recalculated during every round of content analyzer training.
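A thresholded weak classifier of the kind described above may be sketched as follows. The polarity parameter and the weighted-error helper are standard Adaboost conventions assumed for this illustration; the text itself only mentions a threshold:

```python
def weak_classifier(feature_value, threshold, polarity=1):
    """Binary weak classifier: returns 1 ('pattern found') when
    polarity * feature_value < polarity * threshold, else 0."""
    return 1 if polarity * feature_value < polarity * threshold else 0

def weighted_error(values, labels, weights, threshold, polarity=1):
    """Weighted classification error of a candidate threshold, using
    the per-image weights recalculated during each training round."""
    return sum(w for v, y, w in zip(values, labels, weights)
               if weak_classifier(v, threshold, polarity) != y)
```

During training, the threshold (and polarity) minimizing this weighted error is chosen, so misclassified images that received larger weights in earlier rounds influence the choice more strongly.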

Optionally, the pattern detection algorithm may incorporate image normalization to overcome inhomogeneous illumination conditions.

The camera assembly may include an interface module used for configuration of the content analyzer using a procedure referred to as "training". The training procedure may be performed using an external computing device, for re-configuring the content analyzer to detect various object types using the same algorithm by an update of its detection rules. The detection rules may be prepared by a training utility application.

The training utility application may use a set of "positive" images having pattern examples to be detected and a set of "negative" images, which lack pattern examples (background images), to calculate predefined features based on differences between pattern and background images. Predefined features may include a set of unique image properties present in patterns and missing in background images.
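One selection step of such a training utility may be sketched as follows. This is a simplified illustration under assumed data layouts (precomputed feature values per image, binary labels); a real Adaboost utility would also update the image weights and repeat for many rounds:

```python
def select_best_feature(feature_values, labels, weights):
    """For each candidate feature, find the threshold with the lowest
    weighted error on the positive and negative training images, and
    return the most discriminative (feature, threshold) pair.

    feature_values[f][i]: value of feature f on training image i.
    labels[i]: 1 for a 'positive' (pattern) image, 0 for 'negative'.
    weights[i]: current Adaboost weight of image i.
    """
    best = None
    for f, values in enumerate(feature_values):
        for threshold in sorted(set(values)):
            # Classify: value below threshold -> pattern (1).
            err = sum(w for v, y, w in zip(values, labels, weights)
                      if (1 if v < threshold else 0) != y)
            if best is None or err < best[0]:
                best = (err, f, threshold)
    return best  # (weighted_error, feature_index, threshold)
```

The selected feature and threshold together form one detection rule; accumulating such rules over rounds yields the rule set downloaded to the content analyzer.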

The training utility application may automatically select the most discriminative features from a set of Haar-like rectangular features and output a set of pattern detection rules required for specific pattern detection.

Reference is now made to Fig. 1, which shows a camera assembly 100 in a schematic view, according to an embodiment. Camera assembly 100 advantageously includes a content analysis electronic device, also referred to as "integrated circuit" 110.

As schematically shown, camera assembly 100 may include a lens 102 configured to focus an image frame 114 onto an image sensor 104. Image sensor 104 is configured to capture image frame 114 and produce imagery data 122.

Image sensor 104 may be either a black-and-white or a color multi-megapixel sensor, such as a CCD (charge-coupled device) sensor, a CMOS (complementary metal-oxide-semiconductor) sensor and the like. Image sensor 104 contains a sensing surface having an array of photosensitive elements, and may optionally be equipped with an optical system, such as lens 102, for optically focusing a light flow onto the sensing surface of image sensor 104.

Image sensor 104 may be configured to deliver imagery data 122 including pixel data in various spatial resolutions, in accordance with the mode and tuning parameters of image sensor 104.

Image sensor 104 may be configured to operate in binning mode and in window mode.

When operated in binning mode, image sensor 104 may be configured to capture the optical scene with the greatest possible field of view, limited only by the optical parameters, and to reduce the imagery resolution by combining adjacent imager pixels to produce one output pixel. Furthermore, when operated in binning mode, image sensor 104 may be configured to transform (scale down) the whole imagery pixel array into a smaller pixel array having lower-resolution imagery data. Therefore, when image sensor 104 operates in binning mode, the whole image is subject to pattern detection scanning.
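The pixel-combining step of binning mode may be sketched as follows, assuming 2x2 binning by averaging; the text does not fix the binning factor or the combining operation, so both are illustrative choices:

```python
import numpy as np

def bin2x2(pixels):
    """Combine each 2x2 block of imager pixels into one output pixel
    by averaging, halving the resolution in both dimensions."""
    h, w = pixels.shape
    assert h % 2 == 0 and w % 2 == 0
    # Group pixels into 2x2 blocks and average within each block.
    return pixels.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

The binned frame covers the same field of view at a quarter of the pixel count, which is what lets the pattern detector scan the whole scene cheaply before switching to window mode for full-resolution acquisition.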

When operated in window mode, image sensor 104 may be configured to transfer only a sub-array of the pixels of imagery data 122.

For simplicity of presentation, the following discussion refers to the detection of a single pattern pertaining to a single object. However, multiple patterns pertaining to multiple objects may be detected, simultaneously or serially, in the same imagery data, by applying essentially the same unitary process as many times as needed.

Image sensor 104 may transfer imagery data 122 via a high-speed bus 128 to a content analyzer 110. Content analyzer 110 may be configured to detect a pattern in imagery data 122 transferred from image sensor 104. In addition, content analyzer 110 may be configured to calculate the spatial coordinates of the detected object 116, determine ROI 120 (region of interest) based upon the detected pattern, and acquire the imagery data 122 of the detected pattern from image sensor 104 by tuning image sensor 104 with the above-noted coordinates. In addition, content analyzer 110 may be configured to create a size-reduced version of the imagery data based on the coordinates of detected object 116.

In addition, content analyzer 110 may use the high-speed bus 128 to send control commands and tuning parameters to the image sensor 104.

Content analyzer 110 may be configured to operate in pattern detection mode and pattern acquisition mode.

When operated in pattern detection mode, content analyzer 110 configures image sensor 104 to operate in binning mode, receiving the whole pixel array of imagery data 122.

When operated in pattern acquisition mode, content analyzer 110 may configure image sensor 104 to operate in window mode, receiving only a sub-image of image frame 114 according to the coordinates of the detected pattern. In case more than one pattern is detected in the same image frame 114, additional sub-images may be captured.

This makes it possible to acquire the pattern image with the maximum possible spatial resolution, limited only by the optical parameters of lens 102 and the resolution of image sensor 104. An additional limitation can result from the bandwidth capabilities of communication interface 112, communication channel 130 or central server 128 to support the required amount of data transfer.

A communication interface 112 is connected to content analyzer 110 and configured to transmit the size-reduced version of the imagery data to a location external to camera assembly 100, such as central server 128.

Optionally, camera assembly 100 includes one or more lenses in addition to lens 102, for example, lens 106, which may be similar to lens 102 and configured to focus an image frame 114 onto a second image sensor 108, which may be similar to image sensor 104 and configured to capture image frame 114 and produce imagery data 122.

Communication interface 112 may be further configured to transfer commands from a location external to camera assembly 100, via a communication channel 130, to content analyzer 110. In this regard, communication channel 130 may be configured for inbound transmission, not only for output. The external location may be, for example, a central server 128. Communication interface 112 may transfer digital data containing, for instance, detected object 116 coordinates and imagery data 122 from content analyzer 110 to an external location, for example a central server 128.

Reference is now made to Fig. 2, which shows an image-sensing chip (also "integrated circuit") 200, according to an embodiment. Image-sensing chip 200 advantageously includes a content analyzer 210 incorporated into the same integrated circuit as an image sensor 204 and, optionally, a communication interface 212; this allows reaching even better performance. Alternatively, content analyzer 210 and image sensor 204 may each be incorporated in a separate body of the integrated circuit (namely, in physically-separated integrated circuits inside the main integrated circuit), stacked together inside image-sensing chip 200, such as in horizontal layers or differently.

Content analyzer 210, image sensor 204 and/or communication interface 212 are optionally similar in function to content analyzer 110, image sensor 104 and/or communication interface 112, respectively, of Fig. 1. Other elements shown in this figure, such as a lens 202, an image frame 214, a detected object 216, a region of interest 220, a high-speed data bus 228, a data channel 226 and/or an output channel 230, may be similar to the respective elements shown in Fig. 1, which bear corresponding numbers in the 100's range and are discussed above. One or more of these elements are optionally adjusted, in form and/or in size, to conform to the requirements of being incorporated in a single integrated circuit chip, such as image-sensing chip 200.

Reference is now made to Fig. 3, which shows a block diagram of content analyzer 110 of Fig. 1 or content analyzer 210 of Fig. 2, according to an embodiment. In this figure, the content analyzer is referenced with the number 310.

A manager 334 may be configured to provide synchronization, general control and/or parameter distribution within and between different elements of content analyzer 310, in accordance with the desired operation scenario. In addition, manager 334 may be configured to receive commands from a command parser 346, which may in turn be configured to interpret commands received from an external source, such as central server 128 appearing in Fig. 1, over an output channel 330.

A shared memory 332 may be configured to store binary data including imagery data received from imagery data pre-processor 336 and downscaled imagery data received from spatial scaler 338.

Additionally, shared memory 332 may be configured to supply downscaled imagery data to pattern detector 340 and a size-reduced version of the imagery data to pattern transmitter 342. Shared memory 332 may incorporate one or more direct memory access (DMA) controllers configured to expedite data transfer. Usually, DMA controllers allow effective concurrent read and write operations on each channel simultaneously. The address space of shared memory 332 may be shared among all DMA channels in order to provide data sharing. In addition, manager 334 may supply general tuning parameters, including initialization parameters, to shared memory 332.

A pattern acquisitioner 344 may be configured to transfer tuning parameters from manager 334 to an image sensor 304, in order to tune image sensor 304 according to a desired sensing mode.

In addition, manager 334 may be configured to deliver tuning parameters to pattern acquisitioner 344, including image sensing resolution parameters such as window height and window width, shutter speed, a desired image sensing mode such as binning mode or window mode, a start/stop command and/or the like.

Imagery data pre-processor 336 may be configured to receive imagery data from image sensor 304. Image sensor 304 may transfer color imagery data, a combined representation of color pixel values in which color and luminance information are combined using a color format such as the Bayer RGB format. Since imagery data processing may be done using only luminance information, color imagery data may require additional color and luminance segregation.

In addition, manager 334 may deliver tuning parameters to imagery data pre-processor 336, including the imagery resolution specified by width and height (in number of pixels) and the shared memory 332 address for the separated luminance and chrominance imagery data storage location. Imagery data pre-processor 336 may be configured to convert imagery data into a separated luminance and chrominance format in order to support consecutive imagery data processing.

Imagery data pre-processor 336 may transfer separated luminance and chrominance imagery data to a shared memory 332 which may be configured to store binary data and may include one or more DMA channels.
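The luminance/chrominance separation performed by the pre-processor can be illustrated as follows. The text does not specify the exact conversion, so this minimal sketch assumes imagery data already demosaiced to RGB and uses the common BT.601 luma weights purely as an illustration; the hardware may use a different transform.

```python
def separate_luma_chroma(rgb_pixels):
    """Split RGB pixels into a luminance plane and two chrominance planes.

    Uses BT.601 luma weights as an illustrative assumption; the actual
    conversion used by the pre-processor is not specified in the text.
    """
    luma, cb, cr = [], [], []
    for r, g, b in rgb_pixels:
        y = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma
        luma.append(y)
        cb.append(0.564 * (b - y))              # blue-difference chroma
        cr.append(0.713 * (r - y))              # red-difference chroma
    return luma, cb, cr
```

The luminance list models the data consumed by the detection path, while the chrominance lists model the data kept only for the size-reduced output.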

Additionally, imagery data pre-processor 336 may include the image sensor "glue-logic" components (not shown) providing low-level functions like clocking and/or synchronization of image sensor 304.

A spatial scaler 338 may be configured to provide tunable spatial resolution scaling of imagery data. In addition, manager 334 may deliver tuning parameters to spatial scaler 338, including a scaling factor. The scaling factor may optionally have a range of 20% to 100% of the input imagery data resolution. The resulting downscaled imagery data may be stored in shared memory 332.
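The spatial scaler's behavior can be sketched as below. The scaling filter used by the hardware is not specified in the text, so nearest-neighbour sampling is assumed here only for illustration; the 20%-100% factor range follows the text.

```python
def downscale(plane, factor):
    """Downscale a 2-D pixel list by `factor` (0.2 <= factor <= 1.0).

    Nearest-neighbour sampling is an illustrative assumption; the actual
    scaling filter of spatial scaler 338 is not specified in the text.
    """
    if not 0.2 <= factor <= 1.0:
        raise ValueError("scaling factor out of the 20%-100% range")
    src_h, src_w = len(plane), len(plane[0])
    dst_h = max(1, int(src_h * factor))
    dst_w = max(1, int(src_w * factor))
    # Map each destination pixel back to its nearest source pixel.
    return [[plane[int(y / factor)][int(x / factor)]
             for x in range(dst_w)] for y in range(dst_h)]
```

A 50% factor halves both dimensions, so the pattern detector operates on a quarter of the original pixel count.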

A pattern transmitter 342 may be configured to transfer the detected pattern's coordinates and the size-reduced version of the imagery data to a communication interface 312, optionally as a composite data stream. The detected pattern's coordinates may be supplied by manager 334, and the size-reduced version of the imagery data may be supplied by shared memory 332.

In addition, manager 334 may be configured to deliver tuning parameters to pattern transmitter 342, including a start/stop data transfer command, a graphical data segment address in shared memory 332 and/or the detected pattern's coordinates.

Pattern detector 340, which is further discussed below, is a major component of content analyzer 310. Pattern detector 340 may be configured to detect a pattern in imagery data and calculate the detected object's spatial coordinates.

In addition, manager 334 may deliver tuning parameters to pattern detector 340, including a CCT (classifier constants table), PC SUM (sum of positive constants), CCT LENGTH (CCT length) and/or start/stop commands.

Pattern detector 340 may be configured to transfer acknowledge signals to manager 334, including a current state flag (CCF) carrying the busy/ready status of pattern detector 340, a pattern found flag (PFF), and/or a found pattern's coordinates listing (FPCA).

Reference is now made to Fig. 4, which shows a block diagram of pattern detector 340 of Fig. 3, referred to here as pattern detector 440, according to an embodiment.

Pattern detector 440 may incorporate a pattern detection algorithm based on the AdaBoost algorithm. The algorithm is presently adapted to support effective parallelism and pipelined operation over an ASIC/FPGA based platform. The algorithm adaptation increases performance effectiveness and results in a compact footprint and low power consumption.

A shared memory 432, which is similar to shared memory 332 in Fig. 3, may be configured to deliver the luminance part of the imagery data to an imagery data storage RAM (Random Access Memory) 474 which may be configured to store a segment of imagery data.

Imagery data storage RAM 474 may be configured to re-arrange imagery data into sub image segments having a certain pixel size, such as 32 pixels wide and 32 pixels high, which contain pixel values equal to the source imagery data, ordered from left to right and from top to bottom of the sub image.

A scheduler 486 may be configured to enable or disable the data writing process by sending a data write enable signal.

In order to reach a high level of parallelization in the calculation process, a wide data representation may be required.

A data sequencer 450 may be configured to receive, convert and re-arrange the imagery data, as a set of sub image segments, into a sequence of data words. Each data word contains a multi-pixel luminance value, providing the wide data representation required to achieve a high level of parallelization.
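The word packing performed by the data sequencer can be sketched as below. The word width is not fixed by the text; four 8-bit pixels per word is an illustrative assumption.

```python
# Number of 8-bit luminance values packed per data word; the actual
# hardware word width is not specified in the text.
PIXELS_PER_WORD = 4

def pack_words(luma):
    """Pack a flat list of 8-bit luminance values into multi-pixel words."""
    words = []
    for i in range(0, len(luma), PIXELS_PER_WORD):
        word = 0
        for j, p in enumerate(luma[i:i + PIXELS_PER_WORD]):
            word |= (p & 0xFF) << (8 * j)   # little-endian pixel order
        words.append(word)
    return words
```

Each word then lets the downstream accumulative adders consume several pixels per cycle rather than one.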

Two exact copies of the set of sub image segments may be passed to two parallel paths of the calculation flow. The first calculation path may be configured to calculate an integral image and extract basic points, while the second calculation path may be configured to extract and calculate normalization parameters. Both paths may be configured to work simultaneously, and therefore exploit the ASIC/FPGA hardware capabilities.

Reference is now made to Fig. 6, which shows a schematic view of in-line accumulative adder 462 appearing in Fig. 4. In-line accumulative adder 462 may be configured to calculate a plurality of cumulative sums over each multi-pixel word, such that each result (PPi) is the sum of a pixel value (Pi) and the preceding result (excluding PP1).

Reference is now made back to Fig. 4. A delay equalizer 464 may be used to create data delay equalization and to latch the result of the previous iteration of in-line accumulative adder 462. The equalized signal (PPi) may be transferred to a line-to-line accumulative adder 466, which may be configured to calculate a plurality of cumulative sums of neighboring multi-pixel words.

Reference is now made to Fig. 7, which shows a schematic view of line-to-line accumulative adder 466 appearing in Fig. 4.

Line-to-line accumulative adder 466 may be configured to sum both the current result from in-line accumulative adder 462 and the previous result latched by delay equalizer 464.

PN1 = PP1 (7)

PN2 = PP2 + PN1 (8)

PN3 = PP3 + PN2 (9)
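The two accumulation stages can be sketched in software as follows: the in-line adder forms running sums along each row (the PP values), and the line-to-line adder of equations (7)-(9) then accumulates those row sums downward (the PN values), which together yield an integral image.

```python
def inline_accumulate(row):
    """PP_i = P_1 + ... + P_i: running sum along one line (adder 462)."""
    pp, total = [], 0
    for p in row:
        total += p
        pp.append(total)
    return pp

def integral_image(plane):
    """PN_1 = PP_1, PN_k = PP_k + PN_{k-1}: line-to-line accumulation
    (adder 466) applied to the in-line sums, giving an integral image."""
    result, prev = [], None
    for row in plane:
        pp = inline_accumulate(row)
        pn = pp if prev is None else [a + b for a, b in zip(pp, prev)]
        result.append(pn)
        prev = pn
    return result
```

In hardware the two stages run pipelined over multi-pixel words; this sequential sketch only shows the arithmetic they implement.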

Reference is now made back to Fig. 4. The result (PNi) of line-to-line accumulative adder 466 may be passed to a multi-port RAM 468, which may include an input port and multiple output ports. Multi-port RAM 468 may be configured to provide simultaneous access to the stored integral image.

A detection rule storage 484 may be configured to store a variety of binary data including CCT values, wherein the CCT may include a plurality of detection rule data items including PC (positive constants), NC, THDf, BBR, BBL, BTR, BTL, ABR, ABL, ATR and ATL.

Reference is now made to Fig. 8, which shows a schematic view of two rectangular regions inside an image. A basic points extractor 470, appearing in Fig. 4, may be configured to acquire the basic points of the integral image from detection rule storage 484 appearing in Fig. 4. The basic points may include ATL 822, ATR 824, ABL 826 and ABR 828, the coordinates of rectangle A 820; and BTL 842, BTR 844, BBL 846 and BBR 848, the coordinates of rectangle B 840.
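The purpose of the four basic points of each rectangle can be illustrated with the standard integral-image identity: the pixel sum inside a rectangle follows from the integral-image values at its four corners. The sign convention below is the usual one and is an assumption, since the text does not spell it out.

```python
def rect_sum(atl, atr, abl, abr):
    """Sum of pixels inside a rectangle from its four integral-image
    corner values (top-left, top-right, bottom-left, bottom-right).

    Standard integral-image identity, assumed here; the text does not
    state the exact sign convention used by the hardware.
    """
    return abr - atr - abl + atl
```

For example, in the integral image of a 3x3 all-ones image the corner values 1, 3, 3, 9 recover the sum 4 of the inner 2x2 region.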

Reference is now made back to Fig. 4. At the second calculation path, an exponentiator 452 may be configured to raise a pixel value to the second power. The resulting value may pass to a calculation path which may be partially similar to the first calculation path and may be executed simultaneously with it.

An in-line accumulative adder 454, which is similar to in-line accumulative adder 462, may be configured to calculate a plurality of cumulative sums over each multi-pixel word, which results in a sum of each pixel with its neighbor.

A delay equalizer 456, which is similar to delay equalizer 464, may be used to create data delay equalization and to latch the result of the previous iteration of in-line accumulative adder 454. The delayed signal may be transferred to a line-to-line accumulative adder 458, which may be configured to calculate a plurality of cumulative sums of neighboring multi-pixel words.

Additionally, line-to-line accumulative adder 458 may be configured to sum both the current result from in-line accumulative adder 454 and the previous result latched by delay equalizer 456.

The result of line-to-line accumulative adder 458 may be delivered to a normalization parameters extractor 460, which may be configured to calculate normalization parameters, namely a mean value (μ) and a standard deviation value (σ).
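One way the extractor could derive these parameters is sketched below: the first path supplies the sum of pixel values and the squared path the sum of squares, from which the mean and standard deviation follow directly. This is a minimal sketch of the arithmetic, not the hardware's exact datapath.

```python
import math

def normalization_params(sum_x, sum_x2, n):
    """Mean (mu) and standard deviation (sigma) of n pixels, computed
    from their sum and sum of squares, as supplied by the two
    accumulation paths."""
    mean = sum_x / n
    variance = sum_x2 / n - mean * mean
    return mean, math.sqrt(max(variance, 0.0))  # clamp rounding noise
```

Computing σ this way needs only one pass over the data, which suits the pipelined accumulation described above.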

A normalizer 472 may be configured to normalize basic points delivered by basic points extractor 470, using the normalization parameters passed by normalization parameters extractor 460, in accordance with the following equation:

NP = (P - μ)/σ (11)

where P is any of the basic point values PN_ATL, PN_ATR, PN_ABL, PN_ABR, PN_BTL, PN_BTR, PN_BBL and PN_BBR, and NP is the corresponding normalized value PN^N_ATL, PN^N_ATR, PN^N_ABL, PN^N_ABR, PN^N_BTL, PN^N_BTR, PN^N_BBL and PN^N_BBR.

The resulting normalized basic point values are applied to a differentiator 482, which may be configured to calculate the classifier rectangles difference in accordance with the following equation:

D = (PN^N_ABR + PN^N_ATL - PN^N_ATR - PN^N_ABL) - (PN^N_BBR + PN^N_BTL - PN^N_BTR - PN^N_BBL) (12)

where D corresponds to a feature raw value S, described in the following equations (13, 14):

S = Σ_{y1<y<y2} Σ_{x1<x<x2} Xn(x, y) (13)

Xn = (X - μ)/σ (14)

The classifier rectangles difference value (referred to as D in equation 12) may be applied to a comparator 480, which may be configured to compare it with the THDf value received from detection rule storage 484. Comparator 480 may be additionally configured to receive values from detection rule storage 484 and output an appropriate value (referred to as RES in equation 15) to accumulator 478 in accordance with the comparison result.

The comparator 480 output value is delivered to accumulator 478, where the values are cyclically accumulated (referred to as ACC in equation 15) and, at the same time, the sum of the remaining positive constants (referred to as PS in equation 16) is calculated, wherein the starting value of the sum of the remaining positive constants (referred to as PC SUM in equation 16) is received from detection rule storage 484.

ACC = ACC + RES (15)

wherein the starting value of ACC is 0.

PS = PS - PC (16)

wherein the starting value of PS is PC SUM.
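The joint update of equations (15) and (16) can be sketched as follows: RES values from the comparator are accumulated into ACC, while PS counts down from its PC SUM starting value as each positive constant is consumed.

```python
def accumulate(res_values, pc_values, pc_sum):
    """Apply the ACC/PS updates of equations (15) and (16) step by step
    and return the (ACC, PS) value after each step."""
    acc, ps = 0, pc_sum              # ACC starts at 0, PS at PC SUM
    trace = []
    for res, pc in zip(res_values, pc_values):
        acc += res                   # ACC = ACC + RES   (15)
        ps -= pc                     # PS  = PS  - PC    (16)
        trace.append((acc, ps))
    return trace
```

The pair (ACC, PS) after each step is exactly what the decision maker described below consumes.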

Accumulator 478 may be additionally configured to receive, from scheduler 486, a reset signal for the cyclic accumulation value and for the sum of the remaining positive constants.

Reference is now made to Fig. 9, which shows a schematic view of a decision making algorithm.

A decision maker 476 appearing in Fig. 4 may be configured to receive the cyclically accumulated value and the sum of the remaining positive constants, and to output a PDDR (pattern detection decision result), which may take "FOUND", "CONTINUE" or "STOP" values, to scheduler 486 appearing in Fig. 4.

Scheduler 486 may include a counter which may be configured to be reset by a start command and to be incremented by the "FOUND" or "STOP" values of the PDDR.

Scheduler 486 may be configured to output result signals including: a CCF (current state flag), which is active while the PDDR equals "CONTINUE" and inactive otherwise; a PFF (pattern found flag), which is active while the PDDR equals "FOUND" and inactive otherwise; a FPCA (found pattern's coordinates listing), which equals the latched value of the scheduler 486 counter; the scheduler 486 counter value; and a reset signal for the cyclic accumulation value and for the sum of the remaining positive constants.
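The precise decision rule appears only in Fig. 9; the sketch below assumes a common early-termination scheme in which PS, the sum of the remaining positive constants, bounds how much ACC can still grow: if even that bound cannot reach the stage threshold, the candidate is rejected early. The threshold parameter `thd` is hypothetical and stands in for a value the hardware would take from the detection rules.

```python
def decide(acc, ps, thd, last_stage):
    """Return a PDDR value ("FOUND", "STOP" or "CONTINUE").

    Assumed early-termination rule: ACC + PS is an upper bound on the
    final accumulated value, since PS sums the positive constants not
    yet consumed. The actual rule is shown only in Fig. 9.
    """
    if acc + ps < thd:
        return "STOP"                # threshold can no longer be reached
    if last_stage:
        return "FOUND" if acc >= thd else "STOP"
    return "CONTINUE"
```

Early rejection of this kind is what lets most candidate windows be discarded after only a few classifier evaluations.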

In addition, detection rule storage 484 may be configured to receive the scheduler 486 counter value.

Reference is now made to Fig. 5, which shows a flow chart of a method 500 for reducing the size of imagery data output from a camera assembly, according to an embodiment.

In a block 502, the manager tunes detection parameters, providing basic tuning and the required preliminary condition setup for different parts of the content analyzer.

In a block 504, the command parser checks if external commands have arrived from the central server; if external commands have arrived, the pattern detection parameters are updated accordingly.

In a block 506, the pattern acquisitioner configures the image sensor to operate in binning mode, the imagery data pre-processor receives imagery data from the image sensor, and produces and saves downscaled imagery data to the shared memory. The manager configures the spatial scaler to a desired scaling factor, and the pattern detector performs pattern detection over downscaled imagery data.

In a block 508, the pattern acquisitioner configures the image sensor to operate in window mode with coordinates and resolution according to the detected pattern coordinates and size. The imagery data pre-processor is configured to store a sub image into shared memory, while the pattern transmitter transmits the size-reduced version of the imagery data and the detected pattern's coordinates to the central server.

In a decision block 510, it is decided whether multiple patterns have been detected. If so, method 500 returns back to block 508. If, on the other hand, multiple patterns have not been detected, method 500 proceeds to a decision block 512.

In decision block 512, it is decided whether an external update is available from the central server through the command parser. If an external update is available, method 500 proceeds to decision block 514. If not, method 500 returns to block 506.

In a decision block 514, it is decided whether an external update includes a command to stop pattern detection. If the external update includes such a command, then method 500 stops. If the external update does not include a command to stop pattern detection, then method 500 returns to block 502.
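The control flow of blocks 502 through 514 can be sketched as a loop, with the blocks reduced to hypothetical callables standing in for the hardware operations; only the branching structure is taken from the flow chart.

```python
def run_method_500(tune, check_commands, detect, transmit,
                   more_patterns, update_available, stop_requested):
    """Loop structure of method 500; all callables are hypothetical
    stand-ins for the blocks of Fig. 5."""
    while True:
        tune()                        # block 502: tune detection parameters
        check_commands()              # block 504: poll external commands
        while True:
            detect()                  # block 506: detect over downscaled data
            transmit()                # block 508: transmit reduced data
            while more_patterns():    # decision 510: repeat 508 per pattern
                transmit()
            if update_available():    # decision 512: external update arrived?
                break                 # yes -> go check for a stop command
            # no update -> return to block 506
        if stop_requested():          # decision 514: stop pattern detection?
            return
        # otherwise re-tune and continue (back to block 502)
```

The nesting mirrors the three return edges of the flow chart: 510 back to 508, 512 back to 506, and 514 back to 502.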

In the description and claims of the application, each of the words "comprise" "include" and "have", and forms thereof, are not necessarily limited to members in a list with which the words may be associated.