Title:
A SYSTEM AND METHOD FOR DETERMINING THE LOCATION AND OCCUPANCY OF WORKSPACES
Document Type and Number:
WIPO Patent Application WO/2017/072158
Kind Code:
A1
Abstract:
A system and method are provided for identifying workspaces in an area and for determining the occupancy of the workspaces. A vision sensor arrangement is used for capturing an image or images of the area over time. For the images, occupancy information over time in the area is analyzed so that workspace regions can be automatically recognized based on the historical occupancy information. The current occupancy of the workspace regions can be determined based on current or recent occupancy information.

Inventors:
PANDHARIPANDE ASHISH VIJAY (NL)
CAICEDO FERNANDEZ DAVID RICARDO (NL)
Application Number:
PCT/EP2016/075764
Publication Date:
May 04, 2017
Filing Date:
October 26, 2016
Assignee:
PHILIPS LIGHTING HOLDING BV (NL)
International Classes:
G06K9/00; G05B15/02
Domestic Patent References:
WO2015063479A1 2015-05-07
Foreign References:
US20080008360A1 2008-01-10
US20140093130A1 2014-04-03
US20130113932A1 2013-05-09
DE4308983A1 1994-09-22
US8228382B2 2012-07-24
Attorney, Agent or Firm:
TAKKEN, Robert, Martinus, Hendrikus et al. (NL)
Claims:
CLAIMS:

1. A system for identifying workspaces in an area and for determining the occupancy of the workspaces, comprising:

a vision sensor arrangement (30,32) for capturing an image or images of the area over time;

an image processor (30a,32a) adapted to determine occupancy information in the area;

an occupancy processor (36) adapted to analyze the occupancy information over time, to:

identify workspace regions, based on the historical occupancy information; and

identify current occupancy of the workspace regions based on current or recent occupancy information,

wherein the image processor (30a,32a) is provided at the respective vision sensor such that the vision sensor and the image processor form a vision sensor module, and the occupancy processor comprises a central processor.

2. A system as claimed in claim 1, wherein the image processor is adapted to generate an image which comprises a block pixel image of the area, each block pixel representing an area region and providing occupancy information, and wherein the occupancy processor (36) is adapted to identify workspace regions, based on the historical occupancy information which includes historical occupancy movement information within the area regions.

3. A system as claimed in claim 2, wherein the image processor (30a,32a) is adapted to determine a level of change of the images over time within each area region, and to derive the occupancy information based on the level of change.

4. A system as claimed in claim 2 or 3, wherein the image processor (30a,32a) is adapted to classify movements and movement velocities for the detected occupant or occupants.

5. A system as claimed in any one of claims 2 to 4, wherein the image processor (30a,32a) is adapted to provide a metric for each area region of the image which represents the current or recent level of movement.

6. A system as claimed in any one of claims 2 to 5, wherein the image processor (30a,32a) is adapted to determine a level of change of the images over time in each area region based on the variance of the location of an occupant within the area region over a preceding time period.

7. A system as claimed in one of claims 2 to 6, wherein the processor is adapted to identify quasi-static occupancy of an area region and moving occupancy, and is adapted to identify workspace regions based on historical quasi-static occupancy identifications.

8. A system as claimed in any preceding claim, comprising sensors for sensing signals from portable devices of an occupant of the area to assist in the location of workspace regions.

9. A system as claimed in any preceding claim, further comprising a request processor which is adapted to receive a request for identification of a vacant workspace and to allocate a non-occupied workspace region.

10. A lighting system, comprising:

a set of luminaires for mounting in an area;

a lighting controller; and

a system for determining the occupancy of workspaces as claimed in any preceding claim,

wherein the lighting controller is adapted to control the lighting in dependence on the determined occupancy.

11. A method for identifying workspaces in an area and for determining occupancy of the workspaces, comprising:

capturing an image or images of the area over time using a vision sensor arrangement;

using an image processor which is provided at the vision sensor, such that the vision sensor and the image processor form a vision sensor module, to determine occupancy information in respect of the area;

using a central processor to analyze the occupancy information over time thereby to:

identify workspace regions, based on the historical occupancy information; and

identify current occupancy of the workspace regions based on current or recent occupancy information.

12. A method as claimed in claim 11, comprising using the image processor to generate an image which comprises a block pixel image of the area, each block pixel representing an area region and providing occupancy information, and using the occupancy processor (36) to identify workspace regions, based on the historical occupancy information which includes historical occupancy movement information within the area regions.

13. A method as claimed in claim 12, comprising identifying quasi-static occupancy of an area region and moving occupancy, and identifying workspace regions based on historical quasi-static occupancy identifications.

14. A method as claimed in claim 11, 12 or 13, comprising sensing signals from portable devices of an occupant of the area and using the sensed signals to assist in the location of workspace regions.

15. A method as claimed in any one of claims 11 to 14, further comprising receiving a request for identification of a vacant workspace and allocating a non-occupied workspace region.

16. A computer program comprising code means which is adapted, when the program is run on a computer, to implement the method of any one of claims 11 to 15.

Description:
A system and method for determining the location and occupancy of workspaces

FIELD OF THE INVENTION

The present invention relates generally to the field of occupancy detection, and more particularly to an occupancy detection system and a corresponding method suitable for determining the number of people within a workspace area.

BACKGROUND OF THE INVENTION

It is known to use occupancy detection as an input signal for a control system. For example, occupancy detection can be used to automatically control lighting and ventilation systems, or heating, ventilation, and air conditioning (HVAC) systems. Occupancy detectors are used to maximize correct, efficient and timely delivery of light and air in the environment.

A main concern of occupancy detectors for lighting control in such presence or occupancy controlled systems is to ensure that lighting is promptly switched on when a person enters a given environment. Cheap and efficient solutions for this purpose include passive infrared (PIR) sensors, radar and sonar. These are able to quickly detect movements in the environment.

One limitation of this type of sensing approach lies in the lack of sensitivity to small movements. In office environments where workers can remain largely immobile for long periods of time, e.g. when reading or typing, these sensors may erroneously signal an empty room to the control system. This is because such sensors signal an empty room after no movement has been recorded over the last period, the duration of which can usually be set by the user. This error is very disruptive and frustrating, for example when lighting is mistakenly switched off.

Furthermore, this type of system also does not enable a count of the number of people in an area to be provided.

There are a number of applications in which a count of people over a particular area is required. For example, in marketing analysis, a people count is needed as one of the input data for analysis. For space optimization, a count of people in (pseudo) real time is needed to identify temporal and spatial usage patterns. Traditional methods of counting people, for instance as used in space management, often employ a human to manually count the number of people in different rooms of a building. However, this method is inaccurate since it provides only a time snapshot, and it is expensive due to the manpower involved. A simple snapshot cannot reliably be used to identify workspace occupancy.

Other approaches utilize sensors, such as seat sensors to estimate the number of occupants seated. However, the use of such dedicated sensor modalities is expensive since extensive modification to the furniture or region is required.

There are systems which perform a people counting function based on image analysis, for example by using a single vision sensor, and then processing the entire image. This type of processing is suitable in surveillance applications, but not in indoor applications where privacy aspects (both regulatory and perceived) are a concern. In other works, multiple vision sensors have been used, primarily in outdoor surveillance applications, again by considering techniques wherein entire images are used for people counting.

For example, a people counting approach using a single vision sensor has been suggested in US 8 228 382. The image processing involves analyzing an edge pattern of a general shape of a human head, and utilizing it to search a captured image in a grey scale and hue of the captured image, and detecting a human head based on detection of a strong edge gradient around the edge pattern.

The processing of images to identify people is computationally intensive and it requires the transmission of large amounts of data, including data which has personal or private content.

There is thus a need for a system which avoids complicated and expensive image processing and which also does not rely on the use of personal data or require large volumes of data to be analyzed.

WO 2015/063479 discloses a system for controlling room lighting which includes occupancy detection. It uses presence detection and motion detection to work out where workstations are located, so that lighting can then be controlled accordingly.

SUMMARY OF THE INVENTION

The invention is defined by the claims.

According to examples in accordance with an aspect of the invention, there is provided a system for identifying workspaces in an area and for determining the occupancy of the workspaces, comprising: a vision sensor arrangement for capturing an image or images of the area over time;

an image processor adapted to determine occupancy information in the area; an occupancy processor adapted to analyze the occupancy information over time, to:

identify workspace regions, based on the historical occupancy information; and

identify current occupancy of the workspace regions based on current or recent occupancy information,

wherein the image processor is provided at the respective vision sensor such that the vision sensor and the image processor form a vision sensor module, and the occupancy processor comprises a central processor.

This system makes use of occupancy information based on captured images over time. The information is used both to detect the presence of an occupant, but also to derive the location of workspaces based on historical information. In this way, the system does not need to be programmed with the workspace locations, but it can instead learn where they are. A workspace may be identified where there is quasi-static movement, by which is meant that there is local movement within that region of the image, but the occupancy in the region stays static for prolonged times. This corresponds to a user sitting at a desk but moving by small amounts.

The occupancy information for example comprises information about an occupant in the form of one or more attributes such as location, speed, orientation etc.

By processing regions of an image, the amount of data that needs to be transmitted to and processed by the processor is reduced. Each region for example corresponds in real space to a dimension of a person (e.g. when viewed from above, the area of a person's head). Thus, the different regions are regions where a person may be seated. A person may occupy several such regions (i.e. they are smaller than the size of a person), or a person may occupy a space smaller than a single region. The amount of data is still much less than for a high resolution pixelated image. The regions form a low resolution block pixel image. This also means that the block pixel image conveys less sensitive information, such as the identity of people, or the content of documents they may be carrying.

By providing the image processor and the vision sensor as a module, image processing is applied to the images at the source to remove some or all content with privacy implications. The vision sensor module outputs information which has less personally sensitive content, and this is then made available to and used by the central processor.

The vision sensor module may be designed not to have a raw image output.

In this way, the vision sensor module is designed so that no personally sensitive information can be output.

The occupancy processor can be distributed for example such that the identifying of workspaces can be done on a different device than the identifying of current occupancy. For example, a central device identifies where the workspaces are and the local processor receives information on these regions and then determines if they are free.

The image processor is for example adapted to generate an image which comprises a block pixel image of the area, each block pixel representing an area region and providing occupancy information in respect of that area region, and wherein the occupancy processor (36) is adapted to identify workspace regions, based on the historical occupancy information which includes historical occupancy movement information within the area regions.

Thus, the area regions define coarse locations where a user may be present, and movement within the regions is used to identify whether there is small movement within the region or larger movement across or through the region. When a region where there is movement changes, this indicates movement across a larger area, rather than movement within an area. In this way, a user may be tracked.

The processor is for example adapted to determine a level of change of the images over time within each region, and to derive the occupancy information based on the level of change.

The current or recent change level information may for example relate to movement over a previous number of seconds, minutes or hours, whereas the historical information for example relates to previous days, weeks or even months.

The processor may also be adapted to classify movements and movement velocities for the detected occupants. For example, there may be static, dynamic or quasi-dynamic events.

The vision sensor arrangement may comprise a plurality of vision sensors.

They may have overlapping or non-overlapping fields of view. Since the system does not need to be programmed with the workspace locations, it may be that a field of view of one vision sensor covers no workspace regions, or one, or many.

The processor is for example adapted to provide a metric for each determined occupant based on the current or recent level of movement.

In this way, the processor generates a block map of movement information. It does not need to provide any personally sensitive information about any particular occupants of the area.

The processor is for example adapted to identify quasi-static occupancy of a region and moving occupancy, and is adapted to identify workspace regions based on historical quasi-static occupancy identifications.

The system may comprise RF sensors for sensing signals from portable devices of an occupant of the area to assist in the location of workspace regions.

The system may further comprise a request processor which is adapted to receive a request for identification of a vacant workspace and to allocate a non-occupied workspace region. If there are multiple non-occupied workspace regions, they may be allocated based on proximity, or based on the level of already existing crowding. The allocation is conveyed to the user, and the system records that the allocation has been made (so that the same workspace will not be allocated immediately afterwards).

Examples in accordance with another aspect provide a lighting system, comprising:

a set of luminaires for mounting in an area;

a lighting controller; and

a system for determining the occupancy of workspaces as defined above, wherein the lighting controller is adapted to control the lighting in dependence on the determined occupancy.

By integrating the occupancy system into a lighting system, a shared infrastructure may be used. The lighting system may for example make use of a network, and the vision sensors and processor of the occupancy system may communicate using the same network. The vision sensors may for example be mounted at the luminaires.

The lighting controller may also control the lighting in response to changes in the workspace locations.

Examples in accordance with another aspect of the invention provide a method for identifying workspaces in an area and for determining the occupancy of the workspaces, comprising:

capturing an image or images of the area over time using a vision sensor arrangement;

using an image processor which is provided at the vision sensor, such that the vision sensor and the image processor form a vision sensor module, to determine occupancy information in respect of the area;

using a central processor to analyze the occupancy information over time thereby to:

identify workspace regions, based on the historical occupancy information; and

identify current occupancy of the workspace regions based on current or recent occupancy information.

The image processor may be used to generate an image which comprises a block pixel image of the area, each block pixel representing an area region and providing occupancy information, and using the occupancy processor (36) to identify workspace regions, based on the historical occupancy information which includes historical occupancy movement information within the area regions.

Quasi-static occupancy of a region and moving occupancy may be identified, and workspace regions may be identified based on historical quasi-static occupancy identifications.

The method may comprise sensing signals from portable devices of an occupant of the area and using the sensed signals to assist in the location of workspace regions.

The invention may be implemented at least in part in software.

BRIEF DESCRIPTION OF THE DRAWINGS

Examples of the invention will now be described in detail with reference to the accompanying drawings, in which:

Figure 1 shows a sequence of images to explain how the images are processed;

Figure 2 shows how an occupancy request is processed;

Figure 3 shows a system for identifying workspaces in an area and for determining the occupancy of the workspaces;

Figure 4 shows a method for identifying workspaces in an area and for determining the occupancy of the workspaces; and

Figure 5 shows a computer for implementing the methods.

DETAILED DESCRIPTION OF THE EMBODIMENTS

The invention provides a system and method for identifying workspaces in an area and for determining the occupancy of the workspaces. A vision sensor arrangement is used for capturing an image or images of the area over time. For the images, occupancy information over time in the area is analyzed so that workspace regions can be automatically recognized based on the historical occupancy information. The current occupancy of the workspace regions can be determined based on current or recent occupancy information.

Figure 1 shows three images of a workspace, as well as a simplified block representation of those images.

The image is of a workspace with multiple workspace locations, i.e. seats at a desk.

Figure 1 shows three sequential images. In Figure 1A there is a moving person 10 and a seated person 12. The seated person can be defined as quasi-static. This means they are moving locally, for example their head and arms, but they are globally static, i.e. they physically occupy a fixed region of space which is not significantly larger than their own volume.

The right image shows a block pixel-by-block pixel score matrix that a vision sensor generates and sends to a central processing unit. The data is sent with a time stamp and with the identity of the particular vision sensor which captured the image. The image formed is termed below a "block pixel image".

The squares 14a and 14b represent block pixel regions of the block pixel image where movement is detected, and the amount of movement may also be encoded into the block pixel. The block pixels comprise image regions, each corresponding to an area region, i.e. a region of the area being monitored. The vision sensor allocates these regions to the images and determines the occupancy over time in each region, based on movement in the area being monitored. For example, locations can be identified with certain temporal characteristics, e.g. a small variance over time is indicative of workplaces, in comparison to a large variance over time that is indicative of movements in hallways.

In this way, changes in the image over time are used to identify regions where there are occupants. The degree of movement of those occupants over time then indicates if they are dynamic or quasi-static.

The block pixel image may for instance be a low resolution score matrix with each element encoding a value which is indicative of a size of observed movement. This movement information essentially provides occupancy information. For example, thresholds can be applied both in terms of the amount of movement within a region and the amount of time during which there is movement within the region.
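The scoring described above can be illustrated with a minimal sketch. This is not taken from the application; the block size, the per-pixel change threshold, and the use of a simple frame difference are all assumptions made for illustration:

```python
def block_score_matrix(prev_frame, curr_frame, block=8, threshold=10):
    """Compute a low resolution block pixel score matrix from two frames.

    Each element counts how many pixels within a block changed intensity by
    more than `threshold`, giving a coarse per-region movement score without
    transmitting any image content (so no personally sensitive data leaves
    the sensor module).
    """
    rows = len(prev_frame) // block
    cols = len(prev_frame[0]) // block
    scores = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            count = 0
            for i in range(r * block, (r + 1) * block):
                for j in range(c * block, (c + 1) * block):
                    if abs(curr_frame[i][j] - prev_frame[i][j]) > threshold:
                        count += 1
            scores[r][c] = count
    return scores

# Synthetic 16x16 frames: movement confined to the top-left block.
a = [[0] * 16 for _ in range(16)]
b = [row[:] for row in a]
for i in range(8):
    for j in range(8):
        b[i][j] = 50  # large intensity change in one region only
scores = block_score_matrix(a, b)
```

A real sensor module would additionally apply the temporal thresholds mentioned above before reporting a region as occupied.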

The block pixel image is thus one way to encode discrete locations of the occupants at a particular time. Another way is to transmit location coordinates for a detected occupant.

A metric for the size of observed movement may be the variance between the location of an occupant over time. A large variance may relate to a person walking through a region or across multiple regions, whereas a small variance over a long time may relate to a person sitting in a region but moving by a small amount. Thus, quasi-static presence may be identified. This may be done by marking those blocks whose variances fall within a prescribed range.

The variance is for example calculated over the last K reported locations. Let the x-axis location of the occupant at time t = k seconds be given by x(k); then the variance of the x-location over the last K seconds is:

Var(x) = (1/(K-1)) · Σ_k ||x(k) − mean(x)||²,  where  mean(x) = (1/K) · Σ_k x(k).

Similar equations can be applied to the y-axis.

Then, the total variance of the location of the occupant can be given as Var(x) + Var(y).
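The variance metric above can be sketched directly from the formula. The sample locations below are hypothetical, chosen only to contrast a seated (quasi-static) occupant with a walking one:

```python
def location_variance(xs, ys):
    """Total variance Var(x) + Var(y) over the last K reported locations.

    Uses the sample variance with divisor K-1, matching the formula in the
    text. A small value over a long window suggests quasi-static presence
    (a seated occupant); a large value suggests movement through the area.
    """
    k = len(xs)
    mean_x = sum(xs) / k
    mean_y = sum(ys) / k
    var_x = sum((x - mean_x) ** 2 for x in xs) / (k - 1)
    var_y = sum((y - mean_y) ** 2 for y in ys) / (k - 1)
    return var_x + var_y

# A seated occupant: small jitter around a fixed spot.
seated = location_variance([5.0, 5.1, 4.9, 5.0], [3.0, 3.0, 3.1, 2.9])
# A walking occupant: locations spread across the area.
walking = location_variance([0.0, 2.0, 4.0, 6.0], [0.0, 2.0, 4.0, 6.0])
```

Quasi-static presence would then be flagged where the result falls within a prescribed range, as described above.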

This is an example choice of a statistical metric for estimating the size of the movement (i.e. velocity) of an occupant who is currently in a particular region.

The variance may be calculated at different locations. If the sensor reports locations at a high reporting frequency (e.g. 1 s), the variance may be computed at the backend. If the sensor reports at a lower frequency (e.g. 1 min), the sensor may then report location centroids as well as the variance.

There are various ways to record the amount of movement per image region. The use of a vision sensor provides much richer data than a PIR sensor but privacy is maintained by forming a block pixel image which only encodes levels of movement within regions.

Detecting the presence of an occupant may be carried out in various ways. Essentially, changes in images over time represent movement so that regions where there is movement can be identified and determined as containing an occupant. The movement of an occupant can then be tracked based on how the location of detected movement evolves over time.

Detecting movement may thus be based on changes in raw images over time, with analysis only on a region-by-region basis. More sophisticated image processing techniques may however be used (for example edge detection, head detection etc.) which may then be converted into the more coarse regional occupancy information to be used in the system of the invention. Thus, occupants may be identified based on image processing techniques, and again they may then be tracked. The velocity as well as position of a detected occupant can be tracked. By tracking a person moving between locations (block-pixels) over time, the type of movement can be classified. The person being tracked is not however identified.

Person tracking can be used to obtain velocity information. If velocity is not derived in this way, the system may also estimate the type of movement from the reported locations (block-pixels) and how these change over time. This could be based on associating locations to a particular person to maintain a continuous flow.
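One possible reading of this association step is a greedy nearest-neighbour match between existing tracks and new detections. This is a sketch under stated assumptions: the greedy strategy, the distance limit, and the data structures are illustrative choices, not specified in the application:

```python
def associate(tracks, detections, max_dist=2.0):
    """Greedily match new block-pixel detections to existing tracks by
    nearest distance, so each occupant keeps a continuous (anonymous) id.

    tracks: dict of track id -> (row, col); detections: list of (row, col).
    Unmatched detections start new tracks. Ids are arbitrary indices, so no
    personally sensitive information is involved.
    """
    updated = {}
    unmatched = list(detections)
    for tid, (r, c) in tracks.items():
        if not unmatched:
            break
        # Closest remaining detection to this track (squared distance).
        best = min(unmatched, key=lambda d: (d[0] - r) ** 2 + (d[1] - c) ** 2)
        if ((best[0] - r) ** 2 + (best[1] - c) ** 2) ** 0.5 <= max_dist:
            updated[tid] = best
            unmatched.remove(best)
    next_id = max(tracks, default=-1) + 1
    for d in unmatched:
        updated[next_id] = d
        next_id += 1
    return updated

tracks = {0: (2, 2)}
tracks = associate(tracks, [(2, 3), (8, 8)])
```

Velocity could then be estimated from how each track's location changes between frames.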

Note that the image regions may be defined in a static predefined way - so that the image is divided into a fixed set of regions. The regions may instead be somewhat dynamic, for example becoming defined by the detected occupancy over time. In this way, a region may be made to match more accurately the position of a workspace.

There are many known image processing techniques for identifying the location of people in a space, and any of these may be used. In all cases, the data is compressed into the more simple occupancy location information as explained above.

Figure 1B shows a later image. The person 10 has moved so that the movement in the region 14a has dropped to zero and there is movement in region 14c.

In Figure 1C, there is a new person 16 at a different workspace and the standing person has left the room.

The block pixel image shows the newly used workspace as 14d. The block pixel images are analyzed, and block pixels which represent quasi-static presence are identified. This may be based on a number of observations of quasi-static presence within a pre-defined time window.

Based on large datasets collected over time, block pixel locations where a suitable metric exceeds a specified threshold are identified as potential workspaces.

In Figure 1, the region 14b is identified as a workspace, as shown by the dotted surround 18. Over time, the region 14d will also become recognized as a workspace, if it is used over time.

Thus, over time, the block pixel data can be used to identify workspace locations.
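A minimal sketch of this historical thresholding follows. The observation count threshold and the log structure are assumptions made for illustration; the application does not specify a particular metric or threshold:

```python
def identify_workspaces(quasi_static_log, min_observations=10):
    """Mark block-pixel locations as workspaces once the number of
    quasi-static presence observations exceeds a threshold.

    quasi_static_log: list of observations, each a set of (row, col)
    blocks flagged quasi-static in one block pixel image.
    """
    counts = {}
    for observation in quasi_static_log:
        for block in observation:
            counts[block] = counts.get(block, 0) + 1
    return {b for b, n in counts.items() if n >= min_observations}

# A block seen quasi-static in 12 of 15 images becomes a workspace;
# blocks never flagged (e.g. hallway blocks) do not.
log = [{(3, 4)} for _ in range(12)] + [set() for _ in range(3)]
workspaces = identify_workspaces(log, min_observations=10)
```

In practice the log would span days or weeks, so that rarely used locations are not misclassified as workspaces.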

The vision sensors in this example only transmit location information of observed presence in the form of a block pixel-by-block pixel score matrix, and in particular the images indicate pixels with quasi-static presence. In this way, no personal information is collected.

By analyzing the movement within the area regions (i.e. the block pixels) over time and the movement between block pixels over time (i.e. one block pixel stops showing movement and an adjacent one starts showing movement), workspace regions can be identified, the movement of people can be tracked, and the occupancy of those identified workspace regions can be monitored. Thus, a single metric conveyed by each of the block pixels enables all of this information to be derived. This metric is the amount of movement over a preceding time period within the corresponding area region (where any movement is indicative of occupancy).

There are for example multiple vision sensors with possibly overlapping sensing areas. The sensing region of a vision sensor may either cover multiple workspaces, a workspace in part, or no workspace at all. The vision sensors can only send a limited amount of information to the central processing unit on a rate-limited communication channel. The information elements sent by individual vision sensors conform to privacy and low communication rate constraints.

In real-time, a request for identifying vacant workspaces is sent to the system. The system then checks the vacancy of previously identified workspaces and sends such information to the querying entity.
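The request handling might be sketched as follows. Proximity-based choice is one of the allocation options mentioned earlier in the text; the data structures and function signature are illustrative assumptions:

```python
def allocate_workspace(vacant, allocated, requester_pos):
    """Allocate the vacant workspace nearest the requester, and record the
    allocation so the same workspace is not handed out again immediately.

    vacant: list of (row, col) workspace blocks currently unoccupied;
    allocated: set of blocks already promised to earlier requests.
    Returns the chosen block, or None if nothing is available.
    """
    candidates = [w for w in vacant if w not in allocated]
    if not candidates:
        return None
    choice = min(
        candidates,
        key=lambda w: (w[0] - requester_pos[0]) ** 2
        + (w[1] - requester_pos[1]) ** 2,
    )
    allocated.add(choice)
    return choice

allocated = set()
first = allocate_workspace([(1, 1), (5, 5)], allocated, (0, 0))   # nearest
second = allocate_workspace([(1, 1), (5, 5)], allocated, (0, 0))  # remaining
```

An allocation based on existing crowding, the other option mentioned, would simply use a different ranking key.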

This process is shown in Figure 2.

The image 20 is a current or recent block pixel image of movements identified as quasi-static. The image 22 shows where workspaces have been identified over a much longer time in the manner explained above. Four such workspaces are identified. As represented by the image 24, there are two unoccupied workspaces that are identified.

The current or recent change level information shown in image 20 may for example relate to movement over a previous number of seconds, minutes or hours, whereas the historical information used to form the workspace information represented by image 22 for example relates to previous days, weeks or even months.

The block pixel image means the amount of data that needs to be transmitted to and processed by the central processor is reduced. Each region for example corresponds in real space to a dimension of the same order of magnitude as the space occupied by a person (e.g. when viewed from above, the area of a person's head). Thus, the different regions are regions where a person may be seated. A person may occupy several such regions (i.e. they are smaller than the size of a person), or they may occupy a space smaller than a region. The amount of data is still much less than for a high resolution pixelated image.

Figure 3 shows the system for identifying workspaces in an area and for determining the occupancy of the workspaces. A vision sensor arrangement is shown comprising two vision sensors, i.e. cameras 30, 32 for capturing an image or images of the area 34 over time.

Each vision sensor has a local image processor 30a, 32a adapted to allocate regions to the images as described above to form the block pixel image, and to determine a level of change of the images over time in each region. Output data 30b, 32b is provided in the form of change level information. This output information has no personally sensitive content.

The vision sensor module may be designed not to have a raw image output. In this way, the vision sensor module is designed so that no personally sensitive information can be output.

A central processor 36, which may be considered to be an occupancy processor, analyzes the occupancy information over time.

It identifies the workspace regions, based on the historical occupancy information and identifies the current occupancy of the workspace regions based on current or recent occupancy information. This is explained above.

The system does not need to be programmed with the workspace locations, but it can instead learn where they are. The amount of data that needs to be transmitted to and processed by the occupancy processor 36 is low. The occupancy processor 36 can be distributed for example such that the identifying of workspaces can be done on a different device than the identifying of current occupancy. For example, a central device identifies where the workspaces are and a more local processor receives information on these regions and then determines if they are free.

There may be more than two vision sensors. They do not need to be directed accurately as a result of the learning capability. They may have overlapping or non-overlapping fields of view. Since the system does not need to be programmed with the workspace locations, it may be that a field of view of one vision sensor covers no workspace regions, or one, or many.

As a further option, the system may comprise sensors for sensing signals from portable devices of an occupant of the area to assist in the location of workspace regions by correlating the sensed data with the vision sensor data. Received signal strength indication (RSSI) measurements at multiple receivers, located for example at luminaires, may be used to position a user's mobile device. Static or quasi-static positions are then filtered and spatio-temporally correlated with vision sensor pixel positions to identify potential workspaces. The sensor data may be based on RF signals or other electromagnetic signals, or indeed other types of signal such as acoustic signals.
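The filtering of quasi-static device positions and their mapping onto the vision-sensor grid could be sketched as below. The window length, drift threshold, and grid parameters are illustrative assumptions; how positions are estimated from RSSI is left outside the sketch:

```python
import numpy as np

def quasi_static_positions(positions, window=5, max_drift=0.5):
    """Return averaged positions where the device barely moves over
    `window` consecutive samples -- candidate locations at which a
    user is seated rather than walking."""
    positions = np.asarray(positions, dtype=float)
    static = []
    for i in range(len(positions) - window + 1):
        seg = positions[i:i + window]
        # Bounding-box extent of the segment as a simple drift measure.
        if np.linalg.norm(seg.max(axis=0) - seg.min(axis=0)) <= max_drift:
            static.append(seg.mean(axis=0))
    return static

def position_to_block(pos, origin=(0.0, 0.0), block_size=1.0):
    """Map a room coordinate to a block-pixel index, so the filtered
    positions can be correlated with the vision-sensor grid."""
    return (int((pos[0] - origin[0]) // block_size),
            int((pos[1] - origin[1]) // block_size))
```

Blocks that are both quasi-static in the vision data and repeatedly matched by such filtered positions would be strong workspace candidates.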

As hinted above, the vision sensors may be mounted at luminaires of a lighting system, so that a shared infrastructure may be used. The lighting system may be a networked system, and the vision sensors and processors of the occupancy system may communicate using the same network.

Figure 4 shows a method for identifying workspaces in an area and for determining the occupancy of the workspaces.

In step 40, an image or images of the area are captured over time. In step 42 regions are allocated to the images so that a block pixel layout is defined.

In step 44 occupancy information over time is determined within each region.

This occupancy information may take various forms, but it aims to identify regions where there is a person who is locally moving but globally relatively static, termed quasi-static above. For example, this may arise when there is almost continuous detection of occupancy within a region over a time period, while the surrounding regions include no occupants or motion. Occupancy information may take the form of information relating to the change of captured images over time, a change in the captured image corresponding to movement at that location. The amount of movement may be required to fall within thresholds to qualify as this quasi-static condition: for a system that does not track individual users, a change in image content below a lower threshold may simply be caused by changing ambient light conditions rather than by an occupant.
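The threshold-based quasi-static test described above could be sketched as follows. The concrete threshold values and the presence fraction are assumptions chosen for illustration; a fuller version might additionally require surrounding blocks to be quiet, as the text describes:

```python
import numpy as np

def quasi_static_map(change_history, low=2.0, high=30.0, presence=0.8):
    """change_history: array of shape (T, H, W) holding block change
    levels over T frames.  A block is marked quasi-static when its
    change level lies within (low, high) for at least `presence` of
    the frames -- enough change to rule out mere ambient-light
    variation, little enough to rule out walking traffic."""
    hist = np.asarray(change_history, dtype=float)
    within = (hist > low) & (hist < high)
    return within.mean(axis=0) >= presence
```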

In step 46 output data is provided in the form of a block pixel image which encodes change level information. This output data includes a timestamp and an identification of the vision sensor from which the image originates.
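One possible shape for this output data is sketched below. The JSON encoding, field names, and rounding are illustrative assumptions only; the disclosure specifies just that a timestamp and sensor identification accompany the change-level data:

```python
import json
import time

def make_output_packet(sensor_id, change_levels):
    """Package the block change-level map with a timestamp and the
    originating sensor's identifier.  No raw pixel data is included,
    so no personally sensitive content leaves the sensor module."""
    return json.dumps({
        "sensor": sensor_id,
        "timestamp": time.time(),
        "change_levels": [[round(float(v), 2) for v in row]
                          for row in change_levels],
    })
```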

These steps all take place locally at the vision sensors.

In step 48, the occupancy information is analyzed over time in the central processor.

In step 50 workspace regions are identified, based on the historical occupancy information.
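The learning step could, for instance, aggregate quasi-static observations over many days and declare a block a workspace when it is regularly occupied in that manner. This is a sketch under assumed parameters (daily boolean maps, a 50% day fraction), not the specific algorithm of the disclosure:

```python
import numpy as np

def learn_workspaces(daily_quasi_static, min_days_fraction=0.5):
    """daily_quasi_static: array of shape (D, H, W) of boolean maps,
    one per observed day.  A block becomes a learned workspace when
    it was quasi-statically occupied on at least `min_days_fraction`
    of the days -- so the system need not be programmed with
    workspace locations in advance."""
    maps = np.asarray(daily_quasi_static, dtype=float)
    return maps.mean(axis=0) >= min_days_fraction
```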

In step 52 current occupancy of the workspace regions is determined in response to an enquiry from a user of the system. The current occupancy is based on current or recent occupancy information.

The enquiry for example is based on a worker requesting a vacant workspace when entering an office. The request may be made using a mobile phone or by interacting with a terminal at the entrance to the building, or at the elevator on each floor of a building.

The system then has a request processor (which may in fact simply be a function of the central processor) which receives the request.

In step 54, a workspace which the user may occupy is identified and allocated to the user. This identification is carried out by the request processor.

The allocated workspace may be the nearest workspace to the user when they made the enquiry (for example based on the location of their mobile phone, or the terminal they used to log the enquiry). It may be set to be on the same floor as the enquiry location. Alternatively, a workspace in a currently least crowded area may be selected.
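The nearest-vacant and least-crowded selection strategies described above could be sketched as follows. The desk coordinates, the 5 m crowding radius, and the data layout are illustrative assumptions:

```python
import math

def allocate_workspace(workspaces, occupied, user_pos, strategy="nearest"):
    """workspaces: {workspace_id: (x, y)}; occupied: set of occupied ids;
    user_pos: (x, y) of the enquiry location (e.g. the user's phone or
    the entrance terminal).  Returns an allocated id, or None."""
    vacant = {wid: pos for wid, pos in workspaces.items()
              if wid not in occupied}
    if not vacant:
        return None
    if strategy == "nearest":
        # Closest vacant workspace to where the enquiry was made.
        return min(vacant, key=lambda wid: math.dist(vacant[wid], user_pos))
    # "least_crowded": fewest occupied desks within an assumed 5 m radius.
    def crowding(wid):
        return sum(1 for oid in occupied
                   if math.dist(workspaces[oid], vacant[wid]) < 5.0)
    return min(vacant, key=crowding)
```

Further preference parameters (same floor, window proximity, temperature) could be folded into the ranking key in the same manner.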

The user may have the option to choose his or her preferences for selection between any available workspaces, based on these parameters. Other parameters for example may include the proximity to a window, or heating or cooling systems, or even a preferred temperature.

The system is thus of interest for offices that use hot-desking, where users are not allocated fixed workspaces, but occupy them on a supply and demand basis.

The algorithms (both local and central) are implemented in software. For this purpose, a computer may be used to implement each processor. Figure 5 illustrates an example of a computer 60 for implementing the processors described above.

The computer 60 may include one or more processors 61, memory 62, and one or more I/O devices 63 that are communicatively coupled via a local interface (not shown). The local interface can be, for example but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface may have additional elements, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.

The processor 61 is a hardware device for executing software that can be stored in the memory 62. The processor 61 can be virtually any custom made or commercially available processor, a central processing unit (CPU), a digital signal processor (DSP), or an auxiliary processor among several processors associated with the computer 60, and the processor 61 may be a semiconductor based microprocessor (in the form of a microchip) or a microprocessor.

The memory 62 can include any one or combination of volatile memory elements (e.g., random access memory (RAM), such as dynamic random access memory (DRAM), static random access memory (SRAM), etc.) and non-volatile memory elements (e.g., ROM, erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), tape, compact disc read only memory (CD-ROM), disk, diskette, cartridge, cassette or the like, etc.). Moreover, the memory 62 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 62 can have a distributed architecture, where various components are situated remote from one another, but can be accessed by the processor 61.

The software in the memory 62 may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions. The software in the memory 62 includes a suitable operating system (O/S) 64, compiler 65, source code 66, and one or more applications 67 in accordance with exemplary embodiments.

The application 67 comprises numerous functional components such as computational units, logic, functional units, processes, operations, virtual entities, and/or modules. The operating system 64 controls the execution of computer programs, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services.

Application 67 may be a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. In the case of a source program, the program is usually translated via a compiler (such as the compiler 65), assembler, interpreter, or the like, which may or may not be included within the memory 62, so as to operate properly in connection with the operating system 64. Furthermore, the application 67 can be written in an object oriented programming language, which has classes of data and methods, or a procedural programming language, which has routines, subroutines, and/or functions, for example but not limited to, C, C++, C#, Pascal, BASIC, API calls, HTML, XHTML, XML, ASP scripts, JavaScript, FORTRAN, COBOL, Perl, Java, ADA, .NET, and the like.

The I/O devices 63 may include input devices such as, for example but not limited to, a mouse, keyboard, scanner, microphone, camera, etc. Furthermore, the I/O devices 63 may also include output devices, for example but not limited to a printer, display, etc. Finally, the I/O devices 63 may further include devices that communicate both inputs and outputs, for instance but not limited to, a NIC or modulator/demodulator (for accessing remote devices, other files, devices, systems, or a network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, etc. The I/O devices 63 also include components for communicating over various networks, such as the Internet or an intranet.

When the computer 60 is in operation, the processor 61 is configured to execute software stored within the memory 62, to communicate data to and from the memory 62, and to generally control operations of the computer 60 pursuant to the software. The application 67 and the operating system 64 are read, in whole or in part, by the processor 61, perhaps buffered within the processor 61, and then executed.

When the application 67 is implemented in software it should be noted that the application 67 can be stored on virtually any computer readable medium for use by or in connection with any computer related system or method. In the context of this document, a computer readable medium may be an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program for use by or in connection with a computer related system or method.

The invention relates to privacy of the sensed image. One aspect is the division of processing tasks as explained above; another is the use of the block pixel image. A further aspect relates to the image processing approach. This aspect provides a system for identifying workspaces in an area, for determining the occupancy of the workspaces and for allocating workspaces, comprising:

a vision sensor arrangement (30,32) for capturing an image or images of the area over time, wherein the image or images each comprise a block pixel image of the area, each block pixel representing an area region;

a processing arrangement (36, 30a,32a) adapted to:

determine occupancy information in each area region; and

analyze the occupancy information over time, to:

identify workspace regions, based on the historical occupancy information which includes historical occupancy movement information within each area region; and

identify current occupancy of the workspace regions based on current or recent occupancy information.

Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.