Title:
SENSOR CONFIGURATION BASED ON OTHER SENSOR CONTEXT DETERMINATION
Document Type and Number:
WIPO Patent Application WO/2020/169739
Kind Code:
A1
Abstract:
An apparatus comprising means for: obtaining sensor data from at least one first, wearable, sensor, the sensor data comprising at least one sensor signal; comparing the sensor data to a context parameter; and causing at least one second sensor to be dynamically configured based on the comparison.

Inventors:
MATHUR AKHIL (GB)
MONTANARI ALESSANDRO (GB)
LEE SEUNGCHUL (GB)
MIN CHULHONG (GB)
Application Number:
PCT/EP2020/054494
Publication Date:
August 27, 2020
Filing Date:
February 20, 2020
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
A61B5/00; H04N5/232; G08B13/196; H04N7/18; H04N21/422
Domestic Patent References:
WO2016012865A2, 2016-01-28
WO2016081778A1, 2016-05-26
Foreign References:
US20160241768A1, 2016-08-18
US20160150196A1, 2016-05-26
US20180235538A1, 2018-08-23
US20160306420A1, 2016-10-20
US20070273504A1, 2007-11-29
EP3419283A1, 2018-12-26
US20100217098A1, 2010-08-26
US20180070155A1, 2018-03-08
Attorney, Agent or Firm:
NOKIA EPO REPRESENTATIVES (FI)
Claims:
CLAIMS:

1. An apparatus comprising means for:

obtaining first sensor data from at least one first, wearable, sensor, the first sensor data comprising at least one sensor signal;

comparing the first sensor data to a context parameter; and

causing at least one second sensor to be dynamically configured based on the comparison,

wherein:

the at least one second sensor has a higher cost than the at least one first sensor;

the dynamically configured at least one second sensor is operable to provide second sensor data, and wherein the second sensor data is more accurate than the first sensor data.

2. The apparatus of claim 1, wherein the at least one second sensor is a camera and the first and second sensor data indicate the anxiety of a user.

3. The apparatus of claim 1 or claim 2, wherein the at least one second sensor is a camera, the first and second sensor data indicate a facial expression of a user, the first sensor data being a coarser-grain indication of the expression than the second sensor data.

4. The apparatus as claimed in claim 1, wherein the first sensor data comprises at least one of:

at least one audio signal from at least one microphone;

at least one image from at least one camera;

at least one of displacement, velocity, acceleration from an inertial measurement sensor; and

at least one biometric value from a biometric sensor.

5. The apparatus as claimed in any of claims 1 to 4, wherein the means for comparing the first sensor data to a context parameter comprises at least one of:

sensing modalities of the first sensor data;

feature extraction of the first sensor data; and

applying a classifier function to the first sensor data.

6. The apparatus as claimed in any of claims 1 to 5, the means for being further for obtaining the at least one context parameter, the context parameter defined by one or more values of the at least one sensor signal.

7. The apparatus as claimed in claim 6, wherein the means for causing the at least one second sensor to be dynamically configured based on the comparison is for generating a signal for causing the at least one second sensor to be dynamically configured based on the comparison of the one or more values of the at least one sensor signal to the at least one sensor signal from the at least one first sensor.

8. The apparatus as claimed in any of claims 1 to 7, wherein the means for comprises means for controlling the at least one first sensor to provide the at least one signal based on the at least one context parameter.

9. The apparatus as claimed in any of claims 1 to 6, wherein the apparatus comprises one of:

the at least one wearable sensor;

the at least one second sensor;

a personal sensor;

a controller for the at least one second sensor; and

an earbud sensor.

10. The apparatus as claimed in any of claims 1 to 9, wherein the at least one second sensor is a camera.

11. The apparatus as claimed in claim 10, wherein causing at least one second sensor to be dynamically configured based on the comparison comprises at least one of:

causing an activation/deactivation of the camera;

causing a change in an angle of direction of the camera;

causing a change in a field of view/zoom of the camera;

causing a change in a mode of operation of the camera;

causing a change in a shutter speed of the camera; and

causing a change in a frame rate of the camera.

12. A method comprising:

obtaining first sensor data from at least one first, wearable, sensor, the first sensor data comprising at least one sensor signal;

comparing the first sensor data to a context parameter; and

causing at least one second sensor to be dynamically configured based on the comparison,

wherein:

the at least one second sensor has a higher cost than the at least one first sensor;

the dynamically configured at least one second sensor is operable to provide second sensor data, and wherein the second sensor data is more accurate than the first sensor data.

13. The method of claim 12, wherein the at least one second sensor is a camera and the first and second sensor data indicate the anxiety of a user.

14. The method of claim 12 or claim 13, wherein the at least one second sensor is a camera, the first and second sensor data indicate a facial expression of a user, the first sensor data being a coarser-grain indication of the expression than the second sensor data.

15. The method of any of claims 12 to 14, wherein the first sensor data comprises at least one of:

at least one audio signal from at least one microphone;

at least one image from at least one camera;

at least one of displacement, velocity, acceleration from an inertial measurement sensor; and

at least one biometric value from a biometric sensor.

16. The method as claimed in any of claims 12 to 15, wherein comparing the first sensor data to a context parameter comprises at least one of:

sensing modalities of the first sensor data;

feature extraction of the first sensor data; and

applying a classifier function to the first sensor data.

17. The method as claimed in any of claims 12 to 16, further comprising obtaining the at least one context parameter, the context parameter defined by one or more values of the at least one sensor signal.

18. The method as claimed in claim 17, wherein causing the at least one second sensor to be dynamically configured based on the comparison further comprises generating a signal for causing the at least one second sensor to be dynamically configured based on the comparison of the one or more values of the at least one sensor signal to the at least one sensor signal from the at least one first sensor.

19. A computer program comprising instructions for causing an apparatus to perform at least the following:

obtaining first sensor data from at least one first, wearable, sensor, the first sensor data comprising at least one sensor signal;

comparing the first sensor data to a context parameter; and

causing at least one second sensor to be dynamically configured based on the comparison,

wherein:

the at least one second sensor has a higher cost than the at least one first sensor;

the dynamically configured at least one second sensor is operable to provide second sensor data, and wherein the second sensor data is more accurate than the first sensor data.

20. The computer program of claim 19, wherein the at least one second sensor is a camera and the first and second sensor data indicate the anxiety of a user.

21. The computer program of claim 19 or claim 20, wherein the at least one second sensor is a camera, the first and second sensor data indicate a facial expression of a user, the first sensor data being a coarser-grain indication of the expression than the second sensor data.

Description:
SENSOR CONFIGURATION BASED ON OTHER SENSOR CONTEXT DETERMINATION

Field

The present application relates to apparatus and methods for sensor configuration based on other sensor context determination, for example, but not exclusively, camera sensor configuration based on wearable sensor context determination.

Background

Image and video processing technology is a key driver for a number of high-value applications in areas such as industrial automation, self-driving automobiles and public safety systems. To enable these applications, it is common to deploy one or multiple ‘high-cost’ sensors. By ‘high-cost’ the disclosure means sensors which require significant power and/or communication bandwidth and/or processing cost to operate. An example of such a ‘high-cost’ sensor is an image camera which may be configured to continuously record and analyse video streams of a scene to extract meaningful information.

For instance, a camera mounted inside a self-driving car can be used to analyse the facial expressions of its passengers to determine whether the passengers are experiencing anxiety or stress. This information can then be used to control the car’s self-driving (AI) system in order to attempt to reduce the stress or anxiety of the passenger, for example by controlling the car speed to slow the car. In a smart factory, image cameras can monitor the factory floor to look for anomalous events and provide feedback to a human operator or safety system.

‘High-cost’ sensors may furthermore include sensors which have a limited designed usage lifespan or consume resources which may or may not be able to be replaced when in use. For example a ‘high-cost’ sensor may be a chemical sensor which uses a chemical reagent to enable detection.

Summary

There is provided according to a first aspect an apparatus comprising means for:

obtaining sensor data from at least one first, wearable, sensor, the sensor data comprising at least one sensor signal; comparing the sensor data to a context parameter; and causing at least one second sensor to be dynamically configured based on the comparison.

The sensor data may comprise at least one of: at least one audio signal from at least one microphone; at least one image from at least one camera; at least one of displacement, velocity, acceleration from an inertial measurement sensor; and at least one biometric value from a biometric sensor.

The means for comparing the sensor data to a context parameter may be further for at least one of: sensing modalities of the sensor data; feature extraction of the sensor data; and applying a classifier function to the sensor data.

The means for may be further for obtaining the at least one context parameter, the context parameter may be defined by one or more values of the at least one sensor signal.

The means for causing the at least one second sensor to be dynamically configured based on the comparison may be for generating a signal for causing the at least one second sensor to be dynamically configured based on the comparison of the one or more values of the at least one sensor signal to the at least one sensor signal from the at least one first sensor.

The means for may be for controlling the at least one first sensor to provide the at least one signal based on the at least one context parameter.

The apparatus may comprise one of: the at least one wearable sensor; the at least one second sensor; a personal sensor; a controller for the at least one second sensor; and an earbud sensor.

The at least one second sensor may be a camera.

The means for causing the at least one second sensor to be dynamically configured based on the comparison may be for at least one of: causing an activation/deactivation of the camera; causing a change in an angle of direction of the camera; causing a change in a field of view/zoom of the camera; causing a change in a mode of operation of the camera; causing a change in a shutter speed of the camera; and causing a change in a frame rate of the camera.

According to a second aspect there is provided an apparatus comprising means for: obtaining sensor data from at least one first sensor, the sensor data comprising at least one sensor signal; comparing the sensor data to a context parameter; and causing at least one second sensor to be dynamically configured based on the comparison.

According to a third aspect there is provided a method comprising: obtaining sensor data from at least one first, wearable, sensor, the sensor data comprising at least one sensor signal; comparing the sensor data to a context parameter; and causing at least one second sensor to be dynamically configured based on the comparison.

The sensor data may comprise at least one of: at least one audio signal from at least one microphone; at least one image from at least one camera; at least one of displacement, velocity, acceleration from an inertial measurement sensor; and at least one biometric value from a biometric sensor.

Comparing the sensor data to a context parameter may comprise at least one of: sensing modalities of the sensor data; feature extraction of the sensor data; and applying a classifier function to the sensor data.

The method may further comprise obtaining the at least one context parameter, the context parameter may be defined by one or more values of the at least one sensor signal.

Causing the at least one second sensor to be dynamically configured based on the comparison may further comprise generating a signal for causing the at least one second sensor to be dynamically configured based on the comparison of the one or more values of the at least one sensor signal to the at least one sensor signal from the at least one first sensor.

The method may comprise controlling the at least one first sensor to provide the at least one signal based on the at least one context parameter.

The at least one second sensor may be a camera.

Causing the at least one second sensor to be dynamically configured based on the comparison may comprise at least one of: causing an activation/deactivation of the camera; causing a change in an angle of direction of the camera; causing a change in a field of view/zoom of the camera; causing a change in a mode of operation of the camera; causing a change in a shutter speed of the camera; and causing a change in a frame rate of the camera.

According to a fourth aspect there is provided a method comprising: obtaining sensor data from at least one first sensor, the sensor data comprising at least one sensor signal; comparing the sensor data to a context parameter; and causing at least one second sensor to be dynamically configured based on the comparison.

According to a fifth aspect there is provided an apparatus comprising at least one processor and at least one memory including a computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: obtain sensor data from at least one first, wearable, sensor, the sensor data comprising at least one sensor signal; compare the sensor data to a context parameter; and cause at least one second sensor to be dynamically configured based on the comparison.

The sensor data may comprise at least one of: at least one audio signal from at least one microphone; at least one image from at least one camera; at least one of displacement, velocity, acceleration from an inertial measurement sensor; and at least one biometric value from a biometric sensor.

Comparing the sensor data to a context parameter may further cause the apparatus to perform at least one of: sense modalities of the sensor data; feature extract of the sensor data; and apply a classifier function to the sensor data.

The apparatus may be further caused to obtain the at least one context parameter, the context parameter may be defined by one or more values of the at least one sensor signal.

Causing the at least one second sensor to be dynamically configured based on the comparison may further cause the apparatus to generate a signal for causing the at least one second sensor to be dynamically configured based on the comparison of the one or more values of the at least one sensor signal to the at least one sensor signal from the at least one first sensor.

The apparatus may further be caused to control the at least one first sensor to provide the at least one signal based on the at least one context parameter.

The apparatus may comprise one of: the at least one wearable sensor; the at least one second sensor; a personal sensor; a controller for the at least one second sensor; and an earbud sensor.

The at least one second sensor may be a camera.

Causing the at least one second sensor to be dynamically configured based on the comparison may cause the apparatus to perform at least one of: cause an activation/deactivation of the camera; cause a change in an angle of direction of the camera; cause a change in a field of view/zoom of the camera; cause a change in a mode of operation of the camera; cause a change in a shutter speed of the camera; and cause a change in a frame rate of the camera.

According to a sixth aspect there is provided an apparatus comprising at least one processor and at least one memory including a computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: obtain sensor data from at least one first sensor, the sensor data comprising at least one sensor signal; compare the sensor data to a context parameter; and cause at least one second sensor to be dynamically configured based on the comparison.

According to a seventh aspect there is provided an apparatus comprising obtaining circuitry configured to obtain sensor data from at least one first, wearable, sensor, the sensor data comprising at least one sensor signal; comparing circuitry configured to compare the sensor data to a context parameter; and controlling circuitry configured to cause at least one second sensor to be dynamically configured based on the comparison.

According to an eighth aspect there is provided a computer program comprising instructions [or a computer readable medium comprising program instructions] for causing an apparatus to perform at least the following: obtaining sensor data from at least one first, wearable, sensor, the sensor data comprising at least one sensor signal; comparing the sensor data to a context parameter; and causing at least one second sensor to be dynamically configured based on the comparison.

According to a ninth aspect there is provided a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the following: obtaining sensor data from at least one first, wearable, sensor, the sensor data comprising at least one sensor signal; comparing the sensor data to a context parameter; and causing at least one second sensor to be dynamically configured based on the comparison.

According to a tenth aspect there is provided an apparatus comprising: means for obtaining sensor data from at least one first, wearable, sensor, the sensor data comprising at least one sensor signal; means for comparing the sensor data to a context parameter; and means for causing at least one second sensor to be dynamically configured based on the comparison.

According to an eleventh aspect there is provided a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: obtaining sensor data from at least one first, wearable, sensor, the sensor data comprising at least one sensor signal; comparing the sensor data to a context parameter; and causing at least one second sensor to be dynamically configured based on the comparison.

An apparatus comprising means for performing the actions of the method as described above.

An apparatus configured to perform the actions of the method as described above.

A computer program comprising program instructions for causing a computer to perform the method as described above.

A computer program product stored on a medium may cause an apparatus to perform the method as described herein.

An electronic device may comprise apparatus as described herein.

A chipset may comprise apparatus as described herein.

Embodiments of the present application aim to address problems associated with the state of the art.

Summary of the Figures

For a better understanding of the present application, reference will now be made by way of example to the accompanying drawings in which:

Figure 1 shows schematically a system of apparatus suitable for implementing some embodiments;

Figure 2 shows a flow diagram of the operation of the system as shown in Figure 1 in some embodiments;

Figure 3 shows schematically a further system of apparatus suitable for implementing some embodiments;

Figure 4 shows schematically a first example system suitable for implementing some embodiments;

Figure 5 shows schematically a second example system of apparatus suitable for implementing some embodiments;

Figure 6 shows schematically a third example system of apparatus suitable for implementing some embodiments; and

Figure 7 shows an example device suitable for implementing the apparatus shown in the figures above.

Embodiments of the Application

The following describes in further detail a suitable system of apparatus and possible mechanisms for the provision of efficient sensor configuration.

The concept may be exemplified by the use of a set (or type) of sensors. This set of sensors may be configurable sensors and may be ‘high-cost’ sensors. The system may furthermore feature a further set (or type) of sensors. This further set of sensors may be sensors which provide data which is used to control the ‘high-cost’ set of sensors. The further set (or type) of sensors may be ‘low-cost’ sensors. A ‘low-cost’ sensor may be one which has a lower resource consumption or requirement than the ‘high-cost’ sensor in terms of at least one resource. As such the terms ‘low-cost’ and ‘high-cost’ are relative terms. For example a ‘low-cost’ sensor may still have an enormous cost so long as it is still lower than that of the ‘high-cost’ sensor, such that its use will still result in an overall lower use of whatever resource is being assessed as the ‘cost’ parameter. Thus in the example of a limited-lifespan sensor, a ‘low-cost’ sensor may be one which has a fixed and non-replaceable power source but has a designed lifespan which is longer than that of a ‘high-cost’ sensor.

In some embodiments there can be no difference in price/processing/power consumption between the sets of sensors.

In some embodiments data from the sensor or sensors may be employed to reconfigure the reconfigurable sensor(s). This may be the case even where the (wearable) sensors from the further set have a greater associated cost factor or where there is no cost factor involved.

In some embodiments the examples allow the system to get the best data out of the reconfigurable sensor.

Examples of ‘low-cost’ sensors or wearable sensors may be ear-worn devices such as earbuds. The sensors may be employed to dynamically ‘sense’ an environment, and in some embodiments the users within the environment and their operating conditions, and to use this information to adapt the operating parameters of the configurable or ‘high-cost’ (for example camera) sensor system.

In the following examples the configurable or ‘high-cost’ sensor is a camera system, for example a camera mounted on at least one adjustable gimbal.

Furthermore in the following examples the ‘low-cost’ sensor is a wearable or personal sensor. The personal sensor may be an example of a sensor configured to monitor a user within their environment. The sensor may be located on or about a user. For example the sensor may be in direct contact with the user, for example a smart earbud located in the ear of the user. Furthermore in some embodiments the sensor may be affixed to clothing of the user, for example a sensor button worn on a shirt or jumper, or may be attached to a lanyard and worn. In some embodiments the sensor is one which is associated with the user and monitoring the user in their environment without necessarily being located on or worn by the user. For example a non-worn personal sensor may be a mini-drone comprising a camera and microphones monitoring the user. The sensor may be configured to act as an external sensor which can anticipate the dynamic context of the environment (e.g., number of people, their current activities) and provide this feedback to the camera system, which leads to a real-time reconfiguration of the camera parameters (e.g., field of view, zoom).

Furthermore in some embodiments the further set of sensors may be an assorted or diverse range of sensors. For example the further set of sensors can comprise a number of wearable (smart earbud) sensors which are augmented with other sensors including inertial measurement unit (IMU) sensors, audio sensors, and RF sensors.

In such a manner the example wearable sensors may be able to capture an individual’s context of interest and furthermore enable a more energy-efficient system to be implemented when compared to continuous camera recording and processing.

Additionally the embodiments as described hereafter can be employed to provide a more effective detection environment which is thus not limited to camera-visible content. In other words users may be monitored out of the camera’s view but in locations which the camera may be configured to view, for example by repositioning the camera or changing a level of zoom.

Figure 1 shows a schematic view of example apparatus implementing some embodiments. Figure 1 specifically shows a ‘high-cost’ sensor in the form of a camera system 13. The camera system 13 has a series of configuration parameters 11 which control the operation of the camera system 13. Example configuration parameters 11 can be an activation/deactivation parameter 12 which causes the camera to be active or inactive, an angle (or direction) parameter 14 which controls the camera system to be orientated with a specific field of view, a mode of operation parameter 16 which controls the mode of operation of the camera (for example to control the output resolution of the images), a shutter speed parameter 18 configured to control the shutter speed of the camera system, a frames per second parameter 20 configured to control the number of image frames to be captured per second, and so on. Each of these configuration parameters can be dynamically configured or controlled based on receipt of trigger control messages 30 or other received trigger control signals.
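
Purely as an illustration of the parameter set described above, the configuration parameters 11 and a trigger control message 30 might be modelled as in the following Python sketch; the names (CameraConfig, apply_trigger) and the default values are hypothetical and form no part of the disclosure.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CameraConfig:
    """Hypothetical container for the configuration parameters 11."""
    active: bool = True        # activation/deactivation parameter 12
    angle_deg: float = 0.0     # angle (direction) parameter 14
    mode: str = "1080p"        # mode of operation parameter 16
    shutter_s: float = 1 / 60  # shutter speed parameter 18
    fps: int = 30              # frames per second parameter 20

def apply_trigger(config: CameraConfig, trigger: dict) -> CameraConfig:
    """Apply a trigger control message 30 as a partial update; fields
    not named in the message are left unchanged."""
    known = {k: v for k, v in trigger.items() if hasattr(config, k)}
    return replace(config, **known)

# Example: a trigger message asking the camera to re-orient and slow down.
cfg = apply_trigger(CameraConfig(), {"angle_deg": 45.0, "fps": 5})
print(cfg)
```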

In the example shown in Figure 1 the trigger control message or signals are generated directly by a suitable ‘low-cost’ wearable sensor. In the example shown in Figure 1 there are shown three earbud (‘low-cost’) sensors 1, 3, 5 configured to monitor a user context or user context parameter 21 and generate suitable trigger control messages based on the user context parameter 21. However any suitable number of wearable sensors may be employed in a system.

The user context parameters can be determined by the sensor based on one or more sensed parameters, such as pulse, blood pressure, eye movement or other body-related metrics, audio signals, or displacement and/or velocity and/or acceleration sensed from the user by the wearable sensor. Examples of user context parameters 21 can be location 22, which determines the location or position of the user within the environment, and movement 24, which characterizes the movement (or activity) of the user. The location 22 and movement 24 user context parameters can be determined based on the sensed displacement and/or velocity and/or acceleration. Other example context parameters 21 may be emotion 26 and mental status 28, which could for example be obtained based on the body-related metrics.

These user context parameters 21 can then be used to generate the trigger control messages or signals 30.
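
The mapping from sensed signals to user context parameters 21 and trigger control messages 30 might, purely as a sketch, look as follows; the field names and threshold values are illustrative assumptions rather than values from the disclosure.

```python
from typing import Optional

def derive_context(pulse_bpm: float, speed_ms: float) -> dict:
    """Map raw wearable readings to user context parameters 21."""
    return {
        "movement": "moving" if speed_ms > 0.5 else "stationary",    # 24
        "mental_status": "stressed" if pulse_bpm > 100 else "calm",  # 28
    }

def make_trigger(context: dict) -> Optional[dict]:
    """Emit a trigger control message 30 only for a context of interest."""
    if context["mental_status"] == "stressed":
        return {"fps": 30, "mode": "high-res"}
    return None

print(make_trigger(derive_context(pulse_bpm=112.0, speed_ms=0.1)))
```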

This is shown for example with respect to Figure 2, where a flow diagram shows the operations of the system shown in Figure 1.

Thus for example the earbud (or suitable wearable/low-cost) sensor is configured to obtain at least one user context parameter as shown in Figure 2 by step 101.

Having determined at least one user context parameter, the system can be configured to obtain or determine any triggers based on the obtained at least one user context parameter from at least one user as shown in Figure 2 by step 103.

Then the camera sensor system, having received the trigger message or signal, is configured to dynamically configure the camera sensor based on the trigger message as shown in Figure 2 by step 105. In this example the obtaining (or generation) of the trigger message occurs within the wearable or personal sensor. However, in some embodiments, the obtaining of the trigger message can be performed elsewhere. For example in some embodiments the wearable sensors (‘low-cost’ sensors) communicate with the camera (‘high-cost’) sensor and the trigger messages are obtained internally within the camera (‘high-cost’) sensor and used to configure the camera (‘high-cost’) sensor.

Furthermore in some embodiments there may be a management or control system apparatus which is configured to receive the sensor information from the (wearable) sensors; from this sensor information (or at least a filtered or processed form suitable for communication over a wireless communication link) the user context parameters and the trigger messages are generated and passed to the (camera) configurable sensor.

For example Figure 3 shows schematically an example system of wearable sensors, camera sensors and a management or control system apparatus (which is physically separate from camera or earbud sensors) and suitable for implementing the dynamic control of the camera sensor according to some embodiments. It is understood that in some embodiments the management or control system apparatus functions can be implemented in either of, or distributed between, the wearable sensors and the camera sensor.

The management system apparatus 201 can in some embodiments comprise an application broker 219. The application broker 219 can be configured to register the rules of the camera operation. The rules in some embodiments can comprise two parts.

The first part of the rule is a user context of interest and the second part of the rule is a corresponding camera configuration or parameter configurations. The user context of interest can in some embodiments be considered to be a context parameter. The user context of interest or context parameter in some embodiments can be specific with respect to a user or a defined group of users (or sensors) or may be generally applicable to all users (or sensors). The context parameter may be defined by one or more sensor signal values. Furthermore the camera configuration may affect one or more camera parameters.

For example a rule may be to highlight a user “Tom” when they look tired, which triggers the camera to locate the user “Tom” within a field of view by changing the angle configuration parameter. In this example the context parameter is the determination of “Tom” looking tired, which can be defined in terms of sensor data values representing a lack of motion of “Tom” or head motion sensor values indicating tiredness. The parameter configurations in this example would furthermore be changing the angle configuration parameter to bring “Tom” within the field of view. A further example could be to lower the frame rate per second when it is determined that nothing is moving within a defined area.
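
A minimal sketch of such a two-part rule is given below; the Rule structure, the tiredness test and the configuration payload are hypothetical illustrations rather than the claimed implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    user: Optional[str]                          # None = applies to all users
    context_of_interest: Callable[[dict], bool]  # test over sensor values
    camera_config: dict                          # parameter configuration

# "Highlight Tom when he looks tired": tiredness is modelled here, purely
# for illustration, as low head motion reported by the wearable sensor.
tom_tired = Rule(
    user="Tom",
    context_of_interest=lambda s: s.get("head_motion", 1.0) < 0.1,
    camera_config={"angle_deg": "track:Tom"},   # re-aim to bring Tom in view
)

sensor_values = {"user": "Tom", "head_motion": 0.05}
if (tom_tired.user in (None, sensor_values["user"])
        and tom_tired.context_of_interest(sensor_values)):
    print("apply", tom_tired.camera_config)
```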

In some embodiments the management system apparatus 201 can comprise a rule translator 217. The rule translator 217 can be configured to separate the rule into suitable user contexts of interest and camera configurations.

In some embodiments the management system apparatus 201 can comprise a sensor interface 211. The sensor interface 211 can be configured to operate as an interface between the management system apparatus and the wearable sensors. In some examples the sensor interface is known as an earbud broker configured to operate as a broker between the earbud sensor and the management system apparatus. For example where the user contexts are obtained within the wearable sensors, the sensor interface 211 may be configured to receive the user contexts of interest from the rule translator 217 and pass them to the earbud sensors for the earbud sensors to monitor. Furthermore at runtime, when the event or user context of interest is detected by the earbud, the sensor interface 211 may be configured to receive an indicator that the context of interest has been sensed.

In some embodiments the management system apparatus 201 can comprise a configuration scheduler 215 configured to receive, via the sensor interface 211, any indicators from the earbud sensors that user contexts of interest have been detected. The indications from multiple, and potentially all, of the earbud sensors may be aggregated and then used, via the rule translator 217 and/or the application broker 219, to generate suitable camera configuration controls (the trigger messages).

The management system apparatus 201 can comprise a configurable sensor interface 213 configured to control the sending of the configuration control results to the camera 13. In some examples the configurable sensor interface 213 is known as a camera broker configured to operate as a broker between the camera and management system apparatus. The configuration control results can then dynamically change the camera configuration.
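
The interaction between the sensor interface 211, the configuration scheduler 215 and the configurable sensor interface 213 might be sketched as follows; the class and method names are invented for illustration and are not part of the disclosure.

```python
class ConfigurationScheduler:
    """Toy stand-in for the configuration scheduler 215."""

    def __init__(self, rules, camera_broker):
        self.rules = rules              # from the rule translator 217
        self.camera_broker = camera_broker
        self.pending = []               # indicators from earbud sensors

    def on_indicator(self, indicator):
        # Called via the sensor interface 211 when an earbud reports that
        # a context of interest has been detected.
        self.pending.append(indicator)

    def flush(self):
        # Aggregate indicators from (potentially all) earbuds and emit the
        # matching configuration controls through the camera broker 213.
        for indicator in self.pending:
            for rule in self.rules:
                if rule["context"] == indicator["context"]:
                    self.camera_broker.send(rule["camera_config"])
        self.pending.clear()

class PrintBroker:
    """Stand-in for the configurable sensor interface 213."""
    def send(self, config):
        print("camera <-", config)

scheduler = ConfigurationScheduler(
    rules=[{"context": "drowsy", "camera_config": {"zoom": 2.0}}],
    camera_broker=PrintBroker(),
)
scheduler.on_indicator({"context": "drowsy", "user": "Carl"})
scheduler.flush()   # -> camera <- {'zoom': 2.0}
```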

With respect to the earbud or wearable sensor 1, the earbud can comprise a sensor broker 231. The sensor broker 231 is configured as an interface with the sensor elements to activate suitable sensor modalities, receive the sensor streams and forward the sensed data to a sensing task processor 233.

The earbud can furthermore comprise a system interface 235. The system interface 235 (or system broker as it may also be known in some embodiments) can be configured as the interface with the management system apparatus 201 and configured to receive the user contexts of interest to monitor and pass these to the sensing task processor 233.

Additionally the earbud can comprise a sensing task processor 233. The sensing task processor 233 is configured to receive the sensor data and the user contexts of interest and monitor whether the sensor data matches any of the user contexts of interest. The sensing task processor may implement this in any suitable manner and may comprise sensing modalities, feature extraction and a classifier.

Furthermore when the sensing task processor 233 determines a context of interest from the sensor data, this may be indicated to the system interface 235 which communicates this to the management system apparatus. In some embodiments the functionality of the earbud sensors, for example the sensing task processor, for at least some of the sensors may be offloaded to the management system apparatus (or to the camera where the functionality of the management system apparatus is implemented in the camera).
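
A toy version of the sensing task processor 233 pipeline (sensing modalities, feature extraction, classifier) might look as follows; the features, the classifier threshold and the labels are placeholder assumptions for illustration only.

```python
import statistics

def extract_features(samples):
    # Toy features over a window of inertial-magnitude samples.
    return {"mean": statistics.fmean(samples),
            "stdev": statistics.pstdev(samples)}

def classify(features):
    # Placeholder classifier: near-zero variance is read as "drowsy".
    return "drowsy" if features["stdev"] < 0.05 else "active"

def sensing_task(samples, contexts_of_interest):
    """Toy sensing task processor 233: indicate a label only when it is
    among the contexts of interest received over the system interface."""
    label = classify(extract_features(samples))
    return label if label in contexts_of_interest else None

window = [0.98, 1.0, 1.01, 0.99, 1.0]    # simulated accelerometer magnitudes
print(sensing_task(window, {"drowsy"}))  # -> drowsy
```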

With respect to the camera 13 the camera can comprise a system interface 207. The system interface 207 (or system broker as it may also be known in some embodiments) is configured as an interface with the management system apparatus and can be configured to receive a list of configurations from the management system apparatus.

The camera 13 can furthermore comprise a camera configurator 209. The camera configurator can be configured to dynamically change the camera configuration based on a received message from the system broker 207.

Additionally the camera 13 can comprise a recording management 210. The recording management 210 is configured to manage the recording process.

With respect to Figures 4 to 6 there is shown a series of practical implementation examples of the apparatus as shown in Figures 1 and 3.

Figure 4 for example shows a system within a factory, industrial or commercial area 51 configured to monitor the health and safety of the workers or users within the area 51. This may for example be particularly important in areas where employees or users use potentially lethal machines or substances and a prompt response to any accident is crucial.

In this example there are shown a series of users, for example users ‘Annie’ 301, ‘Bob’ 303, ‘Carl’ 305, ‘Doug’ 307, and ‘Eric’ 309, within the area and equipped with a wearable sensor. Additionally within the area 51 there are located a number of cameras 311, 313, 317, 319. These cameras can be configured to dynamically operate based on the trigger messages or signals received from the wearable sensors. Thus for example as shown in Figure 4 the camera 313 has a first field of view 314 and can view user ‘Bob’ 303 but not ‘Carl’ 305. However a trigger event generated by the wearable sensor on ‘Carl’ 305 causes the camera 313 to change its field of view, shown by arrow 315, such that the camera 313 can now capture an image in which ‘Carl’ 305 is visible and can be monitored by the camera.

In such a manner the system shows a way to unobtrusively monitor the workers and be able to trigger a re-configuration of the cameras in the factory to focus on a specific subject or on more than one subject simultaneously. In this context, the configuration rules can firstly be specified in the application broker 219. Example rules may be specific to a user, as discussed above, for example ‘focus on Carl when Carl is working with the power hammer and is showing drowsiness’. Some of the rules could be generic, for example ‘zoom in on anyone who looks dizzy’.

The wearable sensor can in some embodiments be an earbud sensor used to detect the exact user context. For example the earbud can comprise an inertial sensor for motion sensing and a microphone for audio sensing. Using this sensor data the earbud can be configured to detect a power hammer by analysing the audio signals (for example through machine learning models). Furthermore the motion sensing can be used to detect drowsiness or dizziness by sensing the motion of a user’s head.

The rules can be created with any combination of the contexts detected by the earbuds and the configurations allowed by the cameras.

In some embodiments rules can be chained or combined with one another using suitable logical operators (in other words, using operators such as AND, OR, etc.).
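
Such chaining might be expressed, purely as a sketch, with simple combinators as below; the condition functions are invented examples over earbud-derived sensor values.

```python
def AND(*conds):
    return lambda s: all(c(s) for c in conds)

def OR(*conds):
    return lambda s: any(c(s) for c in conds)

# Invented example conditions over earbud-derived sensor values.
using_hammer = lambda s: s.get("audio_label") == "power_hammer"
drowsy = lambda s: s.get("head_motion", 1.0) < 0.1
dizzy = lambda s: s.get("gait") == "unsteady"

# "Focus when working with the power hammer AND showing drowsiness,
# OR when anyone looks dizzy."
trigger = OR(AND(using_hammer, drowsy), dizzy)
print(trigger({"audio_label": "power_hammer", "head_motion": 0.02}))  # True
```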

Furthermore the location of the user, necessary to move the cameras in the appropriate direction, could be obtained by triangulating the radio beacons emitted by the earbud using static receivers in the area.
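
One possible realisation of this localisation step is a least-squares trilateration from distances estimated at the static receivers, as sketched below; the receiver positions and distance values are made-up examples, and in practice the distances would be estimated from the beacon signal strength.

```python
import numpy as np

def trilaterate(receivers, distances):
    """Solve ||p - r_i|| = d_i for p = (x, y) by linearising each equation
    against the last receiver and solving the resulting linear system in a
    least-squares sense."""
    (x_n, y_n), d_n = receivers[-1], distances[-1]
    A, b = [], []
    for (x_i, y_i), d_i in zip(receivers[:-1], distances[:-1]):
        A.append([2 * (x_n - x_i), 2 * (y_n - y_i)])
        b.append(d_i**2 - d_n**2 - x_i**2 + x_n**2 - y_i**2 + y_n**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

receivers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # static receivers
distances = [5.0, 8.06, 6.71]    # estimated from beacon signal strength
print(trilaterate(receivers, distances))             # approx. [3. 4.]
```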

In such a manner the embodiments may make it possible to reduce the number of cameras deployed on a factory floor or area, given that the available ones can be dynamically re-purposed to cover different areas. There may also be improvements in the form of potential savings in network bandwidth and power consumption, since the cameras can stay in an idle mode, without streaming any data, until an event is detected by the earbuds.

With respect to Figure 5 there is shown a second example of a practical implementation within which the apparatus shown in Figures 1 and 3 may be implemented. The example shows the interior of an autonomous vehicle 401. The autonomous vehicle 401 comprises a camera 470 configured to monitor the interior of the vehicle. The camera 470 is wirelessly coupled to a management system 460.

The autonomous vehicle 401 furthermore comprises the management system 460 which is configured to control the camera 470 and may also control the autonomous vehicle 401.

Within the autonomous vehicle 401 can be located passengers or users. In the example shown in Figure 5 there are five users, where each of the users is equipped with a wearable or personal sensor (for example an earbud) which is wirelessly coupled to the management system 460 (for example via links 443 and 453). Thus for example there may be a first user 410 with a first sensor 412, a second user 420 with a second sensor 422, a third user 430 with a third sensor 432, a fourth user 440 with a fourth sensor 442 and a fifth user 450 with a fifth sensor 452.

The camera 470 can be dynamically configured with a field of view to monitor the users. These images may then be used by the management system 460 to determine whether or not any of the users are experiencing stress and control the vehicle accordingly.

Thus for example as shown in Figure 5 the camera 470 may initially be focusing with a first field of view 471 to view the fourth user 440. However the fifth user 450 with the fifth sensor 452 could be experiencing signs of anxiety which are detected by the sensor 452 (for example an elevated heart rate causing a user context parameter of interest to be detected). This is communicated via link 453 to the management system 460, which identifies the camera configuration required based on the stored rules to enable a reconfiguration of the camera to view the fifth user, as shown by the field of view 473. The camera can then be used to analyse the image containing the fifth user and determine whether or not the image contains any signs of anxiety of the user, and to change the driving profile of the autonomous vehicle accordingly. In such a manner the facial expressions of the users within the vehicle can provide feedback to the vehicle AI system for self-adaptation. For example, if a passenger is looking anxious about the vehicle’s AI-assisted driving, the vehicle can give explanations about its decision-making process. Alternatively, it can play soothing music to calm the user or perform another user-calming function.

In addition to the above examples of providing critical real-time feedback, facial expression detection can also be used to personalize the in-vehicle user experience (e.g., to select a music playlist based on a user’s current emotion).

In some embodiments the earbud or wearable sensor worn by the users can be configured to detect coarse-level facial expressions, either using inertial sensors based on the movement of the head or jaw, or using the microphone. Once a coarse-grained expression is detected by the earbud of a specific user, this triggers a reconfiguration of the vehicle camera; in other words the camera can be rotated, zoomed and/or focused to place an emphasis or concentrate on that specific user so that a fine-grained facial expression detection operation can be performed.
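
This two-stage, coarse-then-fine arrangement might be sketched as follows; both detectors are placeholder stand-ins for the earbud heuristics and the camera-based model described above.

```python
def coarse_expression(imu_window):
    # Always-on earbud heuristic, e.g. over jaw/head movement amplitude.
    return "tense" if max(imu_window) - min(imu_window) > 0.3 else "neutral"

def fine_expression(frame):
    # Stand-in for a camera-based facial-expression model, invoked only
    # after the camera has been rotated/zoomed onto the specific user.
    return {"anxiety": 0.8}

def pipeline(imu_window, capture_frame):
    if coarse_expression(imu_window) == "tense":
        frame = capture_frame()    # expensive camera used sparingly
        return fine_expression(frame)
    return None                    # camera stays idle

print(pipeline([0.1, 0.6, 0.2], capture_frame=lambda: "frame-bytes"))
```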

A benefit of this approach is that, as inertial sensors are much cheaper than a camera from an energy perspective, they are able to continuously analyse and detect ‘coarse-grained’ facial expressions or anxiety levels of all the passengers in the vehicle. As a result, the energy- and bandwidth-hungry camera is used sparingly.

With respect to Figure 6 a third example is shown whereby the working environment 501 comprises a smart factory production line 540 with a series of machines 510, 520, 530 and their operators ‘Fred’ 550 and ‘Greg’ 560. The operators can be equipped with suitable wearable or personal sensors, for example sensor 551 (associated with ‘Fred’ 550) and sensor 561 (associated with ‘Greg’ 560).

Within the environment there furthermore may be located one or more camera sensors such as shown by the camera sensors 570 and 580. The camera sensors may be dynamically configurable.

Furthermore there may be a management system apparatus 580 in communication with the wearable sensors and the camera.

The example shown in Figure 6 shows a machine 520 about to malfunction (for example generating an abnormal noise 521). The noise as a user context of interest can be detected by one or more of the wearable sensors 551, 561. When an industrial machine is operating it emits specific sounds which are different from the sounds emitted when the operation is not nominal or is about to fail. Using audio-based machine learning models running on the earbuds, an event could be determined when the machine is not producing the nominal sounds. This event can be indicated to the management system 580 and cause a reconfiguration of the camera 570 to view the machine 520, as shown by the dashed line 571, when needed. The images captured by the camera 570 can then be used to determine the potential fault with the machine and therefore may additionally cause a suitable industrial control message to be generated stopping the process, diverting the process to a working machine or alerting users to the potential fault.
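
One way the nominal-sound check might be realised is to compare the spectral signature of the current audio window against a stored nominal profile, as in the sketch below; the signature, threshold and signals are invented for illustration and are not the disclosed models.

```python
import numpy as np

def spectral_signature(audio, n_bands=8):
    """Normalised band energies of the audio window."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, n_bands)
    sig = np.array([band.mean() for band in bands])
    return sig / (sig.sum() + 1e-12)

def is_anomalous(audio, nominal_sig, threshold=0.2):
    """Flag the machine when its sound deviates from the nominal profile."""
    return np.abs(spectral_signature(audio) - nominal_sig).sum() > threshold

tone = np.sin(np.linspace(0.0, 100.0, 4000))     # stand-in nominal sound
nominal = spectral_signature(tone)
rng = np.random.default_rng(0)
faulty = tone + rng.normal(0.0, 1.0, 4000)       # broadband rattle added
print(is_anomalous(faulty, nominal))             # True: abnormal noise 521
```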

In such a manner the microphone on the earbud sensor could be used to infer the status of machines around a user, and cause the cameras to aim in that direction to have a better understanding of what is happening. For example, if a machine malfunction is detected through its audio signature, the system can be configured to focus multiple cameras on this machine, so that a human observer can have a detailed look to determine the potential or actual problem.

In some embodiments the contexts detectable by the earbuds can also include circumstances related to processes happening around the user wearing the earbuds, even where the user is not directly involved (in other words, being able to detect faults or potential faults in machines other than the one the user is operating or overseeing).

As such these embodiments allow the monitoring of subtle machine operations which could be very difficult for humans to monitor, in a non-invasive way, by simply leveraging the sound emitted from the machine.

With respect to Figure 7, an example electronic device which may be used as one of the sensors or as the management system apparatus is shown. The device may be any suitable electronics device or apparatus.

In some embodiments the device 1400 comprises at least one processor or central processing unit 1407. The processor 1407 can be configured to execute various program codes such as the methods such as described herein.

In some embodiments the device 1400 comprises a memory 1411. In some embodiments the at least one processor 1407 is coupled to the memory 1411. The memory 1411 can be any suitable storage means. In some embodiments the memory 1411 comprises a program code section for storing program codes implementable upon the processor 1407. Furthermore in some embodiments the memory 1411 can further comprise a stored data section for storing data, for example data that has been processed or is to be processed in accordance with the embodiments as described herein. The implemented program code stored within the program code section and the data stored within the stored data section can be retrieved by the processor 1407 whenever needed via the memory-processor coupling.

In some embodiments the device 1400 comprises a user interface 1405. The user interface 1405 can be coupled in some embodiments to the processor 1407. In some embodiments the processor 1407 can control the operation of the user interface 1405 and receive inputs from the user interface 1405. In some embodiments the user interface 1405 can enable a user to input commands to the device 1400, for example via a keypad/touch interface. In some embodiments the user interface 1405 can enable the user to obtain information from the device 1400. For example the user interface 1405 may comprise a display configured to display information from the device 1400 to the user. The user interface 1405 can in some embodiments comprise a touch screen or touch interface capable of both enabling information to be entered to the device 1400 and further displaying information to the user of the device 1400.

In some embodiments the device 1400 comprises an input/output port 1409. The input/output port 1409 in some embodiments comprises a transceiver. The transceiver in such embodiments can be coupled to the processor 1407 and configured to enable a communication with other apparatus or electronic devices, for example via a wireless communications network. The transceiver or any suitable transceiver or transmitter and/or receiver means can in some embodiments be configured to communicate with other electronic devices or apparatus via a wire or wired coupling.

The transceiver can communicate with further apparatus by any suitable known communications protocol. For example in some embodiments the transceiver can use a suitable universal mobile telecommunications system (UMTS) protocol, a wireless local area network (WLAN) protocol such as for example IEEE 802.X, a suitable short-range radio frequency communication protocol such as Bluetooth, or an infrared data communication pathway (IRDA).

The transceiver input/output port 1409 may be configured to receive the signals and in some embodiments determine the parameters as described herein by using the processor 1407 executing suitable code.

These apparatus and mechanisms as discussed above may have the advantage of being able to use a first set of sensors (which may be ‘low-cost’) in dynamically adapting the operating parameters of configurable sensors (which may be ‘high-cost’) operating within industrial systems, for example causing dynamic adaptation within camera-based industrial systems in order to save resources such as network bandwidth and energy consumption. Additionally some embodiments can be employed to increase coverage or effectiveness of the system, for example extending scene coverage, and/or improve the data captured by the system.

Some embodiments may have advantages over sensor systems in that they do not require the deployment of large numbers of configurable sensors, such as image camera-based systems, and are therefore cheaper and more efficient to implement and reduce the amount of resources required to operate the system.

Additionally some embodiments are able to flexibly and dynamically configure the system (such as, for a camera: field of view, focus, zoom, resolution). This directly improves the accuracy and range of data provided by the configurable sensor and therefore may improve the effectiveness of any control system based on the output of the configurable sensor. For example in video processing applications where the camera is within a self-driving car with multiple passengers, the camera can be configured to decide which passenger to focus on, as it may not be possible to focus on all of the passengers at the same time.

In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.

The embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD.

The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.

Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.

Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.

The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention as defined in the appended claims.