


Title:
VERIFYING THE IDENTITY OF A SENSOR AND/OR DEVICE
Document Type and Number:
WIPO Patent Application WO/2023/203312
Kind Code:
A1
Abstract:
There is disclosed a method for identifying a sensor. The method comprises: obtaining calibration data from the sensor during a self-calibration procedure of the sensor (e.g. at start-up or during run-time); and verifying the identity of the sensor based on the obtained calibration data and previously stored data corresponding to previously obtained calibration data. The method may further comprise: quantising the calibration data; and generating a bit sequence based on the quantised data. Quantising the calibration data may comprise generating binary values representing respective samples of the calibration data, and generating the bit sequence may comprise concatenating the binary values. Quantising the calibration data may comprise: dividing the calibration data into N portions; and quantising each of the N portions of calibration data independently.

Inventors:
MEHRNEZHAD MARYAM (GB)
GRAY DANTE (GB)
Application Number:
PCT/GB2023/050949
Publication Date:
October 26, 2023
Filing Date:
April 11, 2023
Assignee:
UNIV NEWCASTLE (GB)
International Classes:
H04W12/06; H04W12/60; G01C25/00
Foreign References:
US20150347607A12015-12-03
CN207731140U2018-08-14
US20200333435A12020-10-22
Attorney, Agent or Firm:
HGF LIMITED (GB)
Claims:
1. A method for identifying a sensor, the method comprising: obtaining calibration data from the sensor during a self-calibration procedure of the sensor (e.g. at start-up or during run-time); and verifying the identity of the sensor based on the obtained calibration data and previously stored data corresponding to previously obtained calibration data.

2. A method according to claim 1, wherein the method further comprises: quantising the calibration data; and generating a bit sequence (e.g. sample fingerprint) based on the quantised data.

3. A method according to claim 2, wherein quantising the calibration data comprises generating binary values representing respective samples of the calibration data, and wherein generating the bit sequence comprises concatenating the binary values or downsampled binary values.

4. A method according to claim 2 or 3, wherein quantising the calibration data comprises: dividing the calibration data into N portions; and quantising each of the N portions of calibration data independently.

5. A method according to claim 4, wherein the quantisation intervals for each portion of calibration data are set based on: a minimum value of the quantisation data within each portion; a maximum value of the quantisation data within each portion; and a number of quantisation levels.

6. A method according to claim 5, wherein the number of quantisation levels is the same for each portion of the calibration data.

7. A method according to any preceding claim, further comprising pre-processing the calibration data.

8. A method according to claim 7, wherein pre-processing comprises one or more of: combining two or more separate components of the calibration data (e.g. x, y, z components); filtering the calibration data (e.g. to remove noise and/or spikes); calculating a function (e.g. first derivative) of the calibration data.

9. A method according to any preceding claim, wherein verifying the identity of the sensor comprises verifying whether the difference between the obtained calibration data and the previously obtained calibration data is less than a certain threshold.

10. A method according to any preceding claim, wherein the sensor is a motion sensor.

11. An apparatus for identifying a sensor, the apparatus being configured to: obtain calibration data from the sensor during a self-calibration procedure of the sensor (e.g. at start-up or during run-time); and verify the identity of the sensor based on the obtained calibration data and previously stored data corresponding to previously obtained calibration data.

12. A computer program comprising instructions which, when the program is executed by a computer or processor, cause the computer or processor to carry out a method according to any of claims 1 to 10.

13. A computer or processor-readable data carrier having stored thereon a computer program according to claim 12.

Description:
Verifying the Identity of a Sensor and/or Device

BACKGROUND

Field

Certain examples of the present disclosure provide one or more techniques for identifying a sensor and/or a device including a sensor. Certain examples of the present disclosure provide one or more techniques for verifying the identity of a sensor and/or a device including a sensor. In certain examples, the sensor or device may be an Internet of Things (IoT) sensor or device.

Description of the Related Art

The sensor industry has been booming over the last decade with the rapid growth of IoT devices and applications. According to the Business Insider 2020 IoT report, it is projected that from 2019 to 2027 the number of IoT devices will increase from 8 billion to 41 billion. Modern technologies are rapidly adopting various forms of sensors, including biometric (e.g., fingerprint, iris), communicational (e.g., Bluetooth, NFC), motion (e.g., accelerometer, gyroscope), and ambient (e.g., temperature, air) sensors. While these sensors may enrich aspects of our lives, they come with their own challenges. Challenges include security and privacy attacks, such as forgery of sensor streams, and safety issues, such as automated decision making based on impaired data from a broken sensor. IoT and mobile devices share similarities in the types of sensors used and in the MEMS technology behind them, making existing mobile-based sensor attacks transferable to IoT platforms.

Many sensors found in modern IoT devices are underpinned by MEMS technology, which uses microfabrication to emulate the mechanical parts found in their optical counterparts. The primary advantage of using MEMS-based sensors over their optical counterparts is the reduction of cost, allowing them to be used in IoT devices whilst retaining a low price. The reduction of cost is a trade-off with accuracy, though the accuracy remains sufficient for the use cases of this technology. The reduction in accuracy can be partially owed to errors introduced during the manufacturing process of these components, as well as the technology being susceptible to environmental factors such as temperature, acoustic noise, and ageing. Internal errors can be broken down into deterministic errors, such as bias, scaling factor and misalignment errors, and non-deterministic errors, such as fluctuations in system response, scale factor and bias drifts. The deterministic errors are usually addressed by factory calibration. To further guarantee the accuracy of readings, certain sensors undergo an additional self-calibration process at start-up. This process compensates for any physical changes that may have occurred since manufacturing that would affect the accuracy of readings. This runtime calibration process produces a sequence of sensor data.

Regulation of sensors in connected environments, where decision making is fully or partially based on their data, is vital. Such environments are open to various threats such as malfunctioning nodes sending erroneous data, and malicious nodes masquerading as legitimate nodes. The latter can lead to passive attacks such as eavesdropping, allowing an attacker to infer information about, or the habits of, a sensor-enabled environment (e.g. wearables and smart homes) that the user would otherwise not want to be known. It can also lead to active attacks such as the Greedy Behaviour Attack, a form of DoS attack, where a malicious node falsifies its Carrier-Sense Multiple-Access with Collision Avoidance (CSMA-CA) parameters to increase its chance of permanently accessing the transmission channel.
Sensing technologies are not regulated consistently across various sectors. More specifically, access to sensor data is open to developers. This becomes a more serious security concern since sensor-enabled IoT devices might not have any input methods (e.g., screen, keyboard, etc.) that can be utilised for security purposes such as pairing and authentication. Consequences of having compromised and/or broken sensors, and the necessity for effective protection mechanisms (e.g., identification/authentication), can be observed in Industrial IoT (IIoT), where the data received from sensors results in physical real-world actions. As such, it is vital to ensure that the data received is from legitimate nodes, and is accurate. False Data Injection Attacks, arising from the manipulation of sensor data, have been responsible for catastrophes such as the 2008 oil explosion in Turkey and the 2015 blackout affecting 225k customers in Ukraine.

Sensors on mobile devices (phones and tablets) are increasing in number and variety, as well as spreading to other technologies such as wearables, smart homes, and other IoT infrastructures. Reportedly, there are more than 30 sensors on off-the-shelf mobile phones. These sensors fall under different categories: biometric, communicational, motion, and ambient sensors. While the first two categories are generally better protected and considered as OS resources, the latter ones are mostly left without any safeguarding measures. IoT devices and environments are less standardised and coherent than other platforms such as PC and mobile. This is despite their growing use of various forms of sensors. App, website, and IoT developers can access motion and ambient sensors on smart devices without any permission from the users or even notifying them. Only some forms of these sensors in combination with other sensors require user permission, e.g. 'Physical activity' on Android, which reports activities such as walking, biking, driving, step count, etc.

Apart from mobile app and web programming, another way of programming sensors is via IoT devices. Various companies have been offering sensor solutions to developers. Examples include Bosch XDK and Nordic Thingy. In addition, discovering various sensor-enabled IoT devices and accessing their data values are possible via IoT search engines such as shodan.io and thingful.net. Most of these IoT sensors can be accessed and managed on the user's mobile phone either via an app or within a browser. Hence, the entire space needs to be regulated to mitigate risks. Although access to ambient sensors is possible across various platforms, their definitions, categorisation, and the technical details of sensor access (e.g. when a sensor value changes, or at particular frequencies) vary across platforms and are not standardised. This creates more complexity when it comes to managing their privacy and security. Not having standard practices for accessing sensors enables various opportunities including side-channel attacks, tracking, and fingerprinting. Sensor fingerprinting is the process of identifying a sensor's unique intrinsic or behavioural properties for the purpose of identification, and can directly contribute towards device fingerprinting. Whilst device fingerprinting can bring a lot of benefits, such as increasing security and providing an enhanced user experience, there remains a potential for security and privacy risks such as inferring user behaviour, which is particularly invasive in certain environments, e.g. a smart home environment.
Creating an effective generic fingerprinting mechanism to work across multiple devices within platforms such as PC and mobile is relatively easy due to the consistency across these devices, such as the browsers and operating systems used. This is not so in the IoT world, with each IoT device operating in its own way with its own assortment of sensors. There are several challenges that make IoT devices vulnerable to various forms of attacks, including being lightweight and not having any input devices (e.g. touch screen or monitor) connected to them.

The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.

SUMMARY

It is an aim of certain examples of the present disclosure to address, solve, mitigate or obviate, at least partly, at least one of the problems and/or disadvantages associated with the related art, for example at least one of the problems and/or disadvantages mentioned herein. Certain examples of the present disclosure aim to provide at least one advantage over the related art, for example at least one of the advantages mentioned herein.

The present invention is defined in the independent claims. Advantageous features are defined in the dependent claims. Embodiments, aspects or examples disclosed in the description and/or figures falling outside the scope of the claims are to be understood as examples useful for understanding the present invention.

Other aspects, advantages, and salient features of the present disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the accompanying drawings, discloses examples of the present disclosure.

BRIEF DESCRIPTION OF THE FIGURES

Figure 1a illustrates an exemplary sequence of sensor data produced during a runtime calibration process (in the exemplary case that the sensor is an MPU-6050 gyroscope);
Figure 1b illustrates the sequence of sensor data of Figure 1a following pre-processing of the data;
Figure 2 illustrates a sequence diagram of an exemplary technique for registering a fingerprint;
Figure 3 illustrates a sequence diagram of an exemplary technique for verifying the identity of a sensor or device using a registered fingerprint;
Figure 4 illustrates an exemplary technique for quantising data;
Figure 5 is a block diagram of an exemplary apparatus for registering a fingerprint and/or verifying the identity of a sensor using a fingerprint; and
Figure 6 illustrates an exemplary environment in which examples of the present disclosure may be implemented.

DETAILED DESCRIPTION

The following description of examples of the present disclosure, with reference to the accompanying drawings, is provided to assist in a comprehensive understanding of the present invention, as defined by the claims. The description includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the examples described herein can be made. Certain examples of the present disclosure provide one or more techniques for identifying a sensor and/or a device including one or more sensors.
Certain examples of the present disclosure provide one or more techniques for verifying the identity of a sensor and/or a device including one or more sensors. In certain examples, the identity of a device may be considered to be verified by verifying the identity of one or more sensors included in the device. In the present disclosure, when referring to identification, verification, fingerprinting, and the like, the term “device” may be used to refer to either a sensor or a device including one or more sensors. In the former case, the terms “device” and “sensor” herein may be used interchangeably. In the latter case, a distinction may be made between “sensor” and “device”. In certain non-limiting examples, a “sensor” may be regarded as a piece of hardware equipment with no or relatively limited computational capabilities. A sensor may reside in a device with relatively limited or relatively powerful computational capabilities. A device may comprise one sensor or multiple sensors. The skilled person will appreciate that the present disclosure encompasses all of these interpretations. The identity of a sensor and/or a device comprising a sensor may be verified based on sensor calibration data (i.e. the output of the sensor during a calibration procedure). Information derived from sensor calibration data may be stored, possibly along with other data, and the stored data may be used to subsequently verify the identity of the sensor and/or device. Such information may be referred to as a “fingerprint” of the device and/or the sensor of the device. In certain examples, the sensor or device may be an Internet of Things (IoT) sensor or device. However, the present disclosure is not limited to this example. For example, the techniques described herein may be applied to any suitable type of sensor or device, for example any suitable type of sensor or device that performs a calibration procedure that generates unique calibration data as a result. Various examples of the present disclosure take advantage of the plentiful resource of one or more sensors, for example in IoT environments, to fingerprint them (and the device(s) they reside in) using the unique characteristics of the sensors’ output alone. Such fingerprints may provide means, or extra means, of authentication, identification and/or management of sensors (detecting malfunctioning sensors for example). Certain examples of the present disclosure may verify the identity of individual sensors (and as a result the authenticity and integrity of the data) using only sensor calibration data (e.g. motion sensor calibration data), for example without the explicit usage of key material. As noted above, the techniques described herein may be applied to any suitable type of sensor and/or device. Also, in various examples, the techniques described herein may be capable of performing identification at the individual sensor level (e.g. to distinguish one sensor from another, even between sensors of the same model, provided by the same vendor and/or manufactured by the same manufacturer), at the sensor model level (e.g. to distinguish between sensors having different models, even between sensors of the same type, for example gyroscope), and/or at the sensor vendor/manufacturer level (e.g. to distinguish between sensors provided by different vendors and/or manufactured by different manufacturers). For example, the techniques described herein may be applied to a sensor that is part of an IoT device found in an IoT environment (e.g. 
a smart home, building, or factory). The sensor/device may be placed in a fixed location and left to routinely perform its tasks without any physical interference. For example, in the context of IIoT, such devices could include those utilising: temperature sensors (to ensure food does not go above a set temperature), gas sensors (to detect when unwanted gas leaks into an area, jeopardising the safety of the crew), and proximity sensors (to alert crew operating machinery that they are approaching a wall or other structure).

In certain examples, the sensor may be an MPU-6050 sensor, which is a 6-axis motion tracking device (sensor) designed for the low power, low cost, and high performance requirements of smartphones, tablets, wearables, and other IoT sensors. The MPU-6xxx series has been featured in Apple's iPhone, BlurFree technology for video and image stabilisation, and AirSign technology. A 6-axis motion tracking sensor may be capable of measuring three degrees of flat (translational) movement for each axis, and three degrees of rotational movement for each axis (Yaw, Pitch, Roll). The skilled person will appreciate that the present disclosure is not limited to the particular models or types of sensor mentioned above.

As will be described in further detail below, the use of sensor calibration data to identify a sensor involves two processes. First, a sensor fingerprint may be generated and registered. Second, the registered fingerprint may be used to verify the identity of the sensor. Figure 2 illustrates a sequence diagram of an exemplary technique for registering a fingerprint. Figure 3 illustrates a sequence diagram of an exemplary technique for verifying the identity of a sensor or device using a registered fingerprint. Each of these processes involves various operations including sensor calibration data collection, pre-processing and quantisation. Examples of these operations will now be described.

Sensor Calibration Data Collection

In each process (registration and verification), sensor calibration data is collected. An exemplary technique for data collection will now be described. Sensor calibration data may refer to the output of a particular sensor whilst it undergoes a calibration procedure, for example a self-calibration process at runtime. The sensor output may comprise a set of values (e.g. numerical values) corresponding to samples measured by the sensor. The sensor may measure any suitable quantity (e.g. temperature, humidity, motion, etc.). The output of the sensor may comprise samples taken over any suitable period of time (e.g. the time required to complete the calibration procedure). The output of the sensor may be provided in any suitable format and/or be encoded in any suitable way. The sensor may be configured to apply any suitable sampling period (i.e. time difference between two consecutive samples) or sampling rate (i.e. number of samples per second). For example, the sampling rate may be around 30-35 samples per second. The skilled person will appreciate that the present disclosure is not limited to this example. In addition, for the purpose of generating a fingerprint, any suitable number of samples/datapoints may be collected and/or samples/datapoints may be collected over any suitable time period. For example, 1000 samples may be collected for generating a fingerprint and/or samples collected over a time period of 10 to 20 seconds may be used for generating a fingerprint. The skilled person will appreciate that the present disclosure is not limited to these examples.
The number of datapoints may be selected as the number of sensor outputs required for the calibration process to finish. In certain examples, results may vary greatly depending on the number of datapoints used for fingerprint generation. Using fewer than the 'optimal' number of datapoints may reduce the amount of entropy. Conversely, using more than the 'optimal' number of datapoints may introduce noise in the form of irrelevant and un-fingerprintable data (stable post-calibration sensor readings), reducing the entropy contained within a fingerprint and the performance of any security measures centred around this approach. In some examples, the most significant adjustments and highest entropy may occur in only a portion of the datapoints, for example approximately the first half of the datapoints collected during the time required to complete the calibration procedure. In certain examples, a set of datapoints having the highest entropy may be selected to form a fingerprint.

In certain examples, the sensor may be stationary during its calibration. In this case, the values output by the sensor may change rapidly and consistently across multiple calibrations. This rapid change of values is illustrated in Figure 1a, which illustrates an exemplary sequence of sensor data produced during a runtime calibration process (in the exemplary case that the sensor is an MPU-6050 gyroscope).

To acquire sensor sequences (or calibration data), the sensor's calibration process may be triggered. In some examples, this may be done automatically by the sensor each time it is powered up. To trigger the calibration process, a suitable controller may be configured to briefly cut power to the peripherals, thereby triggering the sensor's calibration process. Sensor readings may then be recorded until a suitable number of samples have been collected and/or samples from a suitable time period have been collected. The sensor output may be saved directly onto an SD card or any other suitable recording medium, for example. In certain examples, the above process may be repeated two or more times for the same sensor. As described further below, the resulting two or more sets of sensor data may be combined (e.g. by averaging) to generate a combined set of sensor data. The above process may be performed for each sensor for which a fingerprint is to be generated. In certain examples, after data collection, the raw sensor data may be converted into any suitable format for subsequent data processing. As discussed further below, this may include one or more of pre-processing, quantisation, data visualisation, and a fingerprint similarity evaluation process, for example using Hamming Distances.
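Purely as an illustration of the data collection loop described above, the following Python sketch shows one possible way the collection could be scripted. The sensor driver calls (power_cycle, read_sample), the sampling rate and the sample count are hypothetical placeholders introduced for this example, not features of any particular platform.

```python
import time

# Hypothetical parameters, loosely based on the examples above (30-35 samples
# per second, around 1000 datapoints per round); adjust for the actual sensor.
SAMPLE_RATE_HZ = 32
SAMPLES_PER_ROUND = 1000

def collect_round(sensor):
    """Trigger self-calibration, then record the sensor output for one round."""
    sensor.power_cycle()  # hypothetical driver call: briefly cut power to the peripheral
    samples = []
    for _ in range(SAMPLES_PER_ROUND):
        samples.append(sensor.read_sample())  # hypothetical driver call: one (x, y, z) reading
        time.sleep(1.0 / SAMPLE_RATE_HZ)
    return samples

def collect_rounds(sensor, n_rounds=3):
    """Collect several rounds from the same sensor, e.g. for later averaging."""
    return [collect_round(sensor) for _ in range(n_rounds)]
```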
Pre-Processing

In certain examples, to improve and ensure the quality of the collected data, the collected sensor sequences may be subjected to one or more stages of pre-processing, some examples of which are described below. In certain examples, filtering may be applied to the sensor output data to reduce or remove noise in the data. For example, such filtering may be used to remove spikes in the data. Figure 1b illustrates the sequence of sensor data of Figure 1a following pre-processing of the data in the form of filtering. Any suitable type of filtering may be used. In certain examples, the Median Filtering function of Matlab, as defined in Equation 1 below, may be used to remove noise from the data (unfilteredData), resulting in a smoother signal (FilteredData). Median Filtering is a non-linear digital filtering technique for removing noise from a signal by replacing an entry (a single datapoint) with the median value of neighbouring entries.

FilteredData = medfilt1(unfilteredData, order)   (Equation 1)

In Equation 1, the second argument "order" determines the intensity of the filtering process. The higher the order, the more noise that is removed from a given signal and the further the original signal is altered. In certain examples, orders higher than 13 may not result in a significant reduction of noise and only further distort the signal, while orders lower than 13 may result in unaddressed spikes. For this reason, an order of 13 may be used in certain examples.

In certain examples, the sensor data may comprise two or more components. For example, raw gyroscope data may be separated into three axes (x, y, and z). In certain examples, multiple components of data may be combined to obtain a single combined set of data. For example, gyroscope data separated into three axes may be combined into one axis through computing the vector length according to Equation 2 below.

gyr_i = sqrt(gyr_x,i^2 + gyr_y,i^2 + gyr_z,i^2)   (Equation 2)

In Equation 2, gyr_i denotes the vector length of the ith measurement, and gyr_x,i, gyr_y,i and gyr_z,i respectively denote the x, y and z components of the ith measurement.

In certain examples, a fingerprint may be generated based on the sensor data, which may be filtered and/or combined using the techniques described above. However, in other examples, a fingerprint may be generated based on a function of the sensor data. For example, a fingerprint may be generated based on a derivative (e.g. first derivative) of the data, or any other suitable function. For example, if the sensor data comprises measurements gyr_i, i = 1, 2, ..., n, then the ith value of the first derivative, deriv_i, may be computed using Equation 3 below.

deriv_i = gyr_(i+1) - gyr_i,   i = 1, 2, ..., n-1   (Equation 3)
The original sensor data and functions (e.g. the first derivative) of the sensor data may each have their own strengths and weaknesses when applied to the fingerprinting techniques disclosed herein, for example in relation to entropy and noise. For example, in the case of gyroscope data, computing the first derivative may address issues resulting from sensor output values varying in scale. However, in some cases, using the original sensor data may provide better results.
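As a rough illustration of the pre-processing steps above (Equations 1 to 3), the following sketch applies a median filter, combines the three axes into a vector length, and takes the first derivative. It is a minimal sketch assuming NumPy/SciPy as a stand-in for the Matlab function mentioned above; the function name and the default kernel size of 13 (mirroring the order discussed for Equation 1) are choices made for this example.

```python
import numpy as np
from scipy.signal import medfilt

def preprocess(xyz_samples, order=13):
    """Pre-process raw calibration data given as an (n, 3) array of x, y, z samples."""
    xyz = np.asarray(xyz_samples, dtype=float)
    # Equation 2: combine the three axes into a single vector-length signal.
    combined = np.sqrt((xyz ** 2).sum(axis=1))
    # Equation 1 (SciPy analogue of Matlab's medfilt1): median filter to remove spikes.
    filtered = medfilt(combined, kernel_size=order)
    # Equation 3 (optional): discrete first derivative of the filtered signal.
    derivative = np.diff(filtered)
    return filtered, derivative
```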

Quantisation

Quantisation may be used to convert the sensor output values (possibly pre-processed) into corresponding binary values. For example, having sensor data in binary form allows for the generation of sensor fingerprints, the calculation of the Hamming Distance between two signals, and the application of different evaluation frameworks on the data. Any suitable quantisation technique may be used in various examples of the present disclosure.

In a typical quantisation technique, the minimum and maximum values within a dataset are determined and this range is divided into N sub-ranges, referred to as quantisation levels or quantisation intervals. Typically, the quantisation levels are uniform in size, although they may also vary in size. Each quantisation level is assigned a different binary value. Typically, the number of quantisation levels is equal to an integer power of two, i.e. N = 2^L, and the N possible L-bit binary values are assigned to the respective N quantisation levels according to a one-to-one mapping. A sensor output value is converted into the binary value that is assigned to the quantisation level into which the sensor output value falls. Typically, larger binary values are assigned to quantisation levels corresponding to higher sensor output values. The signal resolution refers to the number of bits used to represent a single value/datapoint in the sensor output data. A higher resolution results in a more accurately quantised signal, allowing for as much entropy to be retained as possible.

As mentioned above, the typical approach to quantisation is to set the minimum and maximum height of the quantisation lines based on the minimum and maximum values of the target signal. Quantisation intervals are subsequently placed uniformly between these boundaries (though some may opt for a non-uniform placement of these boundaries). In certain examples, a modified quantisation technique may be applied in which the data sequence is divided into two or more regions and quantisation may be performed on each region separately. In this case, each region may be treated effectively as its own signal as far as quantisation is concerned. For example, for a given region, the minimum and maximum sensor values within that region are determined, this range is divided into N quantisation levels, and each quantisation level is assigned a different binary value. The sensor values within that region are quantised according to these quantisation levels. A similar process is carried out for the other regions, and the sensor values within those regions are quantised according to the quantisation levels of those regions. An example of this technique, in which 5 regions are used, is illustrated in Figure 4. However, the skilled person will appreciate that any suitable number of regions may be used. In certain examples, the data sequence regions may be the same size (i.e. each data sequence region may contain the same number of datapoints). However, in other examples, the data sequence regions may have different sizes. In certain examples, each data sequence region may be quantised at the same resolution (i.e. the same signal resolution is used for each data sequence region). However, in other examples, different signal resolutions may be used in different data sequence regions. The signal resolution(s) used for quantisation may be selected in consideration of a desired length of the fingerprint generated based on the quantised data, as discussed below.
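Before turning to fingerprint generation, the region-based quantisation just described can be illustrated with a short sketch. The use of NumPy, the function name, and the defaults of 5 regions and 4 bits per sample are assumptions made for this example, not requirements of the disclosure.

```python
import numpy as np

def quantise_by_region(signal, n_regions=5, bits_per_sample=4):
    """Quantise each region independently and return the concatenated bit sequence."""
    levels = 2 ** bits_per_sample          # N = 2^L quantisation levels per region
    bits = []
    for region in np.array_split(np.asarray(signal, dtype=float), n_regions):
        lo, hi = region.min(), region.max()
        # Scale the region using its own minimum and maximum, then map each sample
        # to one of the `levels` uniform intervals and emit its L-bit binary value.
        scaled = (region - lo) / (hi - lo) if hi > lo else np.zeros_like(region)
        indices = np.minimum((scaled * levels).astype(int), levels - 1)
        bits.append("".join(format(int(i), f"0{bits_per_sample}b") for i in indices))
    return "".join(bits)
```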
Fingerprint (Key) Generation Using Quantised Data

A fingerprint or key may be generated based on the quantised data. For example, a fingerprint may be generated by combining the binary values of the quantised data using any suitable technique, for example by concatenation. In certain examples, the quantised data or the combined quantised data may be processed (e.g. by downsampling), for example to reduce the overall amount of data and hence the length of the fingerprint.

In certain examples, after the data has been quantised, the next stage is downsampling, and the downsampled data may be used to build the fingerprint/template. In one example, every x-th datapoint may be sampled and those sampled datapoints may be used to build the template. Any excess data (if more bits are obtained than needed) at the end of the downsampling may be truncated. The value x in the downsampling may be computed according to x = Total_Bits / Fingerprint_Length. Total_Bits is the total number of bits obtained from the quantisation, and may be equal to the number of datapoints multiplied by the quantisation resolution (i.e. number of bits per datapoint). Fingerprint_Length is the desired length of the fingerprint (e.g. 2048 or 4096). The skilled person will appreciate that any other suitable downsampling technique, or any other suitable technique for reducing the overall number of bits, may be used in other examples of the present disclosure.

Any suitable fingerprint length may be used in various examples, allowing the use of a wide range of resolutions. For example, fingerprint sizes of 2048 or 4096 bits may be used. Larger fingerprint sizes benefit from reducing the amount of data omitted during the downsampling process, as well as improving the security. Though a relatively long (e.g. 4096-bit) fingerprint may perform more favourably under certain metrics and may provide inherently stronger security than a relatively short (e.g. 2048-bit) fingerprint, the constraints of limited processing power and power supply may still need to be considered. For this reason, a relatively short fingerprint (e.g. 2048 bits) may be used.
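The downsampling rule just described can be sketched as follows. This is a minimal illustration assuming the quantised bit sequence is held as a Python string of '0'/'1' characters; the function name and the default length of 2048 bits are taken from the examples above.

```python
def build_fingerprint(bit_sequence: str, fingerprint_length: int = 2048) -> str:
    """Downsample a quantised bit sequence to the desired fingerprint length."""
    total_bits = len(bit_sequence)
    # x = Total_Bits / Fingerprint_Length: keep every x-th bit, then truncate any excess.
    x = max(1, total_bits // fingerprint_length)
    return bit_sequence[::x][:fingerprint_length]
```

For instance, 1000 datapoints quantised at 4 bits per datapoint yield 4000 bits, giving x = 1 under integer division, so the sequence is effectively truncated to the first 2048 bits.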
Applications of Sensor/Device Fingerprinting

Various exemplary applications of sensor/device fingerprinting will now be described. However, the skilled person will appreciate that the present disclosure is not limited to these particular examples. One exemplary purpose of sensor fingerprinting is to reliably identify sensors in the physical layer. Such identification has many applications in different scenarios. For example, a sensor fingerprint may be used for authentication of the sensor for remote attestation or key generation between the IoT device and edge or cloud servers. It can also be used to generate a verifiable end-to-end provenance of the data from the end point to the storage (e.g. a remote cloud-based storage server). In this way, in certain examples the data gathered from sensors may be traced down to the individual sensors, providing extra verifiability and provenance to the collected data. Various examples of the present disclosure may provide an identification system in which a local hub may identify sensors for potential malfunction or damage control.

In certain examples, it may be assumed that the IoT devices remain in the same location from the moment they are added to the IoT network and their sensors are registered. This is a reasonable assumption in many IoT contexts, e.g. an IIoT environment, a smart home, or a smart building. In certain examples, it may be assumed that the identification system is implemented and owned by the IoT network (see Figure 6). In certain examples, it may be assumed that physical access to sensors is not possible. Such sensors are properties of, for example, a smart home or a smart building, and are physically protected by the owner. In certain examples, it may be assumed that all the registered sensors are uncompromised since there is no physical access to such sensors.

The IoT devices can be compromised, e.g., via a malicious app or firmware update. If the IoT sensors do not have any processing capabilities, it may not be possible to prevent potential attacks on the compromised device's sensors, including impersonation of a sensor by the IoT device itself. However, considering a sensor with processing capabilities (e.g., a Sensor Hub), the communication between the sensor and the identification system can be encrypted. In an exemplary implementation, the system may have public key encryption and signature keys, with copies of the certificates stored on the sensor. The sensor may encrypt every message it sends to the system with the system's public key. The system signs every message it sends to the sensor with the system's private key. For IoT systems, and for a practical solution, a light-weight design may be implemented. In the examples of Figures 2 and 3, such secure communications are not illustrated for simplicity. The skilled person will appreciate that in other examples the above assumptions are not required.

Protocol Sequence

As illustrated in Figures 2 and 3, the exemplary system has two main steps: Registration and Verification. In the following, each step is described in detail. In the sequence diagrams of Figures 2 and 3, the "Database" is indicated as a separate component. However, in various examples the database may be a part of an identification system and may be hosted on the device, edge, mobile, or the cloud, for example.

Registration

An exemplary registration process is illustrated in Figure 2. In this example, the registration process comprises requesting and collecting n (n = 1, 2, 3, ...) rounds of sensor calibration data from the to-be-registered sensor in order to create a template to be used for comparison for future identification of the same sensor. In some examples, several rounds of data (i.e. n > 1) may be requested to improve the accuracy of the template. In certain examples, a quantifiable metric, reliability (see Equation 4 below), may be used to measure the consistency of the sensor output signals, with a value of 1 representing a perfect match. The reliability metric may measure the consistency of '0' and '1' values for each index of the signal across the collected rounds. In certain examples, signals may average a value of 0.998. This value is close, but not equal, to 1 and thus may need to be accounted for with multiple rounds of data collection in certain examples.

As described further above, following data collection, each round of data may go through pre-processing, for example where the data may be converted into a suitable data format and length, as well as having any noise that is present removed. The data is then quantised and the template is generated. In a further step, the generated template is then encrypted and stored. There are many options available for securely storing a template, such as encrypting and encoding. An exemplary technique for storing a template is described further below. To generate a template, a single round of sensor sequence data may be used. If several rounds of sensor sequences have been collected, then the average (or any other suitable combination operation) of the several rounds of sensor sequences may be used to generate the template. The system generates a reference fingerprint f_r (template) from the provided sensor sequence.
A random key k is generated and expanded to create a pseudo fingerprint f_p = ErrEnc(k), where ErrEnc is an error-correction encoding scheme, for example based on Hadamard-Reed-Solomon or any other suitable error-correction encoding scheme. The pseudo fingerprint should be generated to be an appropriate length (i.e. the same length as a fingerprint). For example, using the above-mentioned error-correction encoding scheme, the key length requirement of the random key may be 140 and 364 bits for a 2048-bit and 4096-bit fingerprint, respectively. Following this, the encrypted fingerprint r = f_r ⊕ f_p is computed, as well as h = H(k), where H() is a secure one-way hash function. In certain examples, the original template f_r and the random key k may be safely deleted as r and h are stored in the database for verification purposes. This process is summarised in Algorithm 1 below.

Verification

An exemplary verification process is illustrated in Figure 3. In this example, the verification process comprises requesting one round of calibration data f_s from a chosen sensor. The system also fetches (r, h) from its database. With f_s, r and h, the following computations are performed. First, f_s ⊕ r is computed. As illustrated in Equations 5a-e below, this is equivalent to computing f′ = e ⊕ ErrEnc(k), where e denotes an error between the sample fingerprint f_s and the reference fingerprint f_r.

Equation 5a: f_s ⊕ r = f_s ⊕ (f_r ⊕ f_p), an expansion including the most recent data sequence f_s, the template f_r, and the pseudo fingerprint f_p from the registration process.
Equations 5b/c: f_s ⊕ f_r = e, so f_s ⊕ r = e ⊕ f_p; the newly generated data sequence f_s and the template f_r are XOR'd and represented as e, where e can be regarded as "noise".
Equation 5d: e ⊕ f_p = e ⊕ ErrEnc(k); the equation is refactored to show f_p as the result of the key expansion process ErrEnc().
Equation 5e: f′ = e ⊕ ErrEnc(k); f′ is then decoded as stated in Algorithm 2 to retrieve k′.

If h == H(k′), then it is determined that the original k that was used to generate f_p has been successfully retrieved. This is only possible if the provided data sequence f_s was within the error-correcting capabilities of ErrEnc(). Even in the event of an illegitimate signal falling within the error-correcting capabilities of ErrEnc(), it will still have to meet a very stringent Hamming Distance criterion (0.07% dissimilarity) to be wrongly accepted by the above technique. The above process is summarised in Algorithm 2 below. At this point, the key and template have successfully been decrypted. With the original k retrieved, it is possible to expand it to generate f_p and XOR it with r to acquire the unencrypted template, f_r. In certain examples, the identification process may undergo a maximum of m attempts (for example m = 3) if the received signal does not meet the Hamming Distance requirements, at which point the system may suspend activity from the sensor, for example for a fixed duration, before repeating the process.
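For illustration only, the registration and verification flow above can be sketched as follows. This is not the encoding used in the disclosure: the Hadamard-Reed-Solomon scheme is replaced here by a simple bit-repetition code as a stand-in for ErrEnc() and its decoder, and the parameter values, function names and bit-string representation are assumptions made for this example.

```python
import hashlib
import secrets

REP = 16  # repetition factor; toy stand-in for the Hadamard-Reed-Solomon code

def err_enc(key_bits: str) -> str:
    """Toy ErrEnc(): repeat every key bit REP times to build the pseudo fingerprint f_p."""
    return "".join(b * REP for b in key_bits)

def err_dec(bits: str) -> str:
    """Toy decoder: majority vote over each block of REP bits to recover k'."""
    return "".join(
        "1" if bits[i:i + REP].count("1") > REP // 2 else "0"
        for i in range(0, len(bits), REP)
    )

def xor_bits(a: str, b: str) -> str:
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

def register(f_r: str):
    """Return (r, h); f_r and k themselves can be discarded after registration."""
    n = len(f_r) // REP * REP                            # whole blocks only
    key_bits = format(secrets.randbits(n // REP), f"0{n // REP}b")
    f_p = err_enc(key_bits)
    r = xor_bits(f_r[:n], f_p)                           # 'encrypted' template
    h = hashlib.sha256(key_bits.encode()).hexdigest()    # one-way commitment to k
    return r, h

def verify(f_s: str, r: str, h: str) -> bool:
    """Accept f_s if decoding f_s XOR r recovers a key whose hash matches h."""
    k_prime = err_dec(xor_bits(f_s, r))
    return hashlib.sha256(k_prime.encode()).hexdigest() == h
```

With a 2048-bit fingerprint and REP = 16, this toy code carries a 128-bit key and tolerates up to 7 flipped bits per 16-bit block; the error-correcting capability and key lengths of the scheme described above (e.g. 140 bits for a 2048-bit fingerprint) differ.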
Sensor Sequences Comparisons

In certain examples of the present disclosure, the use of Hamming Distance may be employed to measure the similarity between any two given sensor sequences. The following are considered in the context of an identification system: a template (t_i) and an input (m_i) from an arbitrary sensor i, a threshold tr, and a function f() to generate a similarity score between two inputs. m_i is classed as coming from the sensor associated with the stored template t_i if f(t_i, m_i) < tr, and rejected if f(t_i, m_i) ≥ tr, resulting in one of the following:
- True Positive (TP): two readings are correctly classed as coming from the same sensor.
- True Negative (TN): two readings are correctly classed as coming from different sensors.
- False Positive (FP): two readings are incorrectly classed as coming from the same sensor.
- False Negative (FN): two readings are incorrectly classed as coming from different sensors.
The above results in a confusion matrix allowing for the following to be calculated: the False Acceptance Rate (FAR), the False Rejection Rate (FRR) and the Equal Error Rate (EER).

During the comparison process of sensor sequences, two types of similarity scores may be obtained: authentic scores and imposter scores, which may be derived from intra-group and inter-group comparisons, respectively.

Authentic Scores: In the context of an identification system, authentic scores may be derived from comparing the runtime calibration readings of a sensor against its own template. The template, which may be regarded as similar to a stored digitised fingerprint for a smartphone user, may be used to represent an entity. Authentic scores may be derived from intra-group comparisons comparing multiple sensor sequence rounds from a given sensor against its own template. The total number of authentic scores obtained may be calculated using Equation 6:

Authentic scores = (CollectedRounds - TemplateRounds) × No.Sensors   (Equation 6)

with CollectedRounds referring to the total rounds of sensor sequence data that were collected, TemplateRounds referring to the number of rounds of data used to generate a template, and No.Sensors referring to the total number of sensors in a dataset. The results of these scores contribute towards the True Positive and False Negative components of the confusion matrix.

Imposter Scores: In the context of an identification system, imposter scores may be derived from comparing the sensor sequence of sensor d_i against the template of sensor d_j, where d_i ≠ d_j. Imposter scores may be derived from inter-group comparisons comparing the sensor sequences of one sensor against every other sensor's template. The total number of imposter scores may be calculated using Equation 7:

Imposter scores = UniqueRounds × No.Sensors × (No.Sensors - 1)   (Equation 7)

where UniqueRounds is the result of subtracting the number of rounds used for template generation from the total collected rounds, for one sensor. The results of these scores contribute towards the False Positive and True Negative components of the confusion matrix.

Biometric Metrics

As sensor fingerprinting relates to the field of biometrics by identifying entities based on their inherent physical characteristics, metrics from this field are applied to evaluate the system. Such biometric metrics include:

Decidability: Decidability (Equation 8) measures how well separated the two distributions, inter and intra comparisons, are:

d′ = |µ1 - µ2| / sqrt((σ1² + σ2²) / 2)   (Equation 8)

where σ1² and σ2² represent the variances of the Hamming Distances (HD) between samples from the intra and inter comparison groups, respectively, and µ1 and µ2 represent the mean HD from each of these groups.

Degrees of Freedom (DoF): DoF (Equation 9) represents the amount of entropy the fingerprints have. The higher the DoF, the more entropy is present:

DoF = µ(1 - µ) / σ²   (Equation 9)

where µ is the mean HD of the inter-group and σ is the standard deviation of HDs in this group.

Equal Error Rate: As previously mentioned, false acceptance rates and false rejection rates may be employed to evaluate the efficacy of an identification system based on these fingerprints. The equal error rate is the point of intersection of these two metrics.
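As an illustration of the comparison process and the metrics above, the sketch below computes fractional Hamming Distances and, from lists of authentic (intra-group) and imposter (inter-group) scores, the decidability, Degrees of Freedom, FAR, FRR and an approximate EER. The formulas follow Equations 8 and 9 as reconstructed above; the function names, the threshold sweep and the NumPy dependency are choices made for this example.

```python
import numpy as np

def hamming_fraction(a: str, b: str) -> float:
    """Fractional Hamming Distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def decidability(intra_hd, inter_hd) -> float:
    """Equation 8: separation between the authentic and imposter HD distributions."""
    mu1, mu2 = np.mean(intra_hd), np.mean(inter_hd)
    var1, var2 = np.var(intra_hd), np.var(inter_hd)
    return abs(mu1 - mu2) / np.sqrt((var1 + var2) / 2)

def degrees_of_freedom(inter_hd) -> float:
    """Equation 9: DoF estimated from the imposter (inter-group) HD distribution."""
    mu, sigma = np.mean(inter_hd), np.std(inter_hd)
    return mu * (1 - mu) / sigma ** 2

def far_frr(intra_hd, inter_hd, threshold):
    """FAR: imposters accepted below the threshold; FRR: authentics rejected at or above it."""
    far = np.mean(np.asarray(inter_hd) < threshold)
    frr = np.mean(np.asarray(intra_hd) >= threshold)
    return far, frr

def equal_error_rate(intra_hd, inter_hd, steps=1000):
    """Approximate the EER by sweeping the threshold to where FAR and FRR intersect."""
    best = min(
        (abs(far - frr), (far + frr) / 2)
        for far, frr in (far_frr(intra_hd, inter_hd, t / steps) for t in range(steps + 1))
    )
    return best[1]
```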
Figure 5 illustrates an exemplary apparatus for registering a fingerprint and/or verifying the identity of a sensor using a fingerprint. The apparatus 500 comprises a processor or controller 501 for controlling the overall operation of the apparatus 500. For example, the processor 501 may be configured for performing operations as described above (e.g. calibration data collection, pre-processing, quantisation, etc.) for registering a fingerprint and/or for verifying the identity of a sensor using a fingerprint. The apparatus 500 also comprises a memory 503 for storing information and data required for the aforementioned operations. For example, the memory may be configured to store calibration data, pre-processed data, quantised data, a fingerprint generated based on quantised data, a reference fingerprint, etc. The apparatus 500 also comprises an external interface 505 for interfacing with the sensor/device 599 via any suitable communication link (e.g. wired or wireless). For example, under the control of the processor 501, the external interface 505 may be configured to transmit signals to the sensor/device 599 to trigger the sensor/device 599 to perform a calibration procedure. The external interface 505 may be configured to receive calibration data output by the sensor/device as a result of the triggered calibration procedure.

In certain examples, the apparatus 500 may also comprise a user input/output (I/O) unit 507 for allowing a user to interact with the apparatus 500. For example, the user I/O unit 507 may comprise one or more input devices (e.g. a keyboard, touch screen, etc.) for inputting commands to the apparatus 500 (e.g. a command to initiate sensor/device verification). The user I/O unit 507 may comprise one or more output devices (e.g. display, LEDs, speaker, etc.) for outputting information (e.g. status information) for the user. In certain examples, if the apparatus 500 is configured to operate autonomously, then the user I/O unit 507 may be omitted. In some examples, the apparatus 500 may be configured to interface with the sensor/device 599 in close proximity. In this case, the interface between the sensor/device 599 and the apparatus 500 may be a wired link or a relatively short-range communication link such as a Bluetooth or NFC link. In other examples, the apparatus 500 may be configured to interface with the sensor/device 599 remotely. In this case, the apparatus 500 may communicate with the sensor/device 599 via a network, for example the Internet.

Figure 6 illustrates an exemplary environment (in this example an IoT environment) in which examples of the present disclosure may be implemented. The environment 600 may comprise one or more IoT systems. For example, in Figure 6, the environment 600 comprises a first IoT system 601 (e.g. a smart home) and a second IoT system 603 (e.g. an IIoT system). Each IoT system may comprise one or more IoT devices. For example, in Figure 6 the first IoT system 601 comprises IoT devices 1 to x, 605-1 to 605-x, and the second IoT system 603 comprises IoT devices 1 to y, 607-1 to 607-y. In turn, each IoT device 605, 607 may comprise one or more sensors.
For example, in Figure 6, IoT devices 605-1 and 607-1 include sensors numbered 1 to k, IoT devices 605-2 and 607-2 include sensors numbered k+1 to m, and IoT devices 605-x and 607-y include sensors numbered m+1 to n. The skilled person will appreciate that the present disclosure is not limited to the number, grouping and/or arrangement of sensors illustrated in Figure 6. The IoT systems may be connected by a network. For example, in Figure 6 the first and second IoT systems 601, 603 are each connected to cloud 609. The environment 600 also comprises one or more sensor identification systems for identifying one or more of the sensors. In the example of Figure 6, the first IoT system 601 comprises a first sensor identification system 611 and the second IoT system 603 comprises a second sensor identification system 613. Each sensor identification system may comprise an apparatus 500 as illustrated in Figure 5, for example. The first sensor identification system 611 may be coupled to one or more, or all, of the IoT devices 605-1 to 605-x of the first IoT system 601 and may be configured to identify one or more, or all, of the sensors 1 to n of the first IoT system 601 using any of the techniques disclosed herein. Similarly, the second sensor identification system 613 may be coupled to one or more, or all, of the IoT devices 607-1 to 607-y of the second IoT system 603 and may be configured to identify one or more, or all, of the sensors 1 to n of the second IoT system 603 using any of the techniques disclosed herein. In certain examples, the results of the identification may be stored (e.g. in the cloud 609) and/or transmitted to a remote apparatus (e.g. via the cloud 609). Certain examples of the present disclosure provide a method for identifying a device, the method comprising: obtaining calibration data from the device during a self-calibration procedure of the device (e.g. at start-up or during run-time); and verifying the identity of the device based on the obtained calibration data and previously stored data corresponding to previously obtained calibration data. Certain examples of the present disclosure provide a method for identifying a sensor, the method comprising: obtaining calibration data from the sensor during a self-calibration procedure of the sensor (e.g. at start-up or during run-time); and verifying the identity of the sensor based on the obtained calibration data and previously stored data corresponding to previously obtained calibration data. In certain examples, the method may further comprise: quantising the calibration data; and generating a bit sequence (e.g. sample fingerprint) based on the quantised data. In certain examples, quantising the calibration data may comprise generating binary values representing respective samples of the calibration data. In certain examples, generating the bit sequence may comprise downsampling the binary values. In certain examples, generating the bit sequence may comprise combining (e.g. concatenating) the binary values, or combining (e.g. concatenating) the downsampled binary values. In certain examples quantising the calibration data may comprise: dividing the calibration data into N portions; and quantizing each of the N portions of calibration data independently. In certain examples, the quantisation intervals for each portion of calibration data may be set based on: a minimum value of the quantisation data within each portion; a maximum value of the quantisation data within each portion; and a number of quantisation levels. 
In certain examples, the number of quantisation levels may be the same for each portion of the calibration data. In certain examples, the method may further comprise pre-processing the calibration data. In certain examples, pre-processing may comprise one or more of: combining two or more separate components of the calibration data (e.g. x, y, z components); filtering the calibration data (e.g. to remove noise and/or spikes); calculating a function (e.g. first derivative) of the calibration data. In certain examples, verifying the identity of the sensor may comprise verifying whether the difference between the obtained calibration data and the previously obtained calibration data is less than a certain threshold. In certain examples, the device may be a sensor. In certain examples, the sensor may be a motion sensor.

Certain examples of the present disclosure provide an apparatus for identifying a sensor, the apparatus being configured to: obtain calibration data from the sensor during a self-calibration procedure of the sensor (e.g. at start-up or during run-time); and verify the identity of the sensor based on the obtained calibration data and previously stored data corresponding to previously obtained calibration data. Certain examples of the present disclosure provide a computer program comprising instructions which, when the program is executed by a computer or processor, cause the computer or processor to carry out a method according to any example, embodiment, aspect and/or claim disclosed herein. Certain examples of the present disclosure provide a computer or processor-readable data carrier having stored thereon such a computer program.

Certain examples of the present disclosure provide one or more techniques as disclosed in the appended annex to the description titled "SenSig: Practical IoT Sensor Fingerprinting Using Calibration Data". The skilled person will appreciate that any of those techniques may be applied in any suitable combination with any of the techniques described above and illustrated in the Figures. The skilled person will also appreciate that the techniques disclosed in the appended annex are examples and not intended to limit the present disclosure.

The terms and words used in this specification are not limited to the bibliographical meanings, but are merely used to enable a clear and consistent understanding of the present disclosure. The same or similar components may be designated by the same or similar reference numerals, although they may be illustrated in different drawings. Detailed descriptions of elements, features, components, structures, constructions, functions, operations, processes, characteristics, properties, integers and steps known in the art may be omitted for clarity and conciseness, and to avoid obscuring the subject matter of the present disclosure. Throughout this specification, the words "comprises", "includes", "contains" and "has", and variations of these words, for example "comprise" and "comprising", mean "including but not limited to", and are not intended to (and do not) exclude other elements, features, components, structures, constructions, functions, operations, processes, characteristics, properties, integers, steps and/or groups thereof. Throughout this specification, the singular forms "a", "an" and "the" include plural referents unless the context dictates otherwise. For example, reference to "an object" includes reference to one or more of such objects.
By the term “substantially” it is meant that the recited characteristic, parameter or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement errors, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic, parameter or value was intended to provide. Throughout this specification, language in the general form of “X for Y” (where Y is some action, process, function, activity, operation or step and X is some means for carrying out that action, process, function, activity, operation or step) encompasses means X adapted, configured or arranged specifically, but not exclusively, to do Y. Elements, features, components, structures, constructions, functions, operations, processes, characteristics, properties, integers, steps and/or groups thereof described herein in conjunction with a particular aspect, embodiment, example or claim are to be understood to be applicable to any other aspect, embodiment, example or claim disclosed herein unless incompatible therewith. It will be appreciated that examples of the present disclosure can be realized in the form of hardware, software or any combination of hardware and software. Any such software may be stored in any suitable form of volatile or non-volatile storage device or medium, for example a ROM, RAM, memory chip, integrated circuit, or an optically or magnetically readable medium (e.g. CD, DVD, magnetic disk or magnetic tape). Certain embodiments of the present disclosure may provide a computer program comprising instructions or code which, when executed, implement a method, system and/or apparatus in accordance with any aspect, claim, example and/or embodiment disclosed herein. Certain embodiments of the present disclosure provide a machine-readable storage storing such a program. The techniques described herein may be implemented using any suitably configured apparatus and/or system. Such an apparatus and/or system may be configured to perform a method according to any aspect, embodiment, example or claim disclosed herein. Such an apparatus may comprise one or more elements, for example one or more of receivers, transmitters, transceivers, processors, controllers, modules, units, and the like, each element configured to perform one or more corresponding processes, operations and/or method steps for implementing the techniques described herein. For example, an operation/function of X may be performed by a module configured to perform X (or an X-module). The one or more elements may be implemented in the form of hardware, software, or any combination of hardware and software. While the invention has been shown and described with reference to certain examples, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention, as defined by the appended claims.