AHMED QADEER (US)
US20220019676A1 | 2022-01-20
US20210264038A1 | 2021-08-26
US20210226980A1 | 2021-07-22
US20150195297A1 | 2015-07-09
US20220210200A1 | 2022-06-30
WHAT IS CLAIMED:

1. A method for performing vulnerability analysis, the method comprising: providing a system model of a vehicular control system; determining a plurality of attack vectors based on the system model; generating an attacker model based on the plurality of attack vectors; determining a number of vulnerabilities in the vehicular control system based on at least the attacker model and the system model; and outputting an attackability index based on the number of vulnerabilities.

2. The method of claim 1, wherein the plurality of attack vectors comprise a plurality of unprotected measurements.

3. The method of claim 2, wherein at least one of the plurality of unprotected measurements is associated with a sensor.

4. The method of claim 2, wherein at least one of the plurality of unprotected measurements is associated with an actuator.

5. The method of any one of claims 2-4, further comprising recommending a design criterion to protect a measurement from the plurality of unprotected measurements based on the attackability index.

6. The method of claim 5, wherein the design criterion comprises a location in the vehicular control system to place a redundant sensor, a redundant actuator, a protected sensor, or a protected actuator.

6. The method of any one of claims 2-4, further comprising providing, based on the attackability index, the vehicular control system, wherein a measurement from the plurality of unprotected measurements is protected in the vehicular control system.

7. The method of any one of claims 1-6, wherein the vehicular control system comprises a Lane Keep Assist System.

8. The method of any one of claims 1-7, wherein the vehicular control system comprises an actuator.

9. The method of any one of claims 1-8, wherein the vehicular control system further comprises a communication network.

10. The method of any one of claims 1-9, further comprising evaluating the attackability index using a model-in-loop simulation.

11. A method of reducing an attackability index of a vehicular control system, the method comprising: providing a system model of the vehicular control system, wherein the system model comprises a plurality of sensors; determining a plurality of attack vectors based on the system model; generating an attacker model based on the plurality of attack vectors; determining a number of vulnerabilities in the vehicular control system based on at least the attacker model and the system model; outputting an attackability index based on the number of vulnerabilities; and selecting a sensor from the plurality of sensors to protect to minimize the attackability index.

12. The method of claim 11, wherein the vehicular control system comprises a Lane Keep Assist System.

13. The method of claim 11 or claim 12, wherein the vehicular control system comprises an actuator.

14. The method of any one of claims 11-13, wherein the vehicular control system further comprises a communication network.

15. The method of any one of claims 11-14, further comprising generating a residual based on the system model.

16. The method of any one of claims 11-15, further comprising determining where in the system model to place a redundant sensor.

17. The method of any one of claims 11-16, further comprising identifying a subset of redundant sensors in the plurality of sensors.

18. The method of any one of claims 11-17, further comprising evaluating the attackability index using a model-in-loop simulation of the system model and the attacker model.

19. The method of any one of claims 11-18, further comprising identifying a redundant section of the system model and a non-redundant section of the system model.

20. The method of claim 19, further comprising mapping the plurality of attack vectors to the redundant section of the system model and the non-redundant section of the system model.
[00159] [00160] [00161] The equations e1-e17 can be represented in a state space form for the plant model as in equation 1, where the state vector is given by: [00162] [00163] The input to the power steering module is the motor torque from the controller, and the output is the lateral deviation. The desired yaw rate is given as a disturbance input to avoid sudden maneuvers and to enhance user comfort. [00164] The optimal control action to steer the vehicle back to the lane center is given by solving the quadratic optimization problem given in e18. [00165] [00166] The example attacker model used in the study is defined based on conjecture 1. Since the study focuses on automotive systems, the protected and unprotected sensors and actuators are identified by analyzing the CAN Database (DBC) files from [20]. Hence, an attack vector Ai is added to the dynamic equation of each unprotected measurement. Also, note that redundancy in the messages published on CAN is not accounted as ARR. [00167] The sensor and actuator dynamics vary depending on the device and the manufacturer configuration. Thus, there are multiple configurations of the sensor suite in the ALC system that OEMs implement based on the space, computational power, and market value of the vehicle. e19 is the required torque to be applied on the steering column by the motor. The LKAS calculates the required steering angle based on the sensor values on CAN, determines the required torque to be applied by the motor, and publishes the value on the CAN. Thus, the actuator attack A1 manipulates the required torque. e20-e28 are sensor dynamics, where A4-A8 are sensor attacks that could be implemented by attacking the CAN of the vehicle. Attacks A2, A3, and A9 are physical-world adversarial attacks on lane detection using the camera, as shown in [10]. [00168] In the study, analyzing the structural model of the system included a step to identify the known and unknown parameters in the system.
The unknown parameters are not measured quantities. Hence, from e1-e28, the state vector x and the set of attack vectors can be the unknown parameters, while the measurements from the sensors are the known and measured parameters. Note that a parameter is not known until it is measured using a sensor; for example, the torque is unknown, while the measurement from the torque sensor is known. The structural matrix of the LKAS is given in FIG. 7, where plot 702 is for car 1, plot 706 is for car 2, and plot 710 is for car 3. The DM decomposition of the LKAS is given in FIG. 7 in plot 704 for car 1, plot 708 for car 2, and plot 712 for car 3. Thus, from the DM decomposition, it is evident that the attacks a1 and a3 are not detectable. [00169] Faults are usually defined as abnormalities in the system, while attacks are precise values that are added to the system with the main intention to disrupt the performance and remain undetected by the system operator. Thus, faults are usually a subset of the attack space, while attacks are targeted to break the Confidentiality, Integrity, and Availability (CIA) of the system. [00170] From the DM decomposition of the system, the study can determine that the over-determined part has more constraints than variables. Hence, any fault or attack on the measurements from the structurally over-determined part can be detected through residues generated with the help of ARR. The main difference between faults and attacks in terms of detectability is shown in Theorem 3. [00171] Theorem 3: A Minimal Test Equation Support (MTES) is sufficient to detect and isolate faults, while maximizing the residues increases the security index for attacks. [00172] Example 2: [00173] A study was performed of an example implementation of the present disclosure. The example implementation includes security risk analysis and quantification for automotive systems.
Security risk analysis and quantification for automotive systems becomes increasingly difficult when physical systems are integrated with computation and communication networks to form Cyber-Physical Systems (CPS). This is because of numerous attack possibilities in the overall system. The example implementation includes an attack index based on redundancy in the system and the computational sequence of residual generators, based on an assumption about secure signals (actuator/sensor measurements that cannot be attacked). This study considers a nonlinear dynamic model of an automotive system with a communication network, the Controller Area Network (CAN). The approach involves using system dynamics to model attack vectors, which are based on the vulnerabilities in the system that are exploited through open network components (open CAN ports like On-Board Diagnostics (OBD-II)), network segmentation (due to improper gateway implementation), and sensors that are susceptible to adversarial attacks. Then the redundant and non-redundant parts of the system are identified by considering the sensor configuration and unknown variables. Then, an attack index is derived by analyzing the placement of attack vectors in relation to the redundant and non-redundant parts, using the canonical decomposition of the structural model. The security implications of the residuals are determined by analyzing the computational sequence and the placement of the protected sensors (if any). Then, based on the analysis, sensor placement strategies are proposed; that is, the optimal number of sensors to protect to increase the system's security guarantees is suggested. The study verifies how the example implementation of an attack index and its analysis can be used to enhance automotive security using Model-In-Loop (MIL) simulations. [00174] Increased autonomy and connectivity features in vehicles can enhance drivers' and passengers' safety, security, and convenience.
Integrating physical systems with hardware, computation, and communication networks introduces a cyber-physical layer. This development of Cyber-Physical Systems (CPS) paves the way for multiple security vulnerabilities and potential attacks that concern the safe operation of autonomous vehicles. Researchers have successfully exploited these vulnerabilities in ways that potentially lead to safety and privacy hazards [1A]-[3A]. Thus, the two critical aspects of the automotive system, safety and security, go hand in hand. However, the security of CPS is more abstract, and unlike safety, it may not be defined as a functional requirement [4A]. A major roadblock can be the lack of resources to express and quantify the security of a system. The example implementation of the present disclosure studied includes performing a vulnerability analysis on an automotive system and quantifying the security index by evaluating the difficulty of performing an attack successfully without the operator's (driver's) knowledge. [00175] Faults are a major contributor to the activation of safety constraints in a system, unlike attacks, which are targeted and intentional. Apart from disturbances, any deviation from the expected behavior of a system is considered a fault and may arise due to various reasons, such as malfunctioning sensors, actuators, or controllers failing to achieve their optimal control goal. The concepts of Fault-Tolerant Control (FTC) [5A] and Fault Diagnosis and Isolability (FDI) [6A] can be used to mitigate faults in a system. A structural representation of a mathematical model can be used for determining redundancies in the system. Residuals computed from these redundancies can then be used to detect and isolate faults. In contrast, attacks exploit system vulnerabilities such as improper network segmentation (improper gateway implementation in CAN), open network components (OBD-II), or sensors exposed to external environments (GPS or camera).
An attack is successful if it is stealthy and not detected in the system [7A]. The system will show a failed attack as an abnormality or a fault and will alert the vehicle user. [00176] An observable system with an Extended Kalman Filter (EKF) and an anomaly detector is attackable [8A], and the sensor attack is stealthy as long as the deviation in the system states due to the injected falsified measurement is within the threshold bounds. This additive attack eventually drives the system to an unsafe state while remaining stealthy. However, the attack proposed is complex in time and computation, as multiple trial-and-error attempts are required to learn a stealthy attack signal. Also, the stealthy execution of the attack becomes very complex due to the dynamic nature of driving patterns. Also, the attack fails if the system uses a more complex anomaly detector like CUmulative SUM (CUSUM) or Multivariate Exponentially Weighted Moving Average (MEWMA) detectors instead of the standard Chi-squared detectors. Apart from observer-based techniques, the anomaly detectors could also be designed based on the system's redundancies and still involve the tedious procedure of identifying the specific set of attack vectors to perform a stealthy, undetectable attack. [00177] There are limited methods available for analyzing and quantifying security risks in automotive systems. A security index [9A] can represent the impact of an attack on the system. The work in [10A] defines the condition for a perfect attack as a residual that remains at zero. An adversary can bias the state away from the operating region without triggering the anomaly detector. Based on the conditions for perfect attackability, a security metric can identify vulnerable actuators in CPS [11A]. The security index can be made generic using graph-theoretic conditions, where a security index is based on the minimum number of sensors and actuators that need to be compromised to perform a perfectly undetectable attack.
That example can perform the minimum s-t cut algorithm, the problem of finding a minimum-cost edge separator for the source (s) and sink (t), or the input (u) and output (y), in polynomial time [12A]. However, these security indices, designed for linear systems, do not analyze the qualitative properties of the system while suggesting sensor placement strategies. Also, their security indices do not account for the existing residuals used for fault detection and isolation. Sets of attacks, such as replay attacks [14A], zero-dynamics attacks [15A], and covert attacks [16A], make the residual asymptotically converge to zero, similar to the class of undetectable attacks. But the detection techniques that work for undetectable attacks fail for stealthy integrity attacks [11A, 13A]. [00178] The example implementation of the present disclosure includes a robust attack index, and designs of sensor configurations and variations to the automotive system parameters that minimize the attack index are suggested, which, in turn, increases the security index of the system. This approach of analyzing the security index of the system is an addition to [17A], which performs vulnerability analysis on nonlinear automotive systems. The example implementation described herein can identify the potential vulnerabilities that could be exploited into attacks in an automotive system. These are generally the sensor/actuator measurements that are openly visible on CAN and sensors exposed to external environments that are susceptible to adversarial attacks. They are categorized as unprotected measurements. A system model (e.g., a grey-box model with input-output relations [17A]) is defined, and the redundant and non-redundant parts of the system can be identified using canonical decomposition of the structural model. The attacks are then mapped to the redundant and non-redundant parts.
Structural analysis [6A] can show that anomalies on the structurally redundant part are detectable with residuals. The study of the example implementation evaluates different residual generation strategies and suggests the most secure sequential residual among various options with respect to the sensor placement. Then the most critical sensor to protect to reduce the attack index and improve the overall security of the system can be suggested. As used in the study described herein, it is assumed that the protected sensors cannot be attacked. [00179] The example implementation of the present disclosure can include any or all of: [00180] (A) An attack index for an automotive system based on the canonical decomposition of the structural model and the sequential residual generation process is derived, where the attack index is robust to nonlinear system parameters. [00181] (B) The proposed attack index weighs the structural location of the attack vectors and the residual generation process based on the design specifications. The complexity of attacking a measurement is based on the redundancy of that measurement in the system and whether that redundant measurement is used for residual generation. [00182] (C) To reduce the attack index, a most suitable set of sensor measurements to protect is identified by analyzing the structural properties of the system. Then, sequential residuals are designed using the set of protected sensors to avoid perfectly undetectable attacks and stealthy integrity attacks. This strategy works well with the existing fault diagnosis methods, is cost-efficient (in avoiding redundant sensors), and can give Original Equipment Manufacturers (OEMs) freedom to implement the security mechanisms of their choice. The results of the study are validated using MIL simulations with the example implementation. [00183] FIG. 3 illustrates an example feedback control system with a network layer between the controller and actuator.
The attacker attacks the system by injecting signals by compromising the network or performing adversarial attacks on sensors. [00184] The study of the example implementation includes a system model. [00185] A cyber-physical system can be defined by the nonlinear dynamics [00186] ẋ = f(x, u), y = h(x)  (1) [00187] where x, u, and y are the state vector, control input, and the sensor measurements, respectively. Based on [18A] and [19A], the nonlinear system can be uniformly observable; that is, f and ℎ are smooth and invertible. The linearized Linear Time-Invariant (LTI) version of the plant is given by x(k+1) = Ax(k) + Bu(k) and y(k) = Cx(k), where A, B, and C are the system, input, and output matrices, respectively. [00188] The study of the example implementation includes an attacker model. [00189] The attacker model can be given by: [00190] ũ(k) = u(k) + a_u(k), ỹ(k) = y(k) + a_y(k)  (2) [00191] where a_u and a_y are the actuator and sensor attack vectors. The compromised state of the system at any time (k) can be linearized as x̃(k+1) = Ax(k) + Bũ(k), where a_u(k) is the actuator attack signal injected by the attacker. Similarly, ỹ(k) is a compromised sensor measurement and a_y(k) is the attack injected; u(k) and y(k) are the non-compromised actuator and sensor signals. [00192] Assumption 1: For any system, protected measurements cannot be compromised. The sensor and actuator measurements that can be attacked are unprotected measurements, and those measurements that cannot be attacked are protected measurements. [00193] Note that there are multiple ways to protect a sensor or actuator measurement, and it is mostly application- and network-configuration-specific. Techniques on how to select a sensor measurement to protect are discussed throughout the present disclosure. [00194] The study of the example implementation includes a structural model. [00195] The structural model is used to analyze the system's qualitative properties to identify the analytically redundant part [6A]. The free parameters in a system realization are the non-zero positions in the structural matrix [12A].
The structural model ℳ is given by ℳ = (ℰ, 𝒳), where ℰ is the set of equations or constraints and 𝒳 is the set of variables that contain the state, input, output, and attack vectors. The variables can be further grouped as known and unknown. The model ℳ can be represented by a bipartite graph, in which the existence of a variable in an equation is denoted by an edge. The structural model ℳ can also be represented as an adjacency matrix, a Boolean matrix with rows corresponding to equations and columns corresponding to variables, with an entry of 1 where the variable appears in the equation and 0 otherwise. [00196] Definition 1: (Matching) A matching on a structural model ℳ is a subset of edges such that the two projections of any edges in ℳ are injective. This indicates that any two edges in the matching do not share a common node. A matching is maximal if it contains the largest number of edges (maximum cardinality) and perfect if all the vertices are matched. The non-matched equations of the bipartite graph represent the Analytically Redundant Relations (ARR). [00197] Structural analysis can be performed to identify matchings in the system. An unknown variable can be calculated from a constraint or an equation. If unknown variables are mapped to multiple constraints, then they contribute to redundancy in the system, which can be used for abnormality detection. Based on the redundancy, the system can be divided into three submodels: the under-determined part (number of unknown variables > number of constraints), the just-determined part (number of unknown variables = number of constraints), and the over-determined part (number of unknown variables < number of constraints). The different parts (under-, just-, and over-determined) of the structural model ℳ can be identified by using the Dulmage-Mendelsohn decomposition (DMD). The DMD is obtained by rearranging the adjacency matrix in block-triangular form. The under-determined part of the model is represented by ℳ⁻ with node sets ℰ⁻ and 𝒳⁻, the just-determined part is represented by ℳ⁰ with node sets ℰ⁰ and 𝒳⁰, and the over-determined part is represented by ℳ⁺ with node sets ℰ⁺ and 𝒳⁺.
The just- and over-determined parts are the observable part of the system. Attack vectors in the under-determined and just-determined parts of the system are not detectable, while attack vectors in the over-determined part of the system are detectable with the help of redundancies [6A], which can be used to formulate residuals for attack detection. [00198] The example implementation of the present disclosure can include methods of determining an attackability index. [00199] The attackability index can be based on the number of vulnerabilities in the system, which could potentially be exploited into attacks; i.e., it is proportional to the number of sensors and actuators that can be compromised, or the number of unprotected measurements in the system. Thus, the larger the attack index, the more vulnerable the system. [00200] Let a be the attack vector. The attackability index α is proportional to the number of non-zero elements in a and is given by: [00201] [00202] where the penalty added for each attack vector depends on whether the attack vector is in the under-, just-, or over-determined part. Thus, for every attack vector in a, a penalty is added to the index α. The attack becomes stealthy and undetectable if it is in the under- or just-determined part of the system, and at the same time, it is easier to perform the attack. Hence, a larger penalty is added to α. If the attack is in the over-determined part, the complexity of performing a stealthy attack increases drastically due to the presence of redundancies. Hence, a smaller penalty is added. R denotes the residuals in the system for anomaly detection, and weights are added to incentivize the residuals for attack detection based on the residual generation process. Similar to attacks, for every residual in the system, a weight is added.
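The penalty-and-weight bookkeeping described for the index α can be sketched in Python. Everything below is illustrative: the numeric weight values, the function name, and the sign convention for the residual term (residuals are taken to reduce the index, consistent with the stated goal of minimizing α while maximizing protected residuals) are assumptions, not values from the study.

```python
# Illustrative sketch of the attackability index computation.
# The weight values below are hypothetical placeholders; the study
# only fixes their ordering: attacks landing in the under- or
# just-determined part draw a large penalty, attacks in the
# over-determined part a small one.
ATTACK_PENALTY = {"under": 10.0, "just": 10.0, "over": 1.0}

def attackability_index(attack_parts, residuals_protected,
                        w_protected=2.0, w_unprotected=0.5):
    """attack_parts: DMD part ('under'/'just'/'over') containing each
    non-zero attack vector. residuals_protected: one flag per residual,
    True if it is computed with a protected sensor. Residual weights
    are subtracted here (an assumed sign convention) so that adding
    residuals lowers the index."""
    penalty = sum(ATTACK_PENALTY[part] for part in attack_parts)
    incentive = sum(w_protected if flag else w_unprotected
                    for flag in residuals_protected)
    return penalty - incentive
```

For instance, under these placeholder weights, an attack vector in the just-determined part with no residuals scores 10.0, while the same system hardened with one protected-sensor residual scores 8.0.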
[00203] The overall security goal of the example system is to minimize the attackability index α with respect to the attacker model as defined in (2) and to maximize the number of protected residuals. This security goal can be achieved in two ways: (i) replace unprotected measurements with protected measurements; however, this is not feasible, as it requires a drastic change in the In-Vehicle Network (IVN) (research along this direction can be found in [20A]); (ii) introduce redundancy in the system to detect abnormalities. With redundancy in the system, residuals can be generated, and a detector can be designed to identify abnormalities. In this way, the system might still be susceptible to attacks, but a stealthy implementation of the attack is arduous, as the attacker must compromise multiple measurements. Suppose the attacker fails in performing a stealthy attack; the abnormalities in the measurements introduced by the attacker are shown as faults in the system, and the vehicle user is alerted of potential risks. [00204] Preliminaries and definitions are used herein [17A]. Consider the system and attacks as discussed in (1) and (2). From the over-determined part of the DMD, residuals can be generated using the redundant constraints and can be checked for consistency. The structure of a residual is the set of constraints in the monitorable sub-graphs with which it is constructed. The monitorable sub-graphs are identified by finding the Minimal Structurally Over-determined (MSO) sets as defined in [21A]. [00205] Definition 2: (Proper Structurally Over-determined (PSO)) A non-empty set of equations ℰ′ is a PSO set if ℰ′ equals its own structurally over-determined part, i.e., ℰ′ = ℰ′⁺. [00206] The PSO set is a testable subsystem, which may contain smaller subsystems, the MSO sets. [00207] Definition 3: (Minimal Structurally Over-determined (MSO)) A PSO set is an MSO set if no proper subset is a PSO set. [00208] MSO sets are used to find a system's minimal testable and monitorable sub-graph.
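The matching and redundancy machinery behind Definitions 1-3 can be illustrated with a short sketch. The augmenting-path matching below is the standard Kuhn's algorithm, and the four-equation toy model is hypothetical; it is not the LKAS equations e1-e28.

```python
# Sketch: maximum matching on the equation-variable bipartite graph
# via augmenting paths (Kuhn's algorithm). Equations left unmatched
# by a maximum matching correspond to Analytically Redundant
# Relations (ARRs); their count is the structural redundancy.

def max_matching(structure):
    """structure: dict mapping each equation name to the unknown
    variables appearing in it (the non-zero structural-matrix row)."""
    match = {}  # variable -> equation currently matched to it

    def augment(eq, seen):
        for var in structure[eq]:
            if var in seen:
                continue
            seen.add(var)
            # var is free, or its current equation can re-match elsewhere
            if var not in match or augment(match[var], seen):
                match[var] = eq
                return True
        return False

    matched = sum(augment(eq, set()) for eq in structure)
    return match, len(structure) - matched  # matching, number of ARRs

# Hypothetical model: 4 equations over 3 unknowns, so one equation is
# redundant and a single residual (ARR) can be generated.
toy = {"e1": ["x1"], "e2": ["x1", "x2"],
       "e3": ["x2", "x3"], "e4": ["x3"]}
matching, n_arr = max_matching(toy)
```

Here `n_arr` is 1, matching the surplus |ℰ| − |𝒳| = 4 − 3; the unmatched equation supplies the redundant relation from which a residual generator can be built.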
[00209] Definition 4: The degree of structural redundancy is given by φ(ℰ) = |ℰ⁺| − |𝒳ℰ⁺|, the surplus of equations over unknown variables in the over-determined part. [00210] Lemma 1: If ℰ is a PSO set of equations with redundancy φ(ℰ) and e ∈ ℰ, then (ℰ∖{e})⁺ is a PSO set with redundancy φ(ℰ) − 1. [00211] Lemma 2: The set of equations ℰ is an MSO set if and only if ℰ is a PSO set and φ(ℰ) = 1. [00212] The proof of Lemma 1 and Lemma 2 is given in [21A] by using Euler's totient function definition [22A]. [00213] For some MSO sets identified according to Lemma 2, a set of equations called the Test Equation Support (TES) can be formed to test for faults or attacks. A TES is minimal (MTES) if there exist no subsets that are TES. Thus, MTES leads to the optimal number of sequential residuals by eliminating unknown variables from the set of equations (parity-space-like approaches). [00214] Definition 5: (Residual Generator) A scalar variable R generated only from known variables (z) in the model ℳ is a residual generator. [00215] The anomaly detector checks whether the scalar value of the residual (usually a normalized value of the residual Rₜ) is within the threshold limits under normal operating conditions. Ideally, it should satisfy E[R] = 0 (zero-mean). [00216] An MTES set might involve multiple sensor measurements and known parameters in the residual generation process. The generated residual is actively monitored using an anomaly detector (like the Chi-squared detector). [00217] The system as defined in (1) is not secure if (i) there exists an attack vector that lies in the structurally under- or just-determined part. The consequence of the attack is severe if there is a significant deviation of the state from its normal operating range, i.e., the state grows unbounded under the attack sequence. [00218] Note that a similar definition would be sufficient for any anomaly detector. This work focuses on compromising the residual generation process and not the residual evaluation process; the residual is compromised irrespective of the evaluation process. The measurements from the system are categorized as protected and unprotected measurements.
From the system definition, it is inferred that not all actuators and sensors are susceptible to attacks. Thus, the attacker can inject attack signals only into those vulnerable, unprotected sensors and actuators. [00219] The example implementation can determine an attack index of a system. [00220] The attack index is determined according to (3), and this section discusses how the weights for the attack index in (3) are established. [00221] A vertex is said to be reachable if there exists at least one constraint that has an invertible edge (e, x). As used in the present example, an attack weight scale is used in which the smallest weight represents the penalty for a stealthy attack vector that is very hard to implement on the system due to the presence of residuals and anomaly detectors, and the largest weight represents the penalty for an attack vector that compromises a part of the system without residuals and anomaly detectors. For example, a safety-critical component without any security mechanism to protect it will have a very large weight. [00222] Similarly, the weights of the residuals are on a scale where one end represents the residuals that cannot be compromised easily and the other end represents the residuals that can be compromised easily. Note that the weights are not fixed numbers, as they can be changed based on the severity of the evaluation criterion and could evolve based on the system operating conditions. [00223] Proposition 1: The just- or under-determined part of the system with unprotected sensors and actuators has a high attack index. [00224] Proof: Undetectable attack vectors from sensors and actuators are the primary reason for the higher attack index. Due to the lack of residuals, the attack vector aᵢ is not detectable. From Definitions 3 and 4 and Lemmas 1 and 2: [00225] [00226] Any attack on the just- or under-determined part is not detectable, as residual generation is not possible.
For the just-determined part of the system, anomaly detection can only be achieved by introducing redundancy in the form of additional sensors or prediction and estimation strategies. The over-determined portion of the system can still be vulnerable to attacks; however, these attacks can be detected through the residuals generated from MSO sets. Thus, the complexity of performing a successful attack is high, which leads to Proposition 2. [00227] Proposition 2: The over-determined part of the system with unprotected sensors and actuators is still attackable but has a low attack index due to the complexity of performing an attack. [00228] Proof: From Assumption 1, the system is attackable if it has unprotected sensors and actuators. To perform a stealthy attack, the attacker should compromise the unprotected sensors without triggering any residuals. Hence, the condition for detectability and the existence of residuals follows from Definition 5: a residual is an ARR over the set of observations in the model ℳ. The ARRs come from complete matchings in the MSO sets, provided the ARRs are invertible and variables can be substituted with consistent causalities. [00229] The condition for the existence of residuals is discussed for linear systems in [23A] and for non-linear systems in [24A]. Proposition 2 shows that unprotected measurements cause vulnerabilities in the system that could lead to attacks. However, these attacks are detectable with residuals in the system. Thus, strategies to evaluate residuals are described herein. [00230] From the DMD of the system, it is inferred that the over-determined part has more constraints than variables. Hence, any fault or attack on the measurements from the structurally over-determined part can be detected through residuals generated with the help of ARR. So, this section suggests a criterion for the placement of protected sensors for sequential residual generation to maximize the system's security.
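The detectability claim of Proposition 2 can be illustrated with a minimal sketch. The wheel-speed pairing, signal values, and threshold below are hypothetical, not taken from the LKAS model; the sketch only shows how an ARR between two redundant measurements turns an additive attack on the unprotected one into a threshold violation.

```python
# Sketch: residual from an ARR between two redundant measurements of
# the same quantity. An additive attack on the unprotected sensor
# violates the relation unless the attacker also compromises the
# redundant (protected) measurement -- the extra effort that
# Proposition 2 attributes to the over-determined part.

def residual(y_unprotected, y_protected):
    # ARR: both sensors observe the same state, so r should stay near 0
    return y_unprotected - y_protected

def is_anomalous(r, threshold=0.5):
    # hypothetical fixed threshold for the anomaly detector
    return abs(r) > threshold

y_true = 20.0   # hypothetical true vehicle speed (m/s)
a_y = 3.0       # additive sensor attack signal
r_nominal = residual(y_true, y_true)
r_attacked = residual(y_true + a_y, y_true)
```

Under nominal conditions the residual stays at zero and no alarm fires; with the attack injected, `r_attacked` equals the attack signal and the detector flags it, so a stealthy attack would require compromising both measurements at once.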
[00231] For a residual R, consider a matching M with an exactly determined set of equations E. Let bᵢ be a strongly connected component in M with Mᵢ equations, and let Eᵢ be the set of equations measuring variables in bᵢ. Also, let bᵢ be the block of maximum order of all blocks in M. Further, consider the set of structurally detectable attacks, the set of possible sensor locations that could be protected, the secured detectability of attacks, and the set of equivalent attacks. [00232] Theorem 1: Maximal security through attack detectability is achieved by protecting the strongly connected component of the highest order in the sequential residual. [00233] Proof: From the definition of DMD [25A] and Definition 4, for M, the family of subsets with maximum surplus is given by: [00234] where ℒ is the sublattice of M. Also, for the partially ordered sets, the minimal set E in ℒ such that eᵢ measures the highest-order block achieves maximal detectability. [00235] Theorem 1 shows that securing the strongly connected component can detect attacks that affect that component. In other words, an attack on a strongly connected component compromises all its sub-components, as they are in the same equivalence relation. From [25A], it is evident that measuring the block with the highest order gives maximum detectability. Similarly, here we say that an attack on the block with the highest order gives maximum attackability. The highest block component can also be a causal relation of a protected measurement. [00236] An alternate form of Theorem 1 can be stated as follows: a secured sequential residual for attack detection and isolation has a matching with a protected state of the system at the highest-ordered block. The residual equation is formulated with the protected measurement and an estimate of that measurement. Since it is assumed that system (1) is uniformly observable, the protected measurement can be observed from other measurements.
The strongly connected component can be estimated from other measurements and can be compared with the protected sensor measurement. This comparison can be used to find faults or attacks on the measurements that were used to compute the strongly connected component. [00237] Thus, a residual R_i generated from M is attackable if the attack A belongs to the same equivalence class. Also, if a block is of order less than that of b_i, then a residual from M can detect the attack, as R_i has maximum detectability; that is, there are no attacks in the block of maximum order. Hence, from Theorem 1, the following can be formulated: [00238] residuals computed with unprotected sensors are attackable, [00239] while residuals computed with protected sensors are more secure. [00240] Also, the alternate form of Theorem 1 can be used to identify the critical sensors that must be protected to maximize the overall security index of the system. [00241] The study included an example implementation including an Automated Lane Centering (ALC) system. A complete Lane Keep Assist System (LKAS) with vehicle dynamics, steering dynamics, and the communication network (CAN) was considered, and example parameters for the example lane keep assist system are shown in FIG. 8. [00242] A controller, typically either a Model Predictive Controller (MPC) [26A] or a Proportional-Integral-Derivative (PID) controller [27A], is employed as demonstrated in the LKAS shown in FIG. 9. Its purpose is to actuate a DC motor that is linked to the steering column, thereby directing the vehicle towards the center of the lane. The LKAS module has three subsystems: (i) the vehicle's lateral dynamics control system [e1-e6] and its sensor suite [e8-e13], (ii) the steering system - steering column [e14-e17], the power assist system [e18-e20], and steering rack [e21-e23] with sensor suite [e24-e26]. 
In the LKAS setup, an Electronic Control Unit (ECU) is utilized, which is equipped with sensors to detect various vehicle parameters such as steering torque, steering angle, lateral deviation, lateral acceleration, yaw rate, and vehicle speed. The mechanical arrangement of the LKAS and the dynamic model of the vehicle are as discussed in [28A]. The parameters of the LKAS and the constants are as defined in [29A] and [26A]. [00243] The dynamic equations of the LKAS module without driver inputs at time t are given by:
[00244] [00245] The dynamic equations described above are non-linear. The structural analysis is qualitative and is oriented towards the existence of a relationship between the measurements rather than the specific nature of the relation (such as a linear or nonlinear relation). The analysis remains valid as long as the nonlinear functions are invertible. However, the nonlinear dynamic equations can be approximately linearized around the operating point if needed. The state vector is given by: [00246] The attacker model used in the example implementation is defined based on assumption 1. Since the study focuses on automotive systems, the protected and unprotected measurements are identified by reading the CAN messages from the vehicle, analyzing them with the CAN Database (DBC) files from [31A], and adding an attack vector A_i (where i is the attack vector number) to the dynamic equation of the unprotected measurements. The unprotected measurements are the ones that are openly visible on the CAN and the camera measurements that are susceptible to adversarial attacks. Also, note that redundancy in the messages published on the CAN is not accounted as ARR. [00247] Based on the information obtained from the sensors on the CAN, the LKAS computes the necessary steering angle and torque to be applied to the motor. The calculated values are transmitted through the CAN, which the motor controller uses to actuate the motor and generate the necessary torque to ensure that the vehicle stays centered in the lane. The actuator attack A1 manipulates the required torque. When the torque applied to the motor is not appropriate, it can result in the vehicle deviating from the center of the lane. Equations e8-e13 and e24-e26 are sensor dynamics, where A2-A10 are the sensor attacks. Attacks A2 and A3 are physical-world adversarial attacks on perception sensors for lane detection, as shown in [32A]. Other attacks are implemented through the CAN. 
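The attacker model above, in which an additive attack vector A_i is attached to each unprotected measurement, can be sketched as follows. The sensor names and values are hypothetical placeholders (they are not the LKAS signal set); the sketch only shows the additive-injection mechanism.

```python
def apply_attacks(measurements, protected, attacks):
    """Add an additive attack vector to each unprotected measurement.
    `measurements`: name -> true value; `protected`: set of names the
    attacker cannot touch; `attacks`: name -> injected bias A_i."""
    out = {}
    for name, value in measurements.items():
        bias = 0.0 if name in protected else attacks.get(name, 0.0)
        out[name] = value + bias
    return out

# Hypothetical CAN-visible sensor set (names are illustrative only).
true_meas = {"yaw_rate": 0.02, "lat_accel": 0.10, "steer_torque": 1.5}
attacked = apply_attacks(true_meas,
                         protected={"steer_torque"},
                         attacks={"yaw_rate": 0.30, "steer_torque": 9.9})
```

Note that the injection attempted on the protected measurement has no effect, while the unprotected yaw rate is biased by the attack vector.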
[00248] An example step in structural analysis is to identify the known and unknown parameters. The parameters that are not measured using a sensor are unknown. From the dynamic equations, we have the state vector x and the associated parameter set as unknown parameters. The measurements from the sensors are the known parameters. Note that a parameter is unknown in the study until it is measured using a sensor; for example, a state is unknown while the corresponding reading from the torque sensor is known. The structural matrix of the LKAS is given in FIG. 10A, and the DMD of the LKAS is given in FIG. 10B. A dot in the structural matrix and DMD implies that the variable on the X-axis is related to the equation on the Y-axis. From the DMD, it is clear that the attacks on the just-determined part are not detectable and the other attacks on the over-determined part are detectable. The equivalence class is denoted by the grey-shaded part in the DMD (FIG. 10B), and the attacks on different equivalence classes can be isolated from each other with test equations or residuals. The input to the power steering module is the motor torque from the controller. The desired yaw rate can be given as a disturbance input to avoid sudden maneuvers and to enhance user comfort [26A]. The optimal control action to steer the vehicle back to the lane center is given by solving the quadratic optimization problem with respect to the reference trajectory: [00249] [00250] Equation (e7) is the required motor torque calculated by the controller. The steering wheel torque (e25), wheel speed (e11), yaw rate (e12), and lateral acceleration (e13) sensors have been mandated by the National Highway Traffic Safety Administration (NHTSA) for passenger vehicles since 2012 [30A]. [00251] The attacks are detectable and isolable. The residuals generated (TES) that can detect and isolate the attacks are given by the attack signature matrix in FIG. 11. A dot in the attack signature matrix represents the attacks on the X-axis that the TES on the Y-axis can detect. 
For example, TES-1 (residual-1) can detect attacks A6 and A7. [00252] The study considered hypothetical cases by modifying the sensor placement for the residual generation to derive the overall attack index. In the present example, the most safety-critical component of the LKAS - the vehicle dynamics and its sensor suite [e1-e13] - is considered for further analysis. The LKAS is simulated in Matlab and Simulink to evaluate the attacks, residuals, and detection mechanism [33A]. The structural analysis is done using the fault diagnosis toolbox [34A]. Let us assume the following weights for the attacks and the residuals: [00253] [00254] All the attacks and residuals are equally weighted for the sake of simplicity. It should be understood that the attacks and residuals can have any weight, and that the weights provided herein are only non-limiting examples. [00255] The study included simulations to support propositions 1 and 2. For the scope of this paper, only the residual plots and analysis for TES-1 (FIG. 11) are shown. However, the analysis could be easily extended to all the TES and even larger systems. TES-1 is generated from its equation set. For the attacks on the just-determined part, an actuator attack is simulated, and the attack is as shown in [32A]. Assuming that there are no protected sensors, the residuals are generated from the most optimal matching - the one with minimum differential constraints - to minimize the noise in the residuals (low-amplitude, high-frequency noise does not perform well with differential constraints). The residual generation process for TES-1 is shown in FIGS. 12A-12C. For example, the residual generated for the sensor placement with the graph matching shown in FIG. 12A (Matching-2) has the Hasse diagram shown in FIG. 12B and the computational sequence shown in FIG. 12C. 
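The attack signature matrix described above can also be read programmatically: an attack is isolated when it is detectable by every triggered TES and by no quiet TES. A minimal sketch follows; the signature entries besides TES-1 (which the text states detects A6 and A7) are hypothetical placeholders, not the matrix of FIG. 11.

```python
# Hypothetical attack signature matrix: TES -> set of attacks it detects.
# TES-1's row follows the text; TES-2 and TES-3 are illustrative only.
SIGNATURE = {
    "TES-1": {"A6", "A7"},
    "TES-2": {"A2", "A6"},
    "TES-3": {"A2", "A7"},
}

def isolate(triggered, signature=SIGNATURE):
    """Return the attacks consistent with the set of triggered residuals:
    detected by every triggered TES and by no quiet TES."""
    candidates = set.union(*signature.values())
    for tes, detectable in signature.items():
        if tes in triggered:
            candidates &= detectable
        else:
            candidates -= detectable
    return sorted(candidates)
```

For instance, if TES-1 and TES-3 fire while TES-2 stays quiet, only A7 is consistent with the observed signature.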
[00256] The following results of the study illustrate the effectiveness of the example implementation through simulations: [00257] TES-1 (residual R1) to detect attacks A6 and A7 under the non-stealthy case: The residual R1, as shown in FIG. 12C, can be implemented in Matlab. Naive attacks A6 and A7 are implemented without any system knowledge. The attacks A6 and A7 are waveforms with a period of 0.001 seconds and a phase delay of 10 seconds. The residual R1 crosses the alarm threshold multiple times, indicating the presence of attacks, as shown in FIG. 15B, while FIG. 15A shows the performance of the residual during normal, unattacked operating conditions. The attacker fails to implement a stealthy attack on the system. This simulation supports proposition 2: the attacks on the over-determined part of the system are attackable but also detectable with residuals. [00258] Actuator attack A1 on the just-determined part of the system: Attack A1 is an actuator attack on the just-determined part of the system. As shown in proposition 1, residuals cannot be generated to detect the attack due to the lack of redundancy. Thus, the attack is stealthy and safety-critical for the system. The attack A1 taking the vehicle out of the lane is shown in FIG. 16. Also, the attack does not trigger any other residuals in the system. The attack A1 evaluated with residual R1 is shown in FIG. 17A. This simulation supports proposition 1: the attacks on the just-determined part of the system are not detectable. [00259] Stealthy attack vectors that attack the system but do not trigger the residual threshold: As shown in FIG. 15C, the attacker can implement a stealthy attack vector on the yaw rate and lateral acceleration sensors. In this case, the attacker has complete knowledge of the system and the residual generation process. The attacker is capable of attacking the two branches in the sequential residual (FIG. 12C) simultaneously. 
Hence, the attacker attacks the system with high-amplitude, slow-changing (low-frequency), disruptive, and safety-critical attack vectors. As shown in the example of FIG. 15C, the residual detection is completely compromised. This simulation again supports proposition 2, showing that an intelligent attacker could generate a stealthy attack vector to compromise the residual generation process. Since the residual (R1) is compromised, the detection results are the same irrespective of the anomaly detector. Similar results can be seen with a CUSUM detector in FIG. 18A. [00260] The study included an example case where no protected sensors were used ("case 1"). All the sensors defined in the attacker model are vulnerable to attacks. From the DMD, it is evident that all other attacks can be detected and isolated except for attacks A1 and A3. For equations e1-e13, there are seven attack vectors, and A1 and A3 get assigned higher weights. Even though the attacks could be detected with residuals, they do not have protected sensors as defined in Theorem 1. Thus, all the residuals could also be compromised and hence get assigned a higher weight. To derive the attack index as shown in equation 3, we need to assign the declared weights according to propositions 1 and 2. Thus: [00261] [00262] As defined in assumption 1, an unprotected measurement is any sensor or actuator that can be attacked, such that there exists a possibility of manipulating its value. In contrast, protected measurements cannot be attacked or manipulated. Protecting a measurement can be achieved in multiple ways, such as cryptography or encryption, and is mostly application specific. The sensor and the actuator dynamics vary depending on the system and the manufacturer's configuration. Thus, there are multiple configurations of the sensor suite in the ALC system that OEMs implement based on the space, computational power, market value of the vehicle, etc. 
An advantage of protecting a measurement is distinguishing between faults and attacks - a protected measurement can be faulty but cannot be attacked. [00263] From the given sensor suite for the LKAS, this subsection discusses finding the optimal sensors to protect. From Theorem 1, for maximal security in attack detectability, it is required to protect the sensors of the highest block order for the given matching and to use the protected sensor for residual generation. The order of generation of the TES depends on the sensor placement. All the possible matchings for TES-1 are shown in FIG. 13. Thus, the sensors that could be protected to increase the security index are the vehicle velocity (Vx), the vehicle lateral velocity (Vy), and the change in yaw rate measurement. Since vehicle velocity is not a state in the LKAS, it is not the best candidate for applying protection mechanisms. Similarly, by comparing all other possible matchings from TES 1-10, the yaw rate measurement is the most optimal protected sensor, because either the sensor or the derivative of the measurement occurs in the highest block order in most of the matchings for TES 1-10. Also, the residual generated by estimating the state could be used to compare with the protected measurement. So, for TES-1, matching 3 is the best sensor placement strategy. An example computational sequence is given in FIGS. 14A-14C. Thus, the residual generated with matching 3 and the protected yaw rate measurement is a protected residual. The stealthy attacks A6 and A7 that were undetected with residual R1 (FIG. 15C) are detected using the protected residual in FIG. 17C. FIG. 17B shows the residual under normal, unattacked operating conditions. Thus, this simulation supports the claim in Theorem 1. Also, the protected residual works irrespective of the detection strategy. Results similar to those of the Chi-squared detector are observed with the CUSUM detector in FIGS. 18B and 18C. 
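The CUSUM detector referenced above can be sketched as a standard one-sided CUSUM on the residual magnitude; the drift and threshold values below are hypothetical placeholders, not the tuning used in the study.

```python
def cusum(residuals, drift, threshold):
    """One-sided CUSUM on residual magnitude:
    g_t = max(0, g_{t-1} + |r_t| - drift); alarm when g_t > threshold.
    Slow-changing stealthy attacks keep |r_t| near the drift to stay
    below the threshold, which is why a compromised residual defeats
    any detector downstream of it."""
    g, alarms = 0.0, []
    for t, r in enumerate(residuals):
        g = max(0.0, g + abs(r) - drift)
        if g > threshold:
            alarms.append(t)
            g = 0.0  # reset after raising an alarm
    return alarms
```

A quiet residual never accumulates, while a persistent bias above the drift eventually crosses the threshold.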
[00264] For case 2, let us assume that the yaw rate sensor is a protected measurement that cannot be attacked. The structural model remains the same, as the sensor might still be susceptible to faults. Hence the attack vector (A4) could be generalized as an anomaly rather than an attack. So, similar to case 1, two attack vectors are in the just-determined part, and four attacks (A4 is not considered an attack) are in the over-determined part. Also, similar to case 1, 10 residuals can detect and isolate the attacks. Except for residual R7, all other residuals could be generated with a protected sensor or its derivative in the highest block order. Thus, we have nine protected residuals. Hence, the attack index, from propositions 1 and 2, Theorem 1, and the simulations shown in section VI-C, is calculated to be: [00265] [00266] The attack vectors are added to the system based on assumption 1. This is done by analyzing the behavioral model and using CAN DBC files to read the CAN for output measurements while manipulating the inputs to the system. The severity of the attacks is established by identifying the location of vulnerabilities in the system. With these potential attack vectors, we used the structural model to identify the safety-critical attacks and how hard it is to perform a stealthy implementation. From the structural model, it was identified that the attacks on the just-determined part are not detectable, while the attacks on the over-determined part are detectable due to redundancies in the system. Then, it was shown that even if the attacks are detectable with residuals, an intelligent attacker can inject stealthy attack vectors that do not trigger the residual threshold. Then, to improve the residual generation process and the security index of the system, the example implementation introduces protected sensors. The criterion for selecting a sensor to protect to minimize the attack index (maximize the security index) was established. 
For a sequential residual generation process, it was shown that the residual generated with a protected sensor in the highest block order is more secure in attack detectability. In the LKAS example, the attack index with the specified weights without protected sensors is 125. Still, by protecting just one sensor, the attack index of the system was reduced to 43. The example implementation gives the system analyst freedom to choose the individual weights for the attacks and residuals. The weights can be chosen depending on the complexity of performing the attack using metrics like CVSS [35]. [00267] This example implementation of the present disclosure includes a novel attackability index for cyber-physical systems based on redundancy in the system and the computational sequence of residual generators. A nonlinear dynamic model of an automotive system with CAN as the network interface was considered. The vulnerabilities in the system that are exploited due to improper network segmentation, open network components, and sensors were classified as unprotected measurements in the system. These unprotected measurements were modeled as attack vectors to the dynamic equations of the system. Then, based on the sensor configurations and unknown variables in the system, the redundant and non-redundant parts were identified using canonical decomposition of the structural model. Then the attack index was derived based on the attack's location with respect to the redundant and non-redundant parts. Then, with the concept of protected sensors, the residuals generated from the redundant part were analyzed with respect to their computational sequence and the placement strategy of the protected sensors. If there were no protected sensors, sensor placement strategies for residuals and the optimal sensor(s) to protect were suggested to increase the system's security guarantees. Then MIL simulations were performed to illustrate the effectiveness of the example implementation. 
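The way protection lowers the index can be sketched with a generic weighted aggregate. This is purely illustrative: the exact form of equation 3 is not reproduced here, and the weights and counts below are hypothetical, not the 125/43 values of the study.

```python
def attack_index(attacks_undetectable, residuals_protected,
                 w_high=10, w_low=1):
    """Hypothetical weighted aggregate (NOT the actual equation 3):
    undetectable attacks and compromisable residuals carry a high
    weight; detectable attacks and protected residuals a low weight."""
    total = sum(w_high if undet else w_low
                for undet in attacks_undetectable)
    total += sum(w_low if prot else w_high
                 for prot in residuals_protected)
    return total

# Protecting a sensor turns residuals into protected residuals,
# lowering the aggregate index (illustrative numbers only).
unprotected = attack_index([True, False, False], [False, False])
hardened = attack_index([True, False, False], [True, True])
```

The qualitative effect matches the study: the undetectable attack's contribution remains, but hardening the residuals reduces the overall index.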
[00268] Example 3: [00269] A study was performed of an example implementation including vulnerability analysis of Highly Automated Vehicular Systems (HAVS) using a structural model. The analysis is performed based on the severity and detectability of attacks in the system. The study considers a grey box - an unknown nonlinear dynamic model of the system. The study deciphers the dependency of input-output constraints by analyzing the behavioral model developed by measuring the outputs while manipulating the inputs on the Controller Area Network (CAN). The example implementation can identify the vulnerabilities in the system that are exploited due to improper network segmentation (improper gateway implementation), open network components, and sensors, and model them with the system dynamics as attack vectors. The example implementation can identify the redundant and non-redundant parts of the system based on the unknown variables and the sensor configuration. The example implementation analyzes the security implications based on the placement of the attack vectors with respect to the redundant and non-redundant parts using canonical decomposition of the structural model. Model-In-Loop (MIL) simulations verify and evaluate how the proposed analysis could be used to enhance automotive security. [00270] The example implementation includes anomaly detectors constructed using redundancy in the system, using qualitative properties of grey-box structural models. This vulnerability analysis represents the system as a behavioral model and identifies the dependence of the inputs and outputs. Then, based on the unknown variables in the model and the sensor placement strategy, redundancy in the system is determined. The potential vulnerabilities are then represented as attack vectors with respect to the system. If the attack vector lies on the redundant part, detection and isolation are possible with residuals. 
If not, the attack remains stealthy and causes maximum damage to the system's performance. Thus, this work proposes a method to identify and visualize vulnerabilities and attack vectors with respect to the system model. The MIL-simulation results show the impact of attacks on the Lane Keep Assist System (LKAS) identified using the proposed approach. [00271] FIG. 3 illustrates an example system model that can be used with a network layer to transmit sensor messages and control plant actuation. An attacker can compromise the system either by attacking the CAN to inject falsified sensor or actuator messages or by performing adversarial attacks on the sensors. [00272] The system model can include a grey-box system that describes nonlinear dynamics: [00273] where x is the state vector, u is the control input, y is the sensor measurement, and θ is the set of unknown model parameters. Based on [13B] and [14B], let us assume that the nonlinear system is uniformly observable - the functions f, g, and h are smooth and invertible. Also, the parameter set θ exists such that the model defines the system. Under a special case (when the model is well-defined), the linearized Linear Time-Invariant (LTI) version of the plant is given by ẋ = Ax + Bu, y = Cx, where A, B, and C are the system, input, and output matrices, respectively. [00274] Although the model parameters θ and the functions f, g, and h are unknown, it can be assumed that the implementation knows the existence of parameters and states in the functions - hence a grey-box approach. [00275] The attacker model is given by: [00276] [00277] where the additive terms are the actuator and sensor attack vectors. The compromised state of the system at time t can be linearized accordingly, with an actuator attack signal injected by the attacker; similarly, a compromised sensor measurement includes the injected sensor attack. The remaining actuator and sensor signals are those that have not been compromised by the attack. 
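The compromised LTI dynamics described above can be sketched in discrete time with scalar stand-ins for the matrices; the numeric values are hypothetical placeholders chosen only to show that a sensor attack biases the output without moving the true state.

```python
def simulate_lti(A, B, C, u_seq, a_u_seq, a_y_seq, x0=0.0):
    """Scalar discrete-time LTI plant under actuator/sensor attack:
    x[k+1] = A*x[k] + B*(u[k] + a_u[k]),  y[k] = C*x[k] + a_y[k].
    (A scalar stand-in for the matrix case; values illustrative.)"""
    x, ys = x0, []
    for u, au, ay in zip(u_seq, a_u_seq, a_y_seq):
        ys.append(C * x + ay)            # measured (possibly spoofed) output
        x = A * x + B * (u + au)         # state driven by (possibly spoofed) input
    return ys

# A sensor attack biases the measurements while the state is untouched.
clean = simulate_lti(0.9, 1.0, 1.0, [1, 1, 1], [0, 0, 0], [0, 0, 0])
spoofed = simulate_lti(0.9, 1.0, 1.0, [1, 1, 1], [0, 0, 0], [0, 0.5, 0.5])
```

The spoofed output diverges from the clean output by exactly the injected sensor attack, which is what a residual against a protected or estimated value would expose.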
[00278] The structural model of the system analyzes the qualitative properties of the system to identify the analytically redundant part [12B]. The non-zero elements of the system are called the free parameters, and they are of main interest in the present study. Note that the exact relationship of the free parameters is not required; just the knowledge of their existence is sufficient. Furthermore, the study assumes that the input and measured output are known precisely. Thus, with the free parameters, the system's structure can be represented by a bipartite graph whose nodes correspond to the state, measurements, input, and attack vectors. These variables can be classified into knowns and unknowns. The bipartite graph can also be represented by a weighted graph where the weight of each edge corresponds to the relation between an equation and a variable. The relationship of these variables in the system is represented by the set of equations (or constraints), with an edge linking an equation to a variable. The matrix form of the bipartite graph can be represented as an adjacency matrix M (the structural matrix), a Boolean matrix with rows corresponding to E and columns to V, with a 1 where the equation involves the variable and 0 otherwise. In the above definition, the differentiated variables are structurally different from the integrated variables. [00279] Definition 1: (Matching) A matching on a structural model M is a subset of Γ such that the two projections of any edges in M are injective. This indicates that any two edges in G do not share a common node. A matching is maximal if it contains the largest number of edges (maximum cardinality) and perfect if all the vertices are matched. Matching can be used to find the causal interpretation of the model and the Analytically Redundant Relations (ARR) - the relations E that are not involved in the complete matching. [00280] The motive of structural analysis is to identify matchings in the system. 
If an unknown variable is matched with a constraint, then it can be calculated from the constraint. If the variables can be matched in multiple ways, they contribute to redundancy that can potentially be used for abnormality detection. Based on the redundancy, the system can be divided into three sub-models: the under-determined part (no. of unknown variables > no. of constraints), the just-determined part (no. of unknown variables = no. of constraints), and the over-determined part (no. of unknown variables < no. of constraints). An alternate way of representing the adjacency matrix is Dulmage-Mendelsohn (DM) decomposition (DMD) [15B]. DMD is obtained by rearranging the adjacency matrix in block triangular form and is a better way to visualize the categorized sub-models in the system. The under-determined part of the model, the just-determined (observable) part, and the over-determined part (also observable) are each represented by their respective node sets. Attack vectors in the under-determined and just-determined parts of the system are not detectable, while attack vectors in the over-determined part of the system are detectable with the help of redundancies. [00281] Consider the system and attacks as shown in (1) and (2). From the over-determined part of the DMD, residuals can be generated using the unmatched redundant constraints and can be checked for consistency. The structure of the residual is the set of constraints - the monitorable sub-graphs with which they are constructed. The monitorable subgraphs are identified by finding the Minimal Structurally Overdetermined (MSO) sets as defined in [16B]. [00282] Definition 2: (Proper Structurally Overdetermined (PSO)) A non-empty set of equations is PSO if it equals its own structurally over-determined part. [00283] The PSO set is the testable subsystem, which may contain smaller subsystems - MSO sets. [00284] Definition 3: (Minimal Structurally Overdetermined (MSO)) [00285] A PSO set is an MSO set if no proper subset is a PSO set. 
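The PSO/MSO notions above can be sketched structurally. The sketch assumes the common convention that the redundancy of an over-determined equation set counts equations minus the unknown variables they involve (consistent with Definition 4 below), and the equation structure is a hypothetical placeholder, not the LKAS model; the minimality check is a simplified one-equation-removal test rather than a full subset enumeration.

```python
def redundancy(struct, eqs):
    """Structural redundancy of an (over-determined) equation set:
    number of equations minus number of unknown variables involved."""
    unknowns = set().union(*(struct[e] for e in eqs)) if eqs else set()
    return len(eqs) - len(unknowns)

def is_mso(struct, eqs):
    """Sketch of the MSO test: redundancy one, and no single-equation
    removal leaves an over-determined (redundant) subset."""
    if redundancy(struct, eqs) != 1:
        return False
    return all(redundancy(struct, eqs - {e}) <= 0 for e in eqs)

# Hypothetical structure: two unknowns, three equations -> one MSO set.
S = {"e1": {"x1"}, "e2": {"x1", "x2"}, "e3": {"x2"}}
```

Here the full set {e1, e2, e3} has redundancy one and is minimal, so it supports exactly one test equation.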
[00286] MSO sets are used to find the minimal testable and monitorable subgraph in a system. [00287] Definition 4: The degree of structural redundancy is given by the number of equations in the structurally over-determined part minus the number of unknown variables involved in it. [00288] Lemma 1: If E is a PSO set of equations with , then . [00289] Lemma 2: The set of equations E is an MSO set if and only if E is a PSO set and its degree of structural redundancy is 1. [00290] The proofs of Lemma 1 and Lemma 2 are given in [16B] by using Euler's totient function definition [17B]. [00291] For each MSO set identified according to Lemma 2, a set of equations called the Test Equation Support (TES) can be formed to test for faults or attacks. A TES is minimal (MTES) if no subset of it is a TES. Thus, MTES leads to the most optimal number of sequential residuals by eliminating unknown variables from the set of equations (parity-space-like approaches). [00292] Definition 5: (Residual Generator) A scalar variable R generated only from known variables (z) in the model M is the residual generator. The anomaly detector checks whether the scalar value of the residual (usually a normalized value of the residue R_t) is within the threshold limits under normal operating conditions. Ideally, it should satisfy R = 0 under normal (attack-free) operating conditions. [00293] An MTES set might involve multiple sensor measurements and known parameters in the residual generation process. The generated residue is actively monitored using a statistical anomaly detector. [00294] A system defined in (1) is vulnerable if there exists an attack vector that lies in the structurally under- or just-determined part. The consequence of the attack is severe if there is a significant deviation of the state from its normal operating range; ideally, from the attacker's perspective, the attack sequence drives the state unbounded. [00295] Thus, the example implementation can analyze a given system to identify vulnerabilities that could potentially be exploited into attacks. 
The impact of the attacks is derived from the DM decomposition of the system, and the complexity of performing the attacks is based on the implementation of anomaly detectors (if any). The attacks on the under- and just-determined parts of the system are not detectable and have severe consequences. [00296] The study of the example implementation included performing vulnerability analysis on structured grey-box control systems. The under-determined part of the system is not attackable, as its nodes are not reachable, but it is still susceptible to faults. A vertex is said to be reachable if there exists at least a just-determined subgraph of G that has an invertible edge. [00297] Proposition 1: The system is most vulnerable if the measurements on the just-determined part can be compromised. [00298] Proof: This is due to the presence of undetectable attack vectors from the sensors and actuators. The attack vector α_i is not detectable due to the lack of residues. From definitions 3 and 4 and lemmas 1 and 2: [00299] [00300] Hence residual generation (the formation of a TES) is not directly possible on the just-determined part, and any attack on it is not detectable. [00301] Anomaly detection on the just-determined part is only possible if redundancy in the form of additional sensors or prediction and estimation strategies is added to the system. The over-determined part of the system is attackable, but the attacks are detectable from the residues generated from MTES. To have an undetectable attack, the attack vector should satisfy the stealthy condition - the attack vector should be within the threshold limits of the anomaly detector. Thus, the complexity of performing a successful attack is high, which leads to proposition 2. [00302] Proposition 2: The over-determined part of the system with vulnerable sensors and actuators is more secure, as residues can be designed to detect attacks. [00303] The system is attackable if it has vulnerable sensors and actuators. 
However, to perform a stealthy attack, the attacker should inject attack vectors that are within the threshold limits of the anomaly detector. Hence, here we show the condition for detectability and the existence of residues. Let us consider the transfer function representation of the general model. Thus, an attack is detectable if [00304] the rank of the transfer function matrix augmented with the attack's transfer matrix is greater than the rank of the nominal transfer function matrix. [00305] This satisfies the condition [18B] [19B] that there exists a transfer function Q(s) such that the residue is sensitive to the attack. [00306] [00307] The residues capable of detecting the attack are selected from the MTES that satisfy the above criterion. Proposition 2 shows that vulnerable measurements in the system could lead to attacks. However, these attacks are detectable with residues, making the system overall less vulnerable. [00308] The vulnerability analysis is based on the structural model of the system. The structural matrices are qualitative properties and do not always consider the actual dynamical equations of the system. Thus, the analysis can be performed even with a realization of the system and not necessarily with the exact system parameters. [00309] Thus, following the definition from C.1 [20B] and [21B], Theorem 1 can be formulated as: [00310] Theorem 1: The vulnerability analysis is generic and remains the same for any choice of free parameters (θ) in the system. [00311] Proof: For the scope of this proof, assume a linearized version of the system (1). Let H be a transfer function matrix. Here we only know the structure of the polynomial matrix; the coefficients of the matrix are unknown. Let the generic rank (g-rank) of the transfer function be g-rank(H). From [22B], g-rank(H) is the maximum matching in the bipartite graph G constructed from the polynomial matrix. For a given maximum matching, the bipartite graph G can be decomposed into under-, just-, and over-determined parts. [00312] For the under-determined part, the subgraph contains at least two maximum matchings of the corresponding order, and the sets of initial vertices do not coincide. 
The rank of this part is full row rank. [00313] For the just-determined part, the subgraph contains at least one maximum matching of the corresponding order, and the corresponding submatrix is invertible. [00314] For the over-determined part, the subgraph contains at least two maximum matchings of the corresponding order, and the sets of initial vertices do not coincide. The rank of this part is full column rank. [00315] The DM decomposition of H is given by the block triangular arrangement of these three parts. [00316] Hence, Theorem 1 shows that DMD can be computed with just the input-output relation of the system (the transfer function polynomial matrix). Thus, for any choice of free parameters in the system realization, the vulnerability analysis performed using the structural model is generic. A qualitative property thus holds for all systems with the same structure and sign pattern. The structural analysis concerns zero and non-zero elements in the parameters and not their exact values. [00317] The input-output relation for automotive systems can be obtained by varying the input parameters and measuring the output through CAN messages, and decoding them with a CAN Database (DBC). This way, the example implementation can decipher which output measurements vary for different input parameters. [00318] The study shows that the example implementation can perform vulnerability analysis on a real-world system. The study includes an Automated Lane Centering (ALC) system: a grey-box model of the lane keep assist system with vehicle dynamics, steering dynamics, and the communication network (CAN). Despite knowing the precise dynamics of the LKAS [23B] [24B], the study considers the system as a grey box, and the input-output relation of the grey-box model was additionally verified on an actual vehicle. [00319] The system model, as shown in FIG. 9, uses an LKA controller (typically a Model Predictive Controller (MPC) [24B] or a Proportional-Integral-Derivative (PID) controller [25B]) to actuate a DC motor connected to the steering column to steer the vehicle to the lane center. 
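The rank-based detectability condition discussed above (the attack must add a direction not already spanned by the nominal transfer matrix) can be checked numerically. A minimal sketch with exact rational arithmetic follows; the matrices are hypothetical placeholders, not derived from the LKAS.

```python
from fractions import Fraction

def rank(matrix):
    """Exact matrix rank via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in matrix]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def attack_detectable(G, G_attack):
    """Rank test: the attack columns must increase the rank of the
    nominal transfer matrix G for a residual to exist."""
    augmented = [row + extra for row, extra in zip(G, G_attack)]
    return rank(augmented) > rank(G)
```

An attack column lying in the column space of G (e.g. the sum of two existing columns) does not raise the rank and is therefore undetectable by any residual built on G.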
The LKAS module has three subsystems: (i) the steering system (steering column [e1-e4], steering rack [e8-e10]), (ii) the power assist system [e5-e7], and (iii) the vehicle's lateral dynamics control system [e11-e16]. The LKAS is implemented on an Electronic Control Unit (ECU) with a set of sensors to measure the steering torque, steering angle, vehicle lateral deviation, lateral acceleration, yaw rate, and vehicle speed. The general mechanical arrangement of the LKAS and the dynamical vehicle model is the same as considered in [23B]. The dynamic equations of the LKAS module without driver inputs are given by:
[00320] [00321] The state vector of the system is given by X. The input to the power steering module is the motor torque from the controller, and the output is the lateral deviation. The desired yaw rate is given as a disturbance input to avoid sudden maneuvers and to enhance the user's comfort. [00322] The optimal control action to steer the vehicle back to the lane center is given by solving the quadratic optimization problem given in e18. Equation e19 (motor actuator) is the required torque calculated by the controller that is applied to the motor. [00323] [00324] [00325] The sensor suite for the LKAS module is given by: [00326] [00327] The steering wheel torque (e23), wheel speed (e26), yaw rate (e27), and lateral acceleration (e28) sensors have been mandated by the National Highway Traffic Safety Administration (NHTSA) for passenger vehicles since 2012 [26B]. [00328] FIG. 19 illustrates a table of variable parameters of an example lane keep assist system, used in the study of the example implementation. [00329] The study identifies the vulnerable measurements in the system by analyzing the CAN DBC files [27B]. Hence, an attack vector Ai is added to the dynamic equation of each vulnerable measurement, i.e., all the measurements visible on the CAN that the LKA controller uses to compute the steering torque. Also, the redundancy in the messages published on the CAN is not accounted for as ARR. The sensor and actuator dynamics vary depending on the device and the manufacturer's configuration. There are multiple configurations of the sensor suite in the ALC system that OEMs implement based on the space, computational power, and market value of the vehicle. The vulnerability analysis of the LKAS across different OEMs can be similar as long as the input-output relations and system structure are similar. [00330] The LKAS calculates the required steering angle based on the sensor values on the CAN, determines the required torque to be applied by the motor, and publishes the value on the CAN.
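The input-output probing described above depends on decoding raw CAN frames into physical measurements. The sketch below mimics what a DBC decoder (such as the opendbc tooling of [27B]) does for a single little-endian signal; the start bit, length, and 0.01 Nm/bit scaling are hypothetical values chosen for illustration, not taken from any real DBC file.

```python
def decode_signal(frame: bytes, start_bit: int, length: int,
                  scale: float, offset: float, signed: bool = True) -> float:
    """Extract one little-endian signal from a CAN frame payload,
    mirroring a DBC decode: physical value = raw * scale + offset."""
    raw = int.from_bytes(frame, "little") >> start_bit
    raw &= (1 << length) - 1
    if signed and raw >= 1 << (length - 1):  # two's-complement sign fix
        raw -= 1 << length
    return raw * scale + offset

# Hypothetical steering-torque signal: 16 bits at bit 0, 0.01 Nm per bit.
frame = (1234).to_bytes(2, "little") + bytes(6)  # 8-byte payload, raw value 1234
print(decode_signal(frame, 0, 16, 0.01, 0.0))    # about 12.34 Nm
```

Sweeping an input while logging decoded outputs this way reveals which measurements respond to which inputs, i.e., the structural input-output relation used by the analysis.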
The motor controller then actuates the motor to apply the required torque to keep the vehicle in the center of the lane. Thus, the actuator attack A1 manipulates the required torque, and the incorrectly applied torque drives the vehicle away from the lane center. Equations e20-e28 are the sensor dynamics with additive sensor attacks. Attacks A2 and A3 are physical-world adversarial attacks on the perception sensors for lane detection, as shown in [28B]. Other attacks are implemented by attacking and compromising the CAN. [00331] The first step in analyzing the structural model of the system is to identify the known and unknown parameters (variables) in the system. The unknown parameters are the quantities that are not measured. Hence, from e1-e28, it is clear that the state vector X and the set of unmeasured quantities are the unknown parameters, while the measurements from the sensors are the known and measured parameters. Note that a parameter is unknown until it is measured using a sensor. [00332] For example, the steering torque itself is unknown, while its measurement from the torque sensor is known. The DM decomposition of the LKAS is given in FIG. 10B. A dot in the DMD implies that the variable on the X-axis is related to the equation on the Y-axis. Thus, from the DM decomposition, it is evident that the attacks A1 and A3 in the just-determined part are not detectable, while the other attacks, in the over-determined part, are detectable. The grey-shaded part of the DMD in FIG. 10B denotes the equivalence classes, and attacks in different equivalence classes can be isolated from each other with test equations (residues); these attacks are detectable and isolable. The residues (TES) that can detect and isolate the attacks are given by the attack signature matrix 2000 in FIG. 20. The dots 2002 in the attack signature matrix 2000 represent the attacks on the X-axis that the TES on the Y-axis can detect. For example, TES-1 (Residue-1) can detect attacks 8, 9, and 10. [00333] The LKAS is simulated in Matlab and Simulink to perform vulnerability analysis.
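The detectability and isolability reasoning on the attack signature matrix can be sketched as follows. The 3-residue / 4-attack matrix is a toy example (not the FIG. 20 data); the last column models an attack in the just-determined part, to which no residue responds.

```python
# Signature matrix: rows = residues (TES), columns = attacks;
# a 1 means the residue responds to that attack.
SIG = [[1, 1, 0, 0],
       [0, 1, 1, 0],
       [1, 0, 1, 0]]

def detectable(sig, attack):
    """An attack is detectable if at least one residue responds to it."""
    return any(row[attack] for row in sig)

def isolable(sig, a, b):
    """Two attacks are isolable if their signature columns differ."""
    return any(row[a] != row[b] for row in sig)

print([detectable(SIG, a) for a in range(4)])  # last attack is undetectable
print(isolable(SIG, 0, 1))                     # columns 0 and 1 differ
```

Reading the matrix column-wise in this way gives both the detectability of each attack and the pairwise isolability that the equivalence-class argument above describes.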
The simulated system closely resembles the LKAS of an actual vehicle. The attacks are injected on the sensors/actuators in the simulated environment, and residues were designed using the structural model of the system. For the scope of this disclosure, only the residual plots and analysis of TES-1 (R1) are shown; however, the analysis remains the same for all TES (TES 1-27) shown in FIG. 20. [00334] The computation sequence 2004 for TES-1 is shown in FIG. 20. The simulations support Propositions 1 and 2: FIG. 21A shows the implementation of residue R1 (TES-1) in the structurally over-determined part under normal, unattacked operation. FIG. 21B shows the working of residue R1 under attacks A9 and A10. It is evident that the residue crosses the threshold multiple times, which could trigger an alarm to alert the vehicle user. FIG. 16 shows the implementation of attack A1 in the simulation environment. FIG. 21C shows that attack A1 lies in the just-determined part, and the existing residues fail to detect the attack. Thus, the attacks A1 and A3 [28B] on the just-determined part make the system extremely vulnerable: the attacks remain undetected, causing adverse safety violations. Attacks on the over-determined part are still possible but are much harder to implement stealthily due to the presence of residues. [00335] The study of the example implementation includes vulnerability analysis using the structural model of a grey-box (unknown nonlinear plant dynamics) HAV system. The example implementation establishes the severity of the attacks by identifying the location of each vulnerability in the system. The example implementation can analyze the behavioral model by using CAN DBC files to read the CAN for output measurements while manipulating the inputs to the system. The study categorizes the variables and measurements into redundant (over-determined) and non-redundant (just-determined) parts and claims that attacks on the over-determined part can be detected and isolated.
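The threshold-crossing behavior shown in FIGS. 21A-21C can be approximated with a simple residual monitor. This is a minimal sketch assuming a sinusoidal nominal signal and an additive bias attack, not the actual TES residues of the study.

```python
import math

def monitor(measurements, predictions, threshold):
    """Return the sample indices where |measurement - prediction|
    exceeds the detector threshold (i.e., where an alarm fires)."""
    return [k for k, (ym, yp) in enumerate(zip(measurements, predictions))
            if abs(ym - yp) > threshold]

# Nominal sine-wave yaw rate; an attack adds a 0.5 bias from sample 50 onward.
pred = [math.sin(0.1 * k) for k in range(100)]
meas = [p + (0.5 if k >= 50 else 0.0) for k, p in enumerate(pred)]
print(monitor(meas, pred, 0.2))  # alarms only at samples 50..99
```

Under normal operation the residual stays at zero and no alarm fires; once the attack starts, every sample crosses the threshold, mirroring the detectable case of FIG. 21B. An attack in the just-determined part leaves the residual unchanged and would produce an empty alarm list, mirroring FIG. 21C.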
In contrast, attacks on the just-determined part may not be detected without external observers. Thus, the example implementation can determine how vulnerable the overall system is by a quantitative measurement of the attacks that fall in the just-determined and over-determined parts. Security guarantees can be established by moving measurements from the just-determined part to the over-determined part, i.e., by adding redundancy in the form of additional sensors or nonlinear state estimators. [00336] The following patents, applications, and publications, as listed below and throughout this document, describe various applications and systems that could be used in combination with the exemplary system and are hereby incorporated by reference in their entirety herein. [00393] [1B] A. Greenberg, "Hackers remotely kill a jeep on the highway - with me in it," Wired, vol. 7, no. 2, pp. 21-22, 2015. [00394] [2B] S. Checkoway, D. McCoy, B. Kantor, D. Anderson, H. Shacham, S. Savage, K. Koscher, A. Czeskis, F. Roesner, and T. Kohno, "Comprehensive experimental analyses of automotive attack surfaces," in 20th USENIX Security Symposium (USENIX Security 11), 2011. [00395] [3B] V. Renganathan, E. Yurtsever, Q. Ahmed, and A. Yener, "Valet attack on privacy: a cybersecurity threat in automotive bluetooth infotainment systems," Cybersecurity, vol. 5, no. 1, pp. 1-16, 2022. [00396] [4B] C. Schmittner, Z. Ma, C. Reyes, O. Dillinger, and P. Puschner, "Using SAE J3061 for automotive security requirement engineering," in International Conference on Computer Safety, Reliability, and Security. Springer, 2016, pp. 157-170. [00397] [5B] G. Macher, C. Schmittner, O. Veledar, and E. Brenner, "ISO/SAE DIS 21434 automotive cybersecurity standard - in a nutshell," in International Conference on Computer Safety, Reliability, and Security. Springer, 2020, pp. 123-135. [00398] [6B] C. Schmittner, "Automotive cybersecurity auditing and assessment - presenting the ISO PAS 5112," in European Conference on Software Process Improvement. Springer, 2022, pp. 521-529. [00399] [7B] O. Henniger, A. Ruddle, H. Seudié, B. Weyl, M. Wolf, and T. Wollinger, "Securing vehicular on-board IT systems: The EVITA project," in VDI/VW Automotive Security Conference, 2009, p. 41. [00400] [8B] G. Macher, H. Sporer, R. Berlach, E. Armengaud, and C. Kreiner, "SAHARA: a security-aware hazard and risk analysis method," in 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2015, pp. 621-624. [00401] [9B] P. Sharma Oruganti, P. Naghizadeh, and Q. Ahmed, "The impact of network design interventions on CPS security," in 2021 60th IEEE Conference on Decision and Control (CDC), 2021, pp. 3486-3492. [00402] [10B] A. Khazraei, S. Hallyburton, Q. Gao, Y. Wang, and M. Pajic, "Learning-based vulnerability analysis of cyber-physical systems," in 2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS). IEEE, 2022, pp. 259-269. [00403] [11B] M. Blanke, M. Staroswiecki, and N. Wu, "Concepts and methods in fault-tolerant control," in Proceedings of the 2001 American Control Conference (Cat. No. 01CH37148), vol. 4, 2001, pp. 2606-2620. [00404] [12B] D. Düştegör, E. Frisk, V. Cocquempot, M. Krysander, and M. Staroswiecki, "Structural analysis of fault isolability in the DAMADICS benchmark," Control Engineering Practice, 2006. [00405] [13B] J. Kim, C. Lee, H. Shim, Y. Eun, and J. H. Seo, "Detection of sensor attack and resilient state estimation for uniformly observable nonlinear systems having redundant sensors," IEEE Transactions on Automatic Control, vol. 64, no. 3, pp. 1162-1169, 2018. [00406] [14B] H. Shim, "A passivity-based nonlinear observer and a semi-global separation principle," Ph.D. dissertation, Seoul National University, 2000. [00407] [15B] A. L. Dulmage and N. S. Mendelsohn, "Coverings of bipartite graphs," Canadian Journal of Mathematics, vol. 10, pp. 517-534, 1958. [00408] [16B] M. Krysander, J. Åslund, and M. Nyberg, "An efficient algorithm for finding minimal overconstrained subsystems for model-based diagnosis," IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 38, no. 1, pp. 197-206, 2007. [00409] [17B] D. Lehmer, "On Euler's totient function," Bulletin of the American Mathematical Society, vol. 38, no. 10, pp. 745-751, 1932. [00410] [18B] M. Nyberg and E. Frisk, "Residual generation for fault diagnosis of systems described by linear differential-algebraic equations," IEEE Transactions on Automatic Control, vol. 51, no. 12, pp. 1995-2000, 2006. [00411] [19B] M. Nyberg, "Criterions for detectability and strong detectability of faults in linear systems," IFAC Proceedings Volumes, vol. 33, no. 11, pp. 617-622, 2000. [00412] [20B] S. Sundaram, "Fault-tolerant and secure control systems," University of Waterloo, Lecture Notes, 2012. [00413] [21B] S. Gracy, J. Milošević, and H. Sandberg, "Actuator security index for structured systems," in 2020 American Control Conference (ACC). IEEE, 2020, pp. 2993-2998. [00414] [22B] J. van der Woude, "The generic dimension of a minimal realization of an AR system," Mathematics of Control, Signals and Systems, vol. 8, no. 1, pp. 50-64, 1995. [00415] [23B] X. Li, X.-P. Zhao, and J. Chen, "Controller design for electric power steering system using T-S fuzzy model approach," International Journal of Automation and Computing, vol. 6, no. 2, pp. 198-203, 2009. [00416] [24B] S. Kamat, "Model predictive control approaches for lane keeping of vehicle," IFAC-PapersOnLine, vol. 53, no. 1, pp. 176-182, 2020. [00417] [25B] R. Marino, S. Scalzi, G. Orlando, and M. Netto, "A nested PID steering control for lane keeping in vision based autonomous vehicles," in 2009 American Control Conference. IEEE, 2009, pp. 2885-2890. [00418] [26B] C. Becker, L. Yount, S. Rozen-Levy, J. Brewer et al., "Functional safety assessment of an automated lane centering system," United States Department of Transportation, National Highway Traffic Safety Administration, Tech. Rep., 2018. [00419] [27B] Commaai, "Opendbc." [Online]. Available: https://github.com/commaai/opendbc [00420] [28B] T. Sato, J. Shen, N. Wang, Y. Jia, X. Lin, and Q. A. Chen, "Dirty road can attack: Security of deep learning based automated lane centering under physical-world attack," in 30th USENIX Security Symposium (USENIX Security 21), 2021.