


Title:
POSITION DETERMINATION OF A VEHICLE USING IMAGE SEGMENTATIONS
Document Type and Number:
WIPO Patent Application WO/2023/222671
Kind Code:
A1
Abstract:
The invention relates to a method for determining a position of a vehicle. The method comprises the following steps: In a first step, a reference map is provided. The reference map comprises segmentations of a reference image with landmarks. In a second step, a measurement image of a vehicle environment is captured. In a third step, segmentations of the measurement image and neighborhood graphs are determined to obtain a measurement map, wherein a segmentation is represented by a vertex and where a neighborhood graph comprises the vertex and edges containing information to identify neighboring vertices of the vertex. In a fourth step, segmentations of the reference image are compared with the segmentations represented by the vertices of the measurement image and the neighborhood graphs, and segmentations contained in the reference image and in the measurement image are determined. In a fifth step, estimating the vehicle's position with reference to the reference map during its movement along a road based on a result of the comparison is carried out.

Inventors:
KANG QIYU (SG)
SHE RUI (SG)
TAY WEE PENG (SG)
NAVARRO NAVARRO DIEGO (SG)
KHURANA RITESH (SG)
HARTMANNSGRUBER ANDREAS (SG)
GUAN YONG LIANG (SG)
Application Number:
PCT/EP2023/063085
Publication Date:
November 23, 2023
Filing Date:
May 16, 2023
Assignee:
CONTINENTAL AUTOMOTIVE TECH GMBH (DE)
UNIV NANYANG TECH (SG)
International Classes:
G06N3/02; G01C21/30; G06T7/10; G06T7/70; G06V20/56
Foreign References:
US 2018/0336421 A1 (2018-11-22)
EP 3 904 831 A1 (2021-11-03)
US 2022/0101600 A1 (2022-03-31)
Other References:
CHEN: "Rethinking atrous convolution for semantic image segmentation", arXiv:1706.05587, 2017
CORDTS: "The cityscapes dataset for semantic urban scene understanding", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2016, pages 3213 - 3223, XP033021503, DOI: 10.1109/CVPR.2016.350
DOSOVITSKIY: "CARLA: An open urban driving simulator", PROCEEDINGS OF THE 1ST ANNUAL CONFERENCE ON ROBOT LEARNING, 2017, pages 1 - 16
Attorney, Agent or Firm:
CONTINENTAL CORPORATION (DE)
Claims:
CLAIMS

1. Method (100) for determining a position of a vehicle, wherein the method (100) comprises the steps: providing a reference map 1802, the reference map 1802 comprising segmentations of a reference image with landmarks; capturing a measurement image 1804 of a vehicle environment; determining segmentations of the measurement image 1804 and neighborhood graphs, where a segmentation is represented by a vertex and where a neighborhood graph comprises the vertex and edges containing information to identify neighboring vertices of the vertex, to obtain a measurement map 1806; comparing segmentations of the reference map 1802 with the segmentations represented by the vertices of the measurement image 1804 and the neighborhood graphs and determining segmentations contained in the reference map 1802 and in the measurement image 1804; and estimating the vehicle’s position with reference to the reference map 1802 during its movement along a road based on a result of the comparison.

2. Method (100) according to claim 1, wherein the step comparing segmentations of the reference map 1802 with the segmentations represented by the vertices of the measurement image 1804 and the neighborhood graphs comprises the step: performing a rough localization to determine a road partition, which the vehicle is currently in, and selecting a reference image from the reference map 1802 that is related to the road partition.

3. Method (100) according to claim 1 or 2, wherein the step comparing segmentations of the reference map with the segmentations represented by the vertices of the measurement image 1804 and the neighborhood graphs comprises selecting an object from a set of objects contained in an image; wherein a segmentation represents an object.

4. Method (100) according to any of the previous claims, wherein the step comparing segmentations of the reference image with the segmentations represented by the vertices of the measurement image 1804 and the neighborhood graphs comprises: capturing real-time LiDAR data of the vehicle environment, extracting features from a segmentation of a selected object by a first Resnet, and extracting features from the LiDAR data of the selected object by a first PointNet and providing the extracted features to a common PointNet.

5. Method (100) according to claim 4, wherein the step of comparing segmentations of the reference map 1802 with the segmentations represented by the vertices of the measurement image 1804 and the neighborhood graphs comprises additionally: extracting features from neighbor segmentations of the segmentation of the selected object and providing the extracted features to a GAT, extracting features from the LiDAR point cloud of a neighboring object of the selected object and providing the extracted features to the GAT, and describing the extracted features containing spatial information, and providing the described reference image features to the common PointNet.

6. Method (100) according to claim 5, wherein the steps of claims 3 to 5 are performed for the reference image and the measurement image 1804; wherein the method further comprises the step: concatenating the extracted features from the first ResNet, the first PointNet and the GAT and determining a similarity between the features of the object of the reference image and the features of the object of the measurement image 1804.

7. Method (100) according to claim 6, wherein the step determining a similarity between the features of the object of the landmark map network part and the features of the object of the measurement map part comprises predicting labels by a Multi-Layer Perceptron (MLP) and calculating a loss function by calculating a cross entropy.

8. Method (100) according to any of the previous claims, wherein the step providing a reference map 1802, the reference map 1802 comprising segmentations with landmarks, comprises the sub-steps: capturing LIDAR data points and a reference image along a road for a road partition; mapping the LIDAR data points to the reference image; determining objects on the reference image and determining landmark segmentations from the reference image using a semantic segmentation neural network; and constructing a graph topological landmark map containing vertices corresponding each to a segmentation and edges, wherein an edge identifies neighboring vertices of a vertex.

9. Computer program element for determining a position of a vehicle, which, when running on a processing unit, causes the processing unit to carry out any of the methods as defined in claims 1 to 8.

10. Computer program element for determining matching landmark objects, the computer program element comprising: a landmark map network part and measurement map part; each of the landmark map network part and measurement map part comprising: an object selection module configured to select an object from a set of objects contained in an image; a Resnet configured to extract features from a first segmentation of the selected object and providing the extracted features to a common PointNet, a PointNet configured to extract features from a first LiDAR point cloud of the selected object and providing the extracted features to the common PointNet, a second Resnet configured to extract features from neighbor segmentations of the first segmentation and providing the extracted features to a GAT, a second PointNet configured to extract features from the LiDAR point cloud of the neighboring object of the selected object and providing the extracted features to a GAT, the GAT configured to describe the extracted features containing spatial information, and to provide the described landmark map features to the common PointNet; the common PointNet configured to concatenate the extracted features from the first ResNet, the first PointNet and the GAT and to determine a similarity between the features of the object of the landmark map network part and the features of the object of the measurement map part.

11. Computer readable medium on which a computer program element according to claim 9 or 10 is stored.

Description:
POSITION DETERMINATION OF A VEHICLE USING IMAGE SEGMENTATIONS

TECHNICAL FIELD

[0001] Various embodiments relate to methods for determining a position of a vehicle, to a computer program element for determining a position of a vehicle and to a computer program element for determining matching landmark objects. Further, the invention relates to a computer readable medium.

BACKGROUND

[0002] In autonomous driving, a precisely estimated vehicle position plays a crucial role in the perception, planning, and control functional systems in the autonomous vehicle to perform the driving decisions and actions effectively. The vehicle location is estimated with respect to a global coordinate system, e.g. the Earth-centred Inertial (ECI) coordinate system, the Earth-centred Earth-fixed (ECEF) coordinate system and the Universal Transverse Mercator (UTM) coordinate system. In structured or open environments, Global Navigation Satellite Systems (GNSS), such as GPS and GLONASS, usually provide sufficiently accurate localization results. More accurate localization for autonomous vehicles in unreliable GNSS scenarios, e.g. urban or tunnel areas where satellite signals are weakened or blocked, is required. Although more advanced GNSS technology like Differential GPS (DGPS) and Real Time Kinematic (RTK) may mitigate the issue by improving the localization accuracy compared to conventional GPS, they induce higher cost and still suffer from limited signal availability due to limited signal coverage. Therefore, GNSS cannot be used alone and is often integrated with other sensors in the context of autonomous driving. The inertial navigation system in the autonomous vehicle typically includes an Inertial Measurement Unit (IMU) sensor, a wheel odometry sensor, and GPS. The linear accelerations and vehicle angular velocities measured from accelerometers and gyroscopes in the IMU, together with the speed and turning measurements from wheel odometry sensors, can be used to estimate the vehicle position relative to its initial position from path integration, known as the dead reckoning technique. This simple method has low cost in a vehicle localization system but suffers from accumulated errors even with advanced algorithms. Moreover, the magnitude of the localization errors renders it inadequate for autonomous vehicles. Modern vehicle localization systems commonly depend on additional on-board sensors, like cameras, LiDAR, or radar sensors, which are also used for other functions of the autonomous vehicle, e.g. Object Detection and Response. Data fusion, the process of combining information from multi-modal sensory data to provide a robust and complete description of the surrounding environment, is therefore required to perform the localization task with less uncertainty and better accuracy compared to the case when those sources are used individually. In the literature, filtering methods like the basic Kalman filter and its extensions like the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF) and Sigma-Point Kalman Filters (SPKF) are commonly used. The recursive estimation process allows the probabilistic descriptions of observations from different sensor models to be included in the Bayes update. Furthermore, a quantitative evaluation of the role of each sensor towards the overall performance of the system is possible from the consistent use of statistical measures of uncertainty. In general, complex image data from camera sensors or point cloud data from laser scanners are inefficient to utilize directly in the filtering methods even though they provide rich information. Feature extraction techniques are first employed to extract useful information from the raw data.
Traditionally, features of interest include simple point features such as corners, edges and blobs from methods like the Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Features from Accelerated Segment Test (FAST) and Oriented FAST and Rotated BRIEF (ORB). In contrast, in deep learning approaches, neural networks with different architectures are applied directly to images or point clouds to extract features of interest to regress the vehicle movement. In localization, a reference global or local map needs to be defined first. The reference landmark map stores the detected landmarks during the data collection phase and can be categorized into two types: grid maps and topological maps. A grid map is represented by an array of structured cells (e.g. the pixels in an image) where each cell represents a region of the environment and contains features of the landmarks therein. In contrast, in a topological map, the landmarks together with the extracted features serve as nodes in a graph and the adjacency information reflects the geometric relations between the nodes. In an existing solution, a detailed grid map is first acquired using a vehicle equipped with an inertial navigation system and multiple laser range finders, and then a vehicle-mounted LiDAR sensor is used to localize the vehicle relative to the obtained map using a particle filter. In this work, the map is a fixed representation of the environment at each time instance. To achieve robust localization, maps are regarded as probability distributions over environments in each cell. The accuracy of these methods exceeds that of GPS-based methods by over an order of magnitude.

[0003] In another existing solution, a LiDAR is used to obtain a pre-mapped environment, from which landmarks are extracted. When a vehicle traverses the environment, landmarks extracted from vehicle-mounted sensors are associated with the LiDAR landmarks to obtain the vehicle locations. LiDAR is widely used in many other works and shows great capability in measuring the ranges of targets in the environment. However, it is weak in recognizing targets, which is one of the strong points of computer vision. Thus, in many other works, cameras are adopted to localize vehicles. In another existing solution, a single monocular camera is used to conduct ego localization. The camera image is used to estimate the ego position relative to a previously computed visual map. However, cameras cannot provide high-quality range information and their performance is influenced by light and weather. Hence, both LiDAR and an RGB-Depth camera are used for localization by incorporating visual tracking algorithms, depth information of the structured light sensor, and a low-level vision-LiDAR fusion algorithm. IMU, camera and LiDAR may also be fused to realize three-dimensional localization. In some works, the measurements need not be explicitly associated with the landmarks stored in the map. However, in many other works, data association (matching), the process of associating a measurement or feature when a vehicle traverses the environment to its corresponding previously extracted feature, is important. The Pearson product-moment correlation may be used to perform the data association. In visual images, the Sum of Squared Differences (SSD) and Normalized Cross Correlation (NCC) are traditional similarity measures that use the intensity differences between corresponding pixels in two image patches.

SUMMARY

[0004] According to a first aspect, a method for determining a position of a vehicle is provided. The method comprises the following steps: In a first step, a reference map is provided. The reference map comprises segmentations of a reference image with landmarks. In a second step, a measurement image of a vehicle environment is captured. In a third step, segmentations of the measurement image and neighborhood graphs are determined to obtain a measurement map, wherein a segmentation is represented by a vertex and where a neighborhood graph comprises the vertex and edges containing information to identify neighboring vertices of the vertex. In a fourth step, segmentations of the reference image are compared with the segmentations represented by the vertices of the measurement image and the neighborhood graphs, and segmentations contained in the reference image and in the measurement image are determined. In a fifth step, estimating the vehicle’s position with reference to the reference map during its movement along a road based on a result of the comparison is carried out.

[0005] According to a second aspect, a computer program element for determining a position of a vehicle is provided. The computer program element, when running on a processing unit, causes the processing unit to carry out the abovementioned method for determining a position of a vehicle.

[0006] According to a third aspect, a computer program element for determining matching landmark objects is provided. The computer program element includes: a landmark map network part and measurement map part; each of the landmark map network part and measurement map part comprising: an object selection module configured to select an object from a set of objects contained in an image; a Resnet configured to extract features from a first segmentation of the selected object and providing the extracted features to a common PointNet, a PointNet configured to extract features from a first LiDAR point cloud of the selected object and providing the extracted features to the common PointNet, a second Resnet configured to extract features from neighbor segmentations of the first segmentation and providing the extracted features to a GAT, a second PointNet configured to extract features from the LiDAR point cloud of the neighboring object of the selected object and providing the extracted features to a GAT, the GAT configured to describe the extracted features containing spatial information, and to provide the described landmark map features to the common PointNet; the common PointNet configured to concatenate the extracted features from the first ResNet, the first PointNet and the GAT and to determine a similarity between the features of the object of the landmark map network part and the features of the object of the measurement map part.

[0007] According to a fourth aspect, a computer readable medium on which the abovementioned computer program element is stored, is provided.

[0008] Additional features for advantageous embodiments are provided in the dependent claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments are described with reference to the following drawings, in which:

[0010] FIG. 1 shows an example of the three coordinate systems in the localization framework, according to various embodiments.

[0011] FIG. 2 shows an example of a localization coordinate system.

[0012] FIG. 3 shows an example of a sensor setup on a probe vehicle according to various embodiments.

[0013] FIG. 4 shows an example of a sensor setup on a probe vehicle according to various embodiments.

[0014] FIGS. 5A and 5B show an example of images captured in a time step, according to various embodiments.

[0015] FIG. 6 shows an example of segmentations obtained based on the camera image of FIG. 5A.

[0016] FIG. 7 shows an example of a landmark map.

[0017] FIG. 8 shows a graph topological measurement map constructed at time t and examples of vertices in the measurement graph.

[0018] FIG. 9 shows a rough localization obtained using GPS data.

[0019] FIG. 10 shows a process flow diagram of a Landmark Objects Matching Neural Network (LOMNN) according to various embodiments.

[0020] FIG. 11 shows graphs that measure the performance of the LOMNN.

[0021] FIG. 12 shows a process flow diagram of a Localization Neural Network according to various embodiments.

DESCRIPTION

[0022] Embodiments described below in context of the devices are analogously valid for the respective methods, and vice versa. Furthermore, it will be understood that the embodiments described below may be combined, for example, a part of one embodiment may be combined with a part of another embodiment.

[0023] It will be understood that any property described herein for a specific device may also hold for any device described herein. It will be understood that any property described herein for a specific method may also hold for any method described herein. Furthermore, it will be understood that for any device or method described herein, not necessarily all the components or steps described must be enclosed in the device or method, but only some (but not all) components or steps may be enclosed.

[0024] The described embodiments similarly pertain to the method for determining a position of a vehicle, to the computer program element for determining a position of a vehicle, to the computer program element for determining matching landmark objects, and the computer readable medium. Synergetic effects may arise from different combinations of the embodiments although they might not be described in detail.

[0025] In this context, the device as described in this description may include a memory which is for example used in the processing carried out in the device. A memory used in the embodiments may be a volatile memory, for example a DRAM (Dynamic Random Access Memory) or a non-volatile memory, for example a PROM (Programmable Read Only Memory), an EPROM (Erasable PROM), EEPROM (Electrically Erasable PROM), or a flash memory, e.g., a floating gate memory, a charge trapping memory, an MRAM (Magnetoresistive Random Access Memory) or a PCRAM (Phase Change Random Access Memory).

[0026] In order that the invention may be readily understood and put into practical effect, various embodiments will now be described by way of examples and not limitations, and with reference to the figures.

[0027] High-accuracy vehicle self-localization is essential for path planning and vehicle safety in autonomous driving. As such, there may be a desire to improve position determination for autonomous driving.

[0028] According to various embodiments, a method for determining a position of a vehicle is provided. The method for determining the position of the vehicle may also be referred herein as the method 100, which is illustrated as a flow diagram in FIG. 17. The vehicle may also be referred herein as the ego vehicle. The vehicle may be an autonomous vehicle.

[0029] The method 100 may include providing a reference map that includes landmarks from the environment. These landmarks may be useful for the vehicle to localize itself. The map may be generated off-line using static roadside objects such as traffic lights, traffic signs and poles, and these objects are specifically organized in a graph topology for calibration. The reference map may be also referred herein as calibration landmark map.

[0030] The method 100 may be performed by a localization framework that employs deep learning techniques to perform automatic feature extraction from sensors’ measurements. The localization framework may include a Convolutional Neural Network (CNN) based network to extract features from visual images captured by the vehicle’s camera and may include a Graph Convolutional Network (GCN) to integrate measurements from other sensors onboard the vehicle, e.g. LiDAR. The localization framework may estimate the vehicle’s location by comparing the extracted features from the equipped sensors’ real-time measurements and the calibration landmark map.

[0031] The method 100 may include various processes, including:

[0032] (a) constructing a graph topological landmark map for calibration;

[0033] (b) extracting features from camera images using a feature extraction neural network, such as CNN;

[0034] (c) integrating measurement data obtained from other sensor data (LiDAR) using a further network, such as GCN;

[0035] (d) comparing extracted features and the calibration landmark map using a recurrent neural network (RNN); and

[0036] (e) estimating the vehicle’s location based on the comparison, during its movement along the road.

[0037] In comparing the extracted features and the calibration landmark map, a matching neural network based on Graph Attention Networks may be used to perform data association to find matching landmark objects in the extracted features and the calibration landmark map.

[0038] The method 100 may achieve highly accurate (for example, centimeter-accurate) and precise localization while being robust to changes in the environment, for example, variable traffic flows and different weather conditions.

[0039] An implementation of the method 100 is described in the following paragraphs, with respect to FIGS. 1 to 13.

[0040] FIG. 1 shows an example of the three coordinate systems in the localization framework, according to various embodiments. The three coordinate systems include (i) the world coordinate system 102, (ii) the localization coordinate system 104 and (iii) the vehicle coordinate system 106. The world coordinate system 102 can be set as an Earth global surface coordinate system like the Universal Transverse Mercator (UTM) or even some self-defined coordinate systems in a city. In the following example, the UTM is used for the world coordinate system 102.

[0041] FIG. 2 shows an example of the localization coordinate system 104. The z-axis is left out of the figure for simplicity. The targeted roads are separated into M partitions 202, and each road partition 202 has a reference point as the origin together with a local coordinate system, also referred herein as the localization coordinate system 104. Each partition 202 roughly has length L, also referred herein as the partition length 204, along a road. The output of the localization framework is a position in the localization coordinate system 104 of the relevant road partition m, m = 1, 2, . . . , M, that the ego-vehicle is currently in. Positions from an earth-fixed localization coordinate system 104 may be transformed to a position in a world coordinate system 102. Beside these two external coordinate systems, the vehicle itself has a coordinate system, which is named the vehicle coordinate system 106. Sensors equipped in the ego vehicle may refer to the vehicle coordinate system 106.
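For illustration, a minimal sketch of the transformation from a partition's localization coordinate system into the world (UTM) coordinate system is given below. It assumes each road partition stores the UTM coordinates of its reference point and the heading of its local x-axis; the function name and these parameters are illustrative, not part of the described framework.

```python
import numpy as np

def local_to_world(p_local, partition_origin_utm, partition_heading_rad):
    """Transform a 2D position from a road partition's localization coordinate
    system into the world (UTM) coordinate system.

    p_local: (x, y) in the partition's local frame.
    partition_origin_utm: UTM easting/northing of the partition's reference point.
    partition_heading_rad: rotation of the local x-axis with respect to UTM east.
    """
    c, s = np.cos(partition_heading_rad), np.sin(partition_heading_rad)
    R = np.array([[c, -s], [s, c]])                 # 2D rotation, local -> world
    return np.asarray(partition_origin_utm) + R @ np.asarray(p_local)

# Example: a point 1.2 m ahead and 0.3 m to the left of the partition reference point.
print(local_to_world((1.2, 0.3), (364820.0, 144930.0), np.deg2rad(35.0)))
```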

[0042] FIG. 3 shows an example of a sensor setup on a probe vehicle 302 according to various embodiments. The method 100 may include providing a reference map, also referred herein as landmark calibration map. The reference map may be generated based on sensor data collected from a calibration run on target roads using the probe vehicle 302. The probe vehicle 302 may be equipped with sensors for acquiring environmental data. The sensors may include cameras, radar, and a LiDAR, like the sensor setup of an autonomous vehicle. In this example, the probe vehicle 302 is equipped with seven radars, one LiDAR sensor, one front camera and four surround view cameras. The fields of view (FOV) of the respective sensors are indicated as areas within lines 332, 334, 336 and 338, and their FOVs at least partially overlap. The radar sensor is the Advanced Radar System (ARS) 430 from Continental Automotive, which is a 77 GHz radar sensor with a digital beam-forming scanning antenna that offers two independent scans for far and near range. Four of the seven radars are positioned at the rear of the probe vehicle while three are positioned at the front of the vehicle. The FOVs of the radars are represented by the lines 332. The LiDAR sensor is the VLP 32 from Velodyne, which is a long-range three-dimensional LiDAR sensor. The LiDAR may be slightly tilted to the front of the probe vehicle, thereby resulting in a blind spot behind the probe vehicle. The FOV of the LiDAR is represented by the line 334. The front camera is positioned next to the LiDAR sensor and is a Blackfly front camera. The FOV of the front camera is represented by the line 336. The surround view cameras are SVC215 or 220 from Continental Automotive. The FOVs of the surround view cameras are represented by the lines 338.

[0043] FIG. 4 shows a partial FOV of the LiDAR sensor (as indicated by line 334’) and the FOV of the front camera (as indicated by line 336), according to the example shown in FIG. 3. The FOVs of the LiDAR sensor and the front camera may overlap.

[0044] From the sensor data collected in the calibration run on the target roads, data points collected by the LiDAR sensor may be mapped to images captured by the front camera, using the vehicle coordinate system at each time step. In other words, only LiDAR points that overlap with the FOV of the front camera are used, as shown in FIG. 4. The term “frame” is used herein to denote the images and the LiDAR points collected at each time step.
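As an illustration of this mapping step, the following sketch projects LiDAR points into the front camera image with a pinhole camera model. The extrinsic transform, intrinsic matrix and image size are placeholder values, not the calibration of the probe vehicle described here.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K, image_size):
    """Project LiDAR points (N, 3) onto the camera image plane.

    T_cam_from_lidar: 4x4 extrinsic transform (LiDAR frame -> camera frame).
    K: 3x3 camera intrinsic matrix; image_size: (width, height) in pixels.
    Returns pixel coordinates of the points that fall inside the image, plus the mask.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])     # homogeneous coordinates
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]        # points in the camera frame
    in_front = pts_cam[:, 2] > 0.1                         # keep points ahead of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-6)          # perspective division
    w, h = image_size
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    mask = in_front & inside
    return uv[mask], mask

# Example with random points, an identity extrinsic and a generic 1280x720 camera.
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
uv, mask = project_lidar_to_image(np.random.rand(100, 3) * 20 - 10, np.eye(4), K, (1280, 720))
```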

[0045] FIGS. 5A and 5B show an example of images captured in a time step, according to various embodiments. FIG. 5A illustrates an original image captured by the front camera at a certain time step and FIG. 5B illustrates the visualization of the LiDAR points mapped onto the front camera image.

[0046] To generate the reference map, objects that will be used as vertices may be identified. To this end, images captured by the front camera may be input to a semantic segmentation neural network. The semantic segmentation neural network may output a semantic segmentation, for example, the image 602 in FIG. 6, based on the input data. An example of the semantic segmentation neural network may be based on the DeepLabV3 architecture, as disclosed in “Rethinking atrous convolution for semantic image segmentation” by Chen et al., arXiv preprint arXiv:1706.05587, 2017, herein incorporated by reference. The semantic segmentation neural network model may be trained on the Cityscapes dataset, as disclosed in “The cityscapes dataset for semantic urban scene understanding” by Cordts et al., in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3213-3223.

[0047] FIG. 6 shows an example of segmentations obtained based on the camera image of FIG. 5A. The image 602 is an example of a semantic segmentation based on the camera image of FIG. 5A. The images 604, 606 and 608 are examples of cropped instance segmentations, obtained from the camera image of FIG. 5A and the semantic segmentation, in this case, the image 602. The cropped instance segmentations may be obtained from the image 602 by cropping the input image based on the image plane coordinates of objects in the semantic segmentation image 602. The cropped instance segmentations may be resized to, as an example, 256 x 256 pixels with padding. The static roadside objects, including traffic lights, traffic signs, poles, road edges, and building windows, may be utilized in the localization framework, while dynamic objects, such as vehicles, may be discarded. The dynamic objects are not useful for the localization framework, since their positions are likely to change in the next frame or the next run on the target roads.
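A minimal sketch of this cropping step is shown below, assuming an RGB image and an integer semantic label map are already available. The static class ids are placeholders (the real mapping depends on the segmentation model used), and SciPy/PIL are used here only for connected-component labelling and resizing.

```python
import numpy as np
from scipy import ndimage
from PIL import Image

STATIC_CLASS_IDS = (5, 6, 7)   # placeholder ids, e.g. pole, traffic light, traffic sign

def crop_landmark_segmentations(rgb, sem_mask, out_size=256):
    """Crop one square, padded, resized patch per static-object instance.

    rgb: HxWx3 uint8 camera image; sem_mask: HxW integer semantic label map.
    Returns a list of out_size x out_size patches (candidate landmark segmentations).
    """
    patches = []
    for cls in STATIC_CLASS_IDS:
        labeled, num = ndimage.label(sem_mask == cls)       # split the class mask into instances
        for inst in range(1, num + 1):
            ys, xs = np.where(labeled == inst)
            crop = rgb[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            h, w = crop.shape[:2]
            side = max(h, w)
            padded = np.zeros((side, side, 3), dtype=rgb.dtype)   # pad to a square patch
            padded[:h, :w] = crop
            patches.append(np.asarray(Image.fromarray(padded).resize((out_size, out_size))))
    return patches
```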

[0048] In the following, details of the localization framework are described. The key notations are summarized in the following table:

[0049] The cropped instance segmentations are referred herein as landmark segmentations, if they contain snapshots of static roadside objects. Each landmark segmentation in a road partition m may be denoted as $s_{m,j}$. The point clouds reflected from the landmark segmentation $s_{m,j}$ may be denoted as $p_{s_{m,j}}$. The point clouds $p_{s_{m,j}}$ may be obtained from the LiDAR points mapping (such as in FIG. 5B) and the pixel positions where the landmark segmentations are in the original camera images (such as in FIG. 5A).

[0050] The tuple $d_{m,j} = (s_{m,j}, p_{s_{m,j}})$ is referred herein as a landmark object in the road partition m, if $s_{m,j}$ is a landmark segmentation of the road partition m. A set of N landmark objects in the road partition m may be denoted as $\{d_{m,j}\}_{j=1}^{N}$. A set of objects with an arbitrary number of landmark objects rather than N may be denoted as $\{d_{m,j}\}_{j}$.

[0051] The method 100 may further include constructing a graph topological landmark map $G_m = (V_m, E_m)$ for calibration in each partition m, where m = 1, 2, . . . , M. $V_m$ is the set of vertices and $E_m$ is the set of edges. The vertex set $V_m = \{d_{m,j}\}_{j}$ contains the static roadside landmark objects in each partition m. A vertex in $V_m$ is connected to other vertices as its neighbours if and only if they are among the k nearest vertices to that vertex. Here, the coordinate of an object in 3D space is computed by taking the average of all its LiDAR points, i.e. $\frac{1}{|p_{s_{m,j}}|}\sum_{q \in p_{s_{m,j}}} q$, where $|p_{s_{m,j}}|$ is the cardinality of $p_{s_{m,j}}$, and the distance between two objects is measured by the Euclidean distance ($L_2$ norm).
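A minimal sketch of this graph construction, assuming each landmark object is represented by the LiDAR points of its segmentation, could look as follows; the helper name and the value of k are illustrative.

```python
import numpy as np

def build_landmark_graph(point_clouds, k=4):
    """Construct a k-nearest-neighbour graph over landmark objects.

    point_clouds: list of (N_i, 3) LiDAR point arrays, one per landmark segmentation.
    Each object's 3D coordinate is the mean of its LiDAR points; every vertex is
    connected to its k nearest vertices under the Euclidean (L2) distance.
    Returns the vertex coordinates and a directed edge list of (i, j) index pairs.
    """
    coords = np.stack([pc.mean(axis=0) for pc in point_clouds])     # centroid per object
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                                 # exclude self-edges
    edges = []
    for i in range(len(coords)):
        for j in np.argsort(dists[i])[:min(k, len(coords) - 1)]:    # k nearest neighbours of vertex i
            edges.append((i, int(j)))
    return coords, edges

# Example with three dummy landmark objects.
coords, edges = build_landmark_graph([np.random.rand(50, 3) + o for o in (0, 5, 9)], k=2)
```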

[0052] FIG. 7 shows an example of a landmark map 700. For simplicity, the landmark map 700 is constructed by using the static objects from the frame collected at the reference point in each road partition. The map quality may be improved by integrating objects from all frames collected in each partition. The landmark map 700 may include a plurality of submaps. Each submap may correspond to a respective partition. Each submap may include a plurality of vertices and a plurality of edges. Each vertex may correspond to a respective landmark segmentation. Each edge may connect two vertices. The method 100 for determining a position of a vehicle may further include performing a localization process, which may be performed in real-time. The localization process may include constructing a measurement map $G^{y}_{t} = (V^{y}_{t}, E^{y}_{t})$ based on sensor data that may be captured and received in real-time. The sensor data may be captured by sensors on the vehicle for which the position is to be determined. This vehicle is also referred herein as the ego vehicle. The ego vehicle may be a different vehicle from the probe vehicle 302. Similar to the steps described above in relation to the landmark map construction, the measurement map may be constructed based on the frame collected at the time step t using the static roadside objects obtained from the image processed by the segmentation neural network and the LiDAR points. A vertex in $V^{y}_{t}$ may be connected to other neighboring vertices if and only if they are among the k nearest vertices to that vertex.

[0053] The method 100 may further include performing a rough localization to determine which road partition, and hence which landmark map graph $G_m$, the ego vehicle is currently in. For simplicity, the GPS position measured at each time step t may be used to determine the rough location of the ego vehicle. Alternatively, other techniques such as an extended Kalman filter and image matching may be used to perform the rough localization.
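A minimal sketch of such a rough partition lookup, assuming the GPS fix has already been transformed into the world frame and each partition stores the coordinates of its reference point (both assumptions for illustration):

```python
import numpy as np

def rough_partition(gps_xy, partition_reference_points):
    """Pick the road partition whose reference point is closest to the rough GPS fix.

    gps_xy: rough vehicle position in the world frame.
    partition_reference_points: (M, 2) array of partition origins in the same frame.
    Returns the index m of the landmark map graph G_m to use for matching.
    """
    d = np.linalg.norm(partition_reference_points - np.asarray(gps_xy), axis=1)
    return int(np.argmin(d))

# Example with three partition reference points spaced 2 m apart along a road.
print(rough_partition((3.1, 0.2), np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])))
```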

[0054] FIG. 8 shows a graph topological measurement map constructed at time t and examples of vertices in the measurement graph.

[0055] FIG. 9 shows a rough localization obtained using GPS data. If the rough location obtained by GPS or other means at the time step t shows that the ego vehicle is in road partition m, the corresponding landmark map graph $G_m$ is selected for the subsequent matching.

[0056] Next, the localization process may include identifying landmark objects that are common to the landmark map graph $G_m$ and the measurement graph $G^{y}_{t}$. Landmark objects $d^{x} \in V_m$ and $d^{y} \in V^{y}_{t}$ are considered to be common between the two graphs if $s^{x}$ and $s^{y}$ are snapshots of the same static roadside object.

[0057] Identifying these common landmark objects may include matching the vertices in the measurement map and the landmark map graph using a Landmark Objects Matching Neural Network (LOMNN). The vertices in the measurement map may contain different objects from those in the landmark map graph. The representations may be different even if they are from the same static roadside object, since the data are collected from different locations under different road conditions. The LOMNN is described further with respect to FIGS. 10 and 11.

[0058] When the matched pairs of landmark objects are found, the common landmark objects in the two different graphs are extracted. Two sub-graphs containing the common landmark objects may be constructed, one from the landmark map graph and one from the measurement graph. Different from conventional matching methods for images, the method 100 may not only exploit the images of the landmark segmentations but also make full use of the LiDAR points on these segmentations.

[0059] The localization process may further include extracting features from the common landmark objects and comparing the extracted features using a Localization Neural Network (LNN) to perform localization regression. The LNN may compare the real-time measured data to the calibration landmark map, also referred herein as landmark map or reference map. The sub-graphs may be input to the LNN. The LNN may output an estimated location of the ego vehicle with respect to the localization coordinate system in each road partition. The LNN is described further with respect to FIGS. 12 and 13.

[0060] FIG. 10 shows a process flow diagram of the LOMNN according to various embodiments. The LOMNN may be a trained network that is configured to perform a landmark object matching process 1000. The landmark object matching process 1000 determines whether a given pair of landmark objects can be regarded as the same. The LOMNN may label a pair of landmark objects with a matched label “1” if the LOMNN determines them to be matching. The LOMNN may label a pair of landmark objects with a mismatched label “0” if the LOMNN determines them to be mismatched. The LOMNN may include three kinds of neural networks, namely Resnet, GAT and PointNet. The Resnet may be configured to extract features from the images. The GAT may be configured to perform subgraph feature description. The PointNet may be configured to compare features.

[0061] In a first step, features of landmark objects are extracted. Two landmark objects may be chosen arbitrarily, one from the landmark map graph and one from the measurement graph. Then, the Resnets and PointNets may be used to extract the features of these two landmark objects. In particular, similar to conventional image processing methods based on neural networks, the Resnets may be adopted to extract high-dimensional features from the landmark segmentations. Meanwhile, the PointNets may be exploited to handle the corresponding LiDAR point clouds. For each landmark object, the extracted image features and point cloud features are then concatenated, where “||” denotes the concatenation operation.
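As an illustration of this step, a simplified PyTorch sketch is given below. It pairs a small convolutional image encoder (a stand-in for the ResNet branch) with a PointNet-style point cloud encoder and concatenates their outputs; the architectures and dimensions are placeholders, not the networks actually used in the LOMNN.

```python
import torch
import torch.nn as nn

class TinyImageEncoder(nn.Module):
    """Stand-in for the ResNet branch: maps a 3x256x256 segmentation crop to a feature vector."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):                       # x: (B, 3, 256, 256)
        return self.fc(self.conv(x).flatten(1))

class TinyPointNet(nn.Module):
    """Minimal PointNet-style encoder: shared per-point MLP followed by max pooling."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, out_dim, 1), nn.ReLU(),
        )

    def forward(self, pts):                     # pts: (B, 3, N) LiDAR points of one object
        return self.mlp(pts).max(dim=2).values  # symmetric max-pool over the points

image_enc, point_enc = TinyImageEncoder(), TinyPointNet()
seg = torch.rand(1, 3, 256, 256)                # one landmark segmentation crop
pts = torch.rand(1, 3, 500)                     # its LiDAR point cloud
feature = torch.cat([image_enc(seg), point_enc(pts)], dim=1)   # "||" concatenation, shape (1, 256)
```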

[0062] In a second step, the GAT may be used to determine the feature description. With respect to the two selected landmark objects, the corresponding neighborhood graphs may be constructed, each consisting of a set of vertices and a set of edges. Specifically, the vertex set includes the k nearest landmark objects of the selected object, which itself is regarded as the i-th vertex, while the edges are described by the corresponding connection relationship matrix for these landmark objects, i.e. by the set of subscripts of the k nearest vertices to the i-th vertex. The same construction is used on the landmark map side and on the measurement side. Then, the landmark objects in the corresponding neighborhood graphs may be processed by Resnets and PointNets to extract the corresponding features.

[0063] The feature extraction process may include processing the landmark segmentations in the neighborhood graphs by Resnets, while the corresponding LiDAR point clouds may be processed by PointNets. Furthermore, the features of the landmark objects and the corresponding connection relationship matrices may be handled by the GAT to describe their features containing spatial information. Specifically, the attention mechanism of a graph attentional layer with respect to a pair of vertices (u, v) may be described as

$$\xi_{uv} = \operatorname{softmax}_{v \in \mathcal{N}(u)}\Big(\mathrm{LeakyReLU}\big(a^{T}\,[W h_{u} \,\|\, W h_{v}]\big)\Big),$$

where $(\cdot)^{T}$ denotes transposition, $\|$ is the concatenation operation, $h_{u}$ and $h_{v}$ are the feature vectors of the vertices u and v, $\mathcal{N}(u)$ is the neighborhood of vertex u, $\xi_{uv}$ is the attention coefficient of vertex v to u, W denotes the weight matrix applied to each vertex, and a is the weight vector. A LeakyReLU nonlinearity may be used here.

[0064] When multi-head attention is considered, that is, M independent attention mechanisms are used to stabilize the learning process, the output features of each vertex (after applying a nonlinearity $\sigma$) are given by

$$h'_{u} = \big\Vert_{m=1}^{M}\,\sigma\Big(\sum_{v \in \mathcal{N}(u)} \xi^{m}_{uv}\, W^{m} h_{v}\Big),$$

and the output of the final (prediction) layer may be described as

$$h'_{u} = \sigma\Big(\frac{1}{M}\sum_{m=1}^{M}\sum_{v \in \mathcal{N}(u)} \xi^{m}_{uv}\, W^{m} h_{v}\Big),$$

where $\xi^{m}_{uv}$ and $W^{m}$ are the attention coefficients and the weight matrix of the m-th attention head. The whole output from the GAT may comprise the output features of all vertices of the neighborhood graph.
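For illustration, the following minimal PyTorch sketch implements a single multi-head graph attention layer of the kind described by the equations above. The dense adjacency representation, dimensions and head count are illustrative simplifications, not the exact GAT used in the LOMNN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    """One multi-head graph attention layer following the equations above."""
    def __init__(self, in_dim, out_dim, heads=4):
        super().__init__()
        self.heads, self.out_dim = heads, out_dim
        self.W = nn.Linear(in_dim, heads * out_dim, bias=False)   # weight matrix W per head
        self.a = nn.Parameter(torch.randn(heads, 2 * out_dim))    # attention weight vector a per head

    def forward(self, h, adj):
        # h: (N, in_dim) vertex features, adj: (N, N) 0/1 connection relationship matrix
        N = h.size(0)
        Wh = self.W(h).view(N, self.heads, self.out_dim)          # W h_u for every vertex and head
        # score(u, v) = LeakyReLU(a^T [W h_u || W h_v]) for every vertex pair
        cat = torch.cat([Wh.unsqueeze(1).expand(N, N, -1, -1),
                         Wh.unsqueeze(0).expand(N, N, -1, -1)], dim=-1)      # (N, N, heads, 2*out_dim)
        scores = F.leaky_relu((cat * self.a).sum(-1), negative_slope=0.2)    # (N, N, heads)
        scores = scores.masked_fill(adj.unsqueeze(-1) == 0, float("-inf"))   # attend only to neighbours
        xi = torch.softmax(scores, dim=1)                                    # attention coefficients xi_uv
        out = torch.einsum("uvk,vkd->ukd", xi, Wh)                           # weighted neighbour aggregation
        return torch.relu(out.reshape(N, -1))                                # concatenate the heads

h = torch.rand(5, 256)                   # features of 5 landmark objects (image || point cloud)
adj = (torch.rand(5, 5) > 0.5).float()
adj.fill_diagonal_(1.0)                  # include self-connections so every row has a neighbour
print(SimpleGATLayer(256, 64)(h, adj).shape)   # torch.Size([5, 256])
```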

[0065] In a third step, the outputs from the previous two steps may be concatenated as the inputs for the common PointNet, which provides the similarity measure for the feature comparison. A Multi-Layer Perceptron (MLP) may be introduced to output the predicted labels, where the loss function is the cross-entropy loss between the true label and the predicted label.
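A simplified sketch of the label prediction and loss computation is shown below; it uses only an MLP head on the concatenated features (omitting the common PointNet stage for brevity), and all dimensions and batch contents are illustrative.

```python
import torch
import torch.nn as nn

feat_dim = 512                           # illustrative: per-object features incl. the GAT output
head = nn.Sequential(                    # MLP predicting matched (1) / mismatched (0)
    nn.Linear(2 * feat_dim, 128), nn.ReLU(),
    nn.Linear(128, 2),
)
loss_fn = nn.CrossEntropyLoss()          # cross-entropy between predicted and true labels

feat_x = torch.rand(8, feat_dim)         # 8 landmark-map objects
feat_y = torch.rand(8, feat_dim)         # 8 measurement objects paired with them
labels = torch.randint(0, 2, (8,))       # true matched/mismatched labels
logits = head(torch.cat([feat_x, feat_y], dim=1))
loss = loss_fn(logits, labels)
```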

[0066] In a fourth step, a matched sub-graph may be constructed. After identifying the common landmark objects in the landmark map graph and the measurement graph, two sub-graphs containing the common landmark objects may be generated, one for the landmark map and one for the measurement map. In the landmark sub-graph, two vertices are connected to each other if and only if the two vertices are connected to each other in the landmark map graph. Similarly, in the measurement sub-graph, two vertices are connected to each other if and only if the two vertices are connected to each other in the measurement graph.
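A minimal sketch of constructing such an induced sub-graph from an edge list and the indices of the common landmark objects (the helper name is illustrative):

```python
def induced_subgraph(edges, common_vertices):
    """Keep only edges whose endpoints are both common landmark objects, so the
    sub-graph preserves the connectivity of the original graph; edges are (i, j) pairs."""
    keep = set(common_vertices)
    remap = {v: n for n, v in enumerate(sorted(keep))}   # re-index the kept vertices 0..len-1
    return [(remap[i], remap[j]) for (i, j) in edges if i in keep and j in keep]

# Example: common objects 0, 2 and 3 of a 5-vertex measurement graph.
print(induced_subgraph([(0, 1), (0, 2), (2, 3), (3, 4)], [0, 2, 3]))   # [(0, 1), (1, 2)]
```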

[0067] Experiments were carried out to evaluate the performance of the LOMNN. Real sensor data from Continental Automotive Pte Ltd was used to train and test the model. The sensor data captured landmark segmentations including traffic lights, traffic signs and lamp posts, for which there were also LiDAR points. About 5,000 pairs of segmentations were used, of which 80% of the labelled pairs were used as training data and 20% as testing data. In the training process, the model was trained for 100 epochs to reveal the corresponding matching performance. The accuracy score, recall score and F1 score are considered to show the matching performance. In addition, different numbers k of nearest neighbors in the neighborhood maps were chosen in the proposed model so that it can be evaluated whether the number k has a great impact on the matching performance.

[0068] FIG. 11 shows graphs 1110, 1120 and 1130 that measure the performance of the LOMNN. The graph 1110 shows the accuracy score of the output of the LOMNN as a function of the number of epochs, for k=2, k=4, and k=6. The graph 1120 shows the recall score of the output of the LOMNN as a function of the number of epochs, for k=2, k=4, and k=6. The graph 1130 shows the F1 score of the output of the LOMNN as a function of the number of epochs, for k=2, k=4, and k=6. The graphs show that the LOMNN can achieve over 98% for each of the accuracy, recall and F1 scores after training for more than 40 epochs. Also, the different values of k, i.e. the number of neighbors, did not result in substantial changes in the performance of the LOMNN.

[0069] FIG. 12 shows a process flow diagram of the LNN according to various embodiments. The LNN may first use a first extraction network $f_x(\cdot)$ and a second extraction network $f_y(\cdot)$ to extract features of the landmark objects in the two sub-graphs, respectively. $f_x(\cdot)$ and $f_y(\cdot)$ may have the same network architecture but separate trainable parameters, which is depicted in FIGS. 12 and 13. Similar to the LOMNN operation shown in FIG. 10, Resnets $R_x(\cdot)$ and $R_y(\cdot)$ may be used to extract features from the common landmark segmentations. Additionally, PointNets $P_x(\cdot)$ and $P_y(\cdot)$ may extract features from the corresponding LiDAR point clouds. The PointNets may be useful for reasoning about unordered data (e.g. LiDAR points) with arbitrary numbers of points, at least partly due to the max-pooling operator in PointNet.

[0070] The Graph Attention Networks (GAT) are represented by $g_x(\cdot)$ and $g_y(\cdot)$. Like $f_x(\cdot)$ and $f_y(\cdot)$, $g_x(\cdot)$ and $g_y(\cdot)$ may have the same network architecture but separate trainable parameters. $g_x(\cdot)$ may further extract features for each landmark object from the graph-structured data, where the input of vertex i is the concatenation of its image features and its point cloud features, and $\|$ denotes the concatenation operation. The same operation may be performed by $g_y(\cdot)$ on the measurement sub-graph.

[0071] The LNN may optionally further include a Recurrent Neural Network (RNN), such as a long short-term memory (LSTM) network. The RNN may receive the output of the GAT and may learn from the measurement history during the driving time. The RNN may improve the performance of the LNN through learning from the historical data, compared to when the LNN uses only the data collected at the current time t.
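A minimal sketch of such an optional recurrent stage, with an LSTM running over the per-time-step GAT feature vectors (feature dimension and history length are illustrative):

```python
import torch
import torch.nn as nn

# The LSTM consumes the per-time-step feature vectors so that the location
# regression can use the measurement history, not only the current time step t.
lstm = nn.LSTM(input_size=256, hidden_size=128, batch_first=True)
gat_features_history = torch.rand(1, 10, 256)     # last 10 time steps for one vehicle
outputs, (h_n, c_n) = lstm(gat_features_history)
current_feature = outputs[:, -1]                  # feature passed on to the regression stage
```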

[0072] A new PointNet may then perform a comparison for the extracted features from the pairs of common landmark objects. A multilayer perceptron (MLP) may be applied to perform the neural network regression using the output of the final features from this PointNet.

[0073] The loss function of the LNN may be the Huber loss function

$$L_{\delta}(g, \hat{g}) = \begin{cases} \tfrac{1}{2}\,(g - \hat{g})^{2}, & \text{if } |g - \hat{g}| \le \delta,\\ \delta\left(|g - \hat{g}| - \tfrac{1}{2}\,\delta\right), & \text{otherwise,} \end{cases}$$

where $g$ is the ground truth location, $\hat{g}$ is the model output and $\delta$ is the transition parameter of the Huber loss. During the training of the LNN, the ground truth matching of objects may be intentionally perturbed with a small probability (less than 0.1), which makes the test results more promising.
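For reference, an equivalent loss is available in PyTorch (version 1.9 or later) as nn.HuberLoss; a minimal usage sketch with illustrative coordinates:

```python
import torch
import torch.nn as nn

loss_fn = nn.HuberLoss(delta=1.0)        # delta is the transition point of the Huber loss
g = torch.tensor([[1.20, 0.35]])         # ground-truth (x, y) in the partition frame
g_hat = torch.tensor([[1.17, 0.40]])     # LNN regression output
print(loss_fn(g_hat, g))
```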

[0074] Experiments were conducted using the LNN to validate its performance. For the experiments, the RNN was excluded for simplicity. The experimental results will be described in the following paragraphs.

[0075] The LNN was evaluated using two artificial datasets generated by the CARLA simulator and a real dataset provided by Continental Automotive Pte. Ltd. The CARLA simulator is disclosed in “CARLA: An open urban driving simulator” by Dosovitskiy et al. in Proceedings of the 1st Annual Conference on Robot Learning, 2017, pp. 1-16.

[0076] FIG. 14 shows the artificial datasets generated by the CARLA simulator. The datasets include a first CARLA dataset 1402 and a second CARLA dataset 1404. The two datasets have different driving traces in a common town. Each of the first and second CARLA datasets was split into 80% for training and 20% for testing. Traffic lights, poles, traffic signs and roadside objects (chairs, flowerpots, etc.) were added to the artificial datasets as landmark objects. The partition length is set to L = 2 metres. The LNN is first trained using the training portion of the first CARLA dataset 1402, and then tested using the testing portion of the same dataset.

[0077] The test results for the first CARLA dataset 1402 with L = 2, stated in terms of the root-mean-square error (RMSE), are 0.015 m in the x-direction and 0.066 m in the y-direction. The inference time is 0.3 s. The results are also shown visually in FIG. 15.

[0078] FIG. 15 shows the test results testing the LNN using the first CARLA dataset 1402, represented by a plot 1500A and a bar chart 1500B. The plot 1500A includes ground truth data points 1502, indicated as black circles in the plot 1500A, and output data points 1504 which are indicated as white circles in the plot 1500A. The output datapoints 1504 were generated by the LNN based on the testing portion of the first CARLA dataset 1402, after it was trained using the training portion of the first CARLA dataset 1402. The bar chart 1500B shows the percentage of output data points in relation to their distance error. About 90% of the output data points achieved a distance error of less than 0.05m.

[0079] Next, the LNN trained using the first CARLA dataset 1402 was tested using the second CARLA dataset 1404. The test results for the second CARLA dataset 1404 with L = 2, stated in terms of the root-mean-square error (RMSE), are 0.021 m in the x-direction and 0.078 m in the y-direction. The inference time is 0.3 s. More than 80% of the output data points achieved a distance error of less than 0.05 m. While the LNN performed slightly worse with unseen data from a different driving trace, the localization accuracy remains sufficiently high for practical applications.

[0080] The trained LNN was also tested using a real dataset provided by Continental Automotive Pte Ltd. For the real dataset, odometry results obtained using pairs of adjacent frames were used instead of localization results, as the labels were not fully provided in the real dataset. The test results for the real dataset, stated in terms of the root-mean-square error (RMSE), are 0.001 m in the x-direction and 0.015 m in the y-direction. The inference time is 0.3 s. The results are also shown visually in FIG. 16.

[0081] FIG. 16 shows the test results of testing the LNN using the real dataset, represented by a plot 1600A and a bar chart 1600B. The plot 1600A includes ground truth data points 1602, indicated as black circles in the plot 1600A, and output data points 1604 which are indicated as white circles in the plot 1600A. The bar chart 1600B shows the percentage of output data points in relation to their distance error. From the test results, it is apparent that centimeter-level localization accuracy is achieved for the real dataset under the criterion of the Root Mean Squared Error (RMSE). Over 90% of the prediction results have distance errors of less than 0.05 metres. Moreover, the mean inference time on the real dataset is acceptable in practice. As such, the test results validated the feasibility of the proposed method for determining a position of a vehicle.

[0082] FIG. 17 shows a flow diagram of the method 100 for determining a position of a vehicle according to various embodiments.

[0083] FIG. 18 shows a simplified schematic diagram of a localization framework 1800 according to various embodiments.

[0084] Referring to both FIGS. 17 and 18, the method 100 may include the following steps: in a first step 1702, a reference map 1802 is provided. The reference map 1802 comprises segmentations of a reference image with landmarks. In a second step 1704, a measurement image 1804 of a vehicle environment is captured. In a third step 1706, segmentations of the measurement image 1804 and neighborhood graphs are determined to obtain a measurement map, wherein a segmentation is represented by a vertex and where a neighborhood graph comprises the vertex and edges containing information to identify neighboring vertices of the vertex. In a fourth step 1708, segmentations of the reference image (as provided in the reference map 1802) are compared with the segmentations represented by the vertices of the measurement image 1804 and the neighborhood graphs, and segmentations contained in the reference image and in the measurement image are determined. In a fifth step 1710, the vehicle’s location/position is estimated, i.e., determined, with reference to the reference map 1802 during its movement along a road based on a result of the comparison.

[0085] The LOMNN 1810 may find matching landmark objects that appear in both the reference map 1802 and the measurement map 1806. The LNN 1812 may output the vehicle position estimate 1808 based on the matching results provided by the LOMNN 1810, and the reference map 1802.

[0086] In the context of the present invention, a segmentation can be regarded as a part of an image. As a simple example, a grid is laid over the image, and a segmentation may be a rectangular grid element in this case. However, the segmentation may be of any shape. In the described method, the segmentations are determined such that they correspond to landmark objects, as described further below in more detail. Although the complete image may be divided into segmentations, segmentations of most interest for the algorithm and method described herein may contain a landmark or a detail of a landmark. The complete image which is divided into segmentations is also referred to as “frame” herein.

[0087] The reference map 1802 is determined once in a first phase of the method and is used for calibration purposes. The term “reference image” is also referred to as “reference map image”. The expression “map” refers in general to a set of graphs containing vertices and their edges. The reference map contains segmentations of static landmarks. Therefore, the reference map is also referred to herein as landmark map, or landmark calibration map.

[0088] As will be explained hereinafter in more detail in the context of particular embodiments, the fifth step 1710, i.e., the estimation of the vehicle’s location/position with reference to the reference map 1802 during its movement along a road is based on a matching. The matching may be performed by the LOMNN 1810.

[0089] According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the step comparing segmentations of the reference image with the segmentations represented by the vertices of the measurement image 1804 and the neighborhood graphs comprises the step: performing a rough localization to determine a road partition, which the vehicle is currently in, and selecting a reference image from the reference map 1802 that is related to the road partition. A rough position may be used for determining a reference image of the reference map 1802 such that the content of the images is similar, that is, contains at least in part the same objects. The rough position may be estimated using an external system such as a navigation system. The navigation system may be a satellite system such as a GNSS system, which may be augmented by an overlay system such as EGNOS or DGNSS (differential GNSS), and/or which may be supported by additional on-board sensors. As is understood by the skilled reader, also other systems can be used.

[0090] The expression “rough localization” means that the accuracy of the localization should be sufficient to determine the road partition in which the vehicle is driving at the time of measurement. The accuracy may, for example, be half the length interval of the road partition, that is, for example, in the order of half a meter, a meter or even several meters. At the border of a road partition, it is acceptable if the determined road partition is falsely the next road partition.

[0091] The vehicle’s location is hence estimated from a regression neural network by comparing the extracted features from the equipped sensors’ real-time measurements and the calibration landmark map. By performing the matching between the vertices in the graph of the measurement map and the landmark map, common objects are determined, that is, snapshots of the same static roadside object.

[0092] According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the step comparing segmentations of the reference map with the segmentations represented by the vertices of the measurement image and the neighborhood graphs comprises selecting an object from a set of objects contained in an image, wherein a segmentation represents an object. In this embodiment, the images contain a set of objects. The segmentations are defined such that they contain an object. The objects for the comparison may be selected randomly. The comparison, including the selection, is therefore performed repeatedly.

[0093] According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the step comparing segmentations of the reference image with the segmentations represented by the vertices of the measurement image and the neighborhood graphs comprises extracting features from a segmentation of a selected object by a first Resnet, and extracting features from the LiDAR data of the selected object by a first PointNet and providing the extracted features to a common PointNet. As is understood by the skilled reader, a PointNet is a simple and effective neural network for point cloud recognition, and a ResNet is a residual neural network, which is a deep learning model. In particular, similar to conventional image processing methods based on neural networks, the Resnets are adopted to extract the high-dimensional features for the segmentations. The PointNets are exploited to handle the corresponding LiDAR data, that is, the LiDAR point cloud.

[0094] According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the step of comparing segmentations of the reference map with the segmentations represented by the vertices of the measurement image and the neighborhood graphs comprises additionally: extracting features from neighbor segmentations of the segmentation of the selected object and providing the extracted features to a GAT, extracting features from the LiDAR point cloud of a neighboring object of the selected object and providing the extracted features to the GAT, and describing the extracted features containing spatial information, and providing the described reference image features to the common PointNet. This corresponds to the previous embodiment, however using neighborhood graphs, that is, the vertices neighboring the vertex or segmentation of the selected object.

[0095] According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the previous steps are performed for the reference image and the measurement image, and the method further comprises the step: concatenating the extracted features from the first ResNet, the first PointNet and the GAT and determining a similarity between the features of the object of the reference image and the features of the object of the measurement image. In other words, all previously described outputs, i.e. those of the first ResNet, the first PointNet and the GAT, are provided to the common PointNet, which actually performs the comparison.
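
A sketch of this comparison step is given below. The small fusion network standing in for the common PointNet, the feature dimensions, and the use of a cosine similarity are assumptions chosen for illustration only; a trainable classification head as in the following embodiment may equally be used.

```python
# Hedged sketch: concatenate the ResNet, PointNet and GAT features of an object,
# embed them, and compare reference and measurement embeddings by a similarity score.
import torch
import torch.nn as nn


class CommonFusion(nn.Module):
    def __init__(self, in_dim=512 + 256 + 256):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                    nn.Linear(256, 128))

    def forward(self, resnet_feat, pointnet_feat, gat_feat):
        fused = torch.cat([resnet_feat, pointnet_feat, gat_feat], dim=-1)
        return self.encode(fused)


fusion = CommonFusion()
ref_embedding = fusion(torch.rand(512), torch.rand(256), torch.rand(256))   # reference object
meas_embedding = fusion(torch.rand(512), torch.rand(256), torch.rand(256))  # measurement object
similarity = torch.cosine_similarity(ref_embedding, meas_embedding, dim=0)
```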

[0096] According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the step determining a similarity between the features of the object of the landmark map network part and the features of the object of the measurement map part comprises predicting labels by a Multi-Layer Perceptron (MLP) and calculating a loss function by calculating a cross entropy. The cross entropy is computed from the true label and the predicted label.
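
For illustration, a minimal sketch of such a classification head is given below; the layer sizes, the batch of eight candidate pairs, and the binary match/no-match labels are assumptions for this sketch.

```python
# Hedged sketch: an MLP predicts match / no-match labels from the compared features,
# and the loss is the cross entropy between predicted and true labels.
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))  # 2 classes: match / no match
criterion = nn.CrossEntropyLoss()

compared_features = torch.rand(8, 128)    # one row per candidate object pair
true_labels = torch.randint(0, 2, (8,))   # 1 = same static object, 0 = different objects

logits = mlp(compared_features)
loss = criterion(logits, true_labels)     # cross entropy of predicted vs. true labels
```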

[0097] According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the step providing a reference map, the reference map comprising segmentations with landmarks, comprises the sub-steps: capturing LiDAR data points and a reference map image along a road for a road partition, mapping the LiDAR data points to the reference image, determining objects on the reference image and landmark segmentations from the reference image using a semantic segmentation neural network, and constructing a graph topological landmark map containing vertices, each corresponding to a segmentation, and edges, where an edge identifies neighboring vertices of a vertex.
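
The construction of such a graph topological landmark map for one road partition could look as sketched below. The k-nearest-neighbor rule used to create the edges and the data layout of the vertices are assumptions for this sketch; the description above only requires that edges identify neighboring vertices.

```python
# Hedged sketch: one vertex per landmark segmentation, an object coordinate obtained
# from its LiDAR points, and edges pointing to the k nearest neighboring vertices.
import numpy as np


def build_landmark_graph(segmentations, object_points, k=3):
    """segmentations: list of cropped landmark segmentations (one per static object).
    object_points: list of (N_i, 3) LiDAR point arrays, one array per object."""
    coords = np.stack([pts.mean(axis=0) for pts in object_points])  # object coordinates
    vertices = [{"segmentation": seg, "coordinate": c}
                for seg, c in zip(segmentations, coords)]
    edges = []
    for i, c in enumerate(coords):
        dists = np.linalg.norm(coords - c, axis=1)
        neighbors = np.argsort(dists)[1:k + 1]         # skip the vertex itself
        edges.extend((i, int(j)) for j in neighbors)   # an edge identifies a neighboring vertex
    return {"vertices": vertices, "edges": edges}
```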

[0098] Images are captured at pre-defined length intervals along the road. These intervals, in terms of distance, are referred to as road partitions. The reference map image contains at least this part of the road and the environment visible from the point where the image is captured, i.e. the beginning of the interval.

[0099] For the determination of objects, a semantic segmentation neural network is used. By the semantic segmentation, an intermediate image is obtained that distinguishes object types such as buildings, roads, trees, traffic lights, etc., where details, colors, shades, etc. are disregarded, so that it contains semantic information. This intermediate image is used to determine objects, which are mapped to segmentations. That is, the segmentations are cropped from the original image using the semantic information of the intermediate image. Only static roadside objects are considered, while dynamic objects are omitted. The resulting segmentations are also referred to as (cropped) instance segmentations or landmark segmentations. The cropped segmentations may be resized to, for example, 256 x 256 pixels with padding.
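
The cropping and resizing may, for instance, be implemented as sketched below. The use of OpenCV, the zero padding into the upper-left corner, and the masking of background pixels are assumptions made for this sketch.

```python
# Hedged sketch: crop one instance segmentation from the original image and resize it
# to 256 x 256 with padding, keeping only the pixels of the static object.
import numpy as np
import cv2


def crop_and_pad(image, mask, size=256):
    """image: (H, W, 3) original image; mask: (H, W) boolean mask of one static object."""
    ys, xs = np.nonzero(mask)
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1].copy()
    crop[~mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]] = 0   # blank out background pixels
    h, w = crop.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(crop, (int(w * scale), int(h * scale)))    # keep aspect ratio
    padded = np.zeros((size, size, 3), dtype=image.dtype)
    padded[:resized.shape[0], :resized.shape[1]] = resized          # pad to a square patch
    return padded
```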

[00100] A vertex corresponds to a segmentation, and a segmentation corresponds to an object. The object has a coordinate that is determined by averaging over the captured LiDAR points. The semantic segmentation neural network may be based, for example, on the so-called DeepLabV3 architecture.
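
As an illustration only, the sketch below uses the DeepLabV3 implementation available in torchvision and averages the LiDAR points of one object to obtain its vertex coordinate. The specific torchvision variant and its pretrained weights (whose class set need not match the object types named above) are assumptions for this sketch.

```python
# Hedged sketch: semantic segmentation of the reference image with a DeepLabV3 model
# and the object coordinate as the mean of the object's LiDAR points.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()   # illustrative pretrained variant

image = torch.rand(1, 3, 512, 1024)                    # reference image (batch of one)
with torch.no_grad():
    semantic_map = model(image)["out"].argmax(dim=1)   # per-pixel class ids

lidar_points = torch.rand(500, 3)                      # LiDAR points mapped to one object
object_coordinate = lidar_points.mean(dim=0)           # vertex coordinate of that object
```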

[00101] According to various embodiments, a computer program element for determining matching landmark objects is provided that comprises a landmark map network part and a measurement map part. Each of the landmark map network part and the measurement map part comprises an object selection module configured to select an object from a set of objects contained in an image, a ResNet configured to extract features from a first segmentation of the selected object and to provide the extracted features to a common PointNet, a PointNet configured to extract features from a first LiDAR point cloud of the selected object and to provide the extracted features to the common PointNet, a second ResNet configured to extract features from neighbor segmentations of the first segmentation and to provide the extracted features to a GAT, and a second PointNet configured to extract features from the LiDAR point cloud of a neighboring object of the selected object and to provide the extracted features to the GAT. The GAT is configured to describe the extracted features containing spatial information, and to provide the described landmark map features to the common PointNet. The common PointNet is configured to concatenate the extracted features from the first ResNet, the first PointNet and the GAT and to determine a similarity between the features of the object of the landmark map network part and the features of the object of the measurement map part.

[00102] The ResNets are adopted to extract the high-dimensional features for the landmark segmentations.

[00103] The computer program element defined in this way is also referred to herein as a landmark objects matching neural network, LOMNN. As is clear to the skilled reader, any module described herein is to be understood as a software module. It is known to a skilled person that any software operation may also be performed by hardware, that is, any software module may be replaced by a hardware module performing the same functions.

[00104] The computer program element may be part of a computer program, but it can also be an entire program by itself. For example, the computer program element may be used to update an already existing computer program to arrive at the present invention.

[00105] According to various embodiments, a computer readable medium is provided on which a computer program element as described above is stored.

[00106] The computer readable medium may be seen as a storage medium, such as, for example, a USB stick, a CD, a DVD, a data storage device, a hard disk, or any other medium on which a program element as described above can be stored.

[00107] While embodiments of the invention have been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced. It will be appreciated that common numerals, used in the relevant drawings, refer to components that serve a similar or the same purpose.

[00108] It will be appreciated by a person skilled in the art that the terminology used herein is for the purpose of describing various embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[00109] It is understood that the specific order or hierarchy of blocks in the processes / flowcharts disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes / flowcharts may be rearranged. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

[00110] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term “some” refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims.