


Title:
SYSTEM AND METHOD FOR RECOGNITION BASED ON ONTOLOGICAL COMPONENTS
Document Type and Number:
WIPO Patent Application WO/2023/248091
Kind Code:
A1
Abstract:
The disclosed systems, components, and methods illustrate implementations of the technology, focusing on associating an object-to-be-inferred with an object-of-interest. In one context, the system may be a prediction and/or confidence engine aiming at identifying objects based on ontological components of the object-of-interest. In order to provide sufficient reliability, the system generates a dataset of ontological component representations, also referred to as unique information.

Inventors:
MARTIN BRYAN (CA)
WARSHE WRUSHABH (CA)
KEATING DYLAN (CA)
JUPPE LAURENT (CA)
ABUELWAFA SHERIF ESMAT OMAR (CA)
MAJEWSKI YANN (CA)
LE CARLUER LIONEL (CA)
BLONDEL DANAE (CA)
Application Number:
PCT/IB2023/056290
Publication Date:
December 28, 2023
Filing Date:
June 17, 2023
Assignee:
APPLICATIONS MOBILES OVERVIEW INC (CA)
OVERVIEW SAS (FR)
International Classes:
G06V20/64
Foreign References:
US20130153651A12013-06-20
US20210158017A12021-05-27
Attorney, Agent or Firm:
LACHERÉ, Julien (CA)
Claims:
WHAT IS CLAIMED IS:

1. A computer-implemented method for generating unique information associated with an object-of-interest, the method comprising: obtaining ontological components associated with an object-of-interest; applying a routing operation to the ontological components, the routing operation providing a list of connections between each of the ontological components and encoding operations; applying the encoding operations to each of the ontological components, the encoding operation being configured to obtain unique information including (i) unique feature information, (ii) unique encoding operation information, and (iii) unique confidence information, the unique information being associated with the object-of-interest; and adding the unique information corresponding to the object-of-interest to a unique information database.

2. The method of claim 1, wherein the object-of-interest is a non-synthetic object-of-interest.

3. The method of claim 2, wherein at least one of the ontological components is captured by an imaging system.

4. The method of claim 3, further comprising capturing a plurality of representations of the object-of-interest, each representation being captured from a corresponding point of view, each representation being associated with feature data, the feature data including information about 3D coordinates of the corresponding point of view in a global coordinate system, and/or 3D coordinates of a set of feature points of the object in the global coordinate system.

5. The method of claim 3 or 4, further comprising applying a setting configuration on the imaging system such that capture setting parameters thereof match pre-determined capture settings, the capture settings including one or more of extrinsic parameters of the imaging system, intrinsic parameters of the imaging system, position, orientation, aiming point, focal length, focal length range, pixel size, sensor size, position of a principal point on the object-of-interest, and lens distortion of the imaging system.

6. The method of claim 1, wherein the object-of-interest is a synthetic object-of-interest.

7. The method of claim 6, wherein at least one of the ontological components is obtained using a virtual capture method.

8. The method of claim 7, wherein the virtual capture method includes specified parameters including one or more of a location of the object-of-interest, a number of virtual cameras used by the virtual capture method, parameters of the virtual cameras, an aiming point on the object-of-interest, and a region on the object-of-interest.

9. The method of any one of claims 1 to 8, wherein each of the ontological components is selected from color, color transformations, depth, heat, 2D images, partial 2D images, 3D point clouds, partial 3D point clouds, a mesh, a continuous function, 3D descriptors, shape descriptors, spatial distribution, geometric attributes, scale invariant features, motion and a combination thereof.

10. The method of any one of claims 1 to 9, wherein each of the ontological components is obtained using a machine learning algorithm or an augmented reality technique.

11. The method of any one of claims 1 to 10, further comprising storing the ontological components on an ontological component database.

12. The method of any one of claims 1 to 11, further comprising applying a weight and/or a bias to at least one of the ontological components.

13. The method of claim 12, wherein the bias has a constant value defining a depth and/or an influence on the at least one ontological component on which the bias is applied.

14. The method of claim 12, wherein the weight defines a depth and/or an influence on the at least one ontological component on which the weight is applied.

15. The method of any one of claims 1 to 14, further comprising applying a data augmentation to at least one of the ontological components.

16. The method of claim 15, wherein applying a data augmentation to at least one of the ontological components comprises adding a noise to the at least one of the ontological components.

17. The method of claim 16, wherein the noise is based on a random parameter.

18. The method of claim 17, wherein the noise is selected from a White Gaussian noise, a Voronoi noise, and a Fractal noise.

19. The method of claim 17, wherein the noise is selected from a Salt and Pepper noise, a film grain noise, a fixed-pattern noise, a Perlin noise, a simplex noise, and a Poisson noise, the noise being applied on a surface of a 3D model of the object-of-interest.

20. The method of any one of claims 16 to 19, wherein the noise is constrained such that a perturbation associated with the augmented ontological component is within a finite envelope.

21. The method of claim 15, wherein applying a data augmentation to at least one of the ontological components comprises applying one or more geometric transformations on the ontological component.

22. The method of claim 21, wherein each of the one or more geometric transformations includes one or more of changing a size of the ontological component, applying a rotation to the ontological component, applying shifting and/or translation to the ontological component, scaling the ontological component, rotating the ontological component, translating the ontological component, applying a scaling translation to the ontological component, shearing the ontological component, applying a non-uniform scaling to the ontological component, forming a matrix representation of the ontological component, applying a 3D affine transformation to the ontological component, applying matrix operations to the ontological component, applying a reflection to the ontological component, dilating the ontological component, applying a tessellation to the ontological component, and applying a projection to the ontological component.

23. The method of claim 21 or 22, wherein at least one of the one or more geometric transformations is based on a random parameter.

24. The method of any one of claims 15 to 23, wherein the data augmentation applied to at least one of the ontological components is further determined by an influence parameter.

25. The method of any one of claims 16 to 20, wherein an influence parameter determines at least in part a probability, percentage and/or intensity of the noise.

26. The method of any one of claims 21 to 23, wherein an influence parameter determines at least in part a probability, percentage and/or intensity of the one or more geometric transformations.

27. The method of claim 11 or 12, wherein the list of connections comprises a predefined list of connections between the ontological component database and an encoding module applying the encoding operations.

28. The method of any one of claims 1 to 27, wherein the routing operation determines modalities of 2D images and 3D point clouds as a plurality of ontological components for application by the encoding operations.

29. The method of any one of claims 1 to 27, wherein the routing operation is further determined by an influence parameter.

30. The method of claim 29, wherein the influence parameter is related to one or more of a depth or color on the object-of-interest.

31. The method of claim 30, wherein: the routing operation applies a first influence level to the depth and/or color on the object-of- interest; and the influence parameter causes to apply to the depth and/or color on the object-of-interest a second influence level greater than the first influence level in a subsequent routing operation.

32. The method of claim 29, wherein the influence parameter is related to a partial 3D representation of the object-of-interest.

33. The method of claim 32, wherein: the routing operation applies a first influence level to the partial 3D representation of the object- of-interest; and the influence parameter causes to apply to the partial 3D representation of the object-of-interest a second influence level greater than the first influence level in a subsequent routing operation.

34. The method of any one of claims 1 to 33, further comprising: combining two ontological components before the encoding operations by (i) removing correlation data therefrom and/or (ii) fusing the two ontological components to a lower dimensional common space.

35. The method of any one of claims 1 to 33, further comprising: applying a first encoding operation to a first ontological component; applying a second encoding operation to a second ontological component; and after the first and second encoding operations, combining the first and second ontological components by (i) removing correlation data therefrom and/or (ii) fusing the first and second ontological components to a lower dimensional common space.

36. The method of any one of claims 1 to 33, further comprising: applying a first encoding operation to a first ontological component; applying a second encoding operation to a second ontological component; applying a third encoding operation to a third ontological component; and after the first, second and third encoding operations, combining the first, second and third ontological components by (i) removing correlation data therefrom and/or (ii) fusing the first, second and third ontological components to a lower dimensional common space.

37. The method of any one of claims 1 to 33, further comprising: applying a first encoding operation to a first ontological component; applying a second encoding operation to a second ontological component; applying a third encoding operation to a third ontological component; and after the first and second encoding operations, combining the first and second ontological components by (i) removing correlation data therefrom and/or (ii) fusing the first and second ontological components to a lower dimensional common space; after the first, second and third encoding operations, combining the combined first and second ontological components with the third ontological component by (i) removing correlation data therefrom and/or (ii) fusing the combined first and second ontological components with the third ontological component to a lower dimensional common space.

38. The method of any one of claims 1 to 35, wherein adding the unique information corresponding to the object-of-interest to the unique information database comprises adding ID information related to the object-of-interest.

39. The method of claim 38, wherein the ID information is selected from text, numbers, brand information, serial number, model information, time, date, location, distance and a combination thereof.

40. The method of any one of claims 38 or 39, further comprising using the ID information to define closest-match data for configuring the routing operation, the encoding operations and the obtention of the ontological components.

41. The method of any one of claims 12 to 14, further comprising storing in the unique information database the weight and/or the bias value applied to the at least one of the ontological components.

42. The method of any one of claims 15 to 23 further comprising storing in the unique information database the data augmentation applied to the at least one of the ontological components.

43. The method of any one of claims 1 to 42, further comprising storing the list of connections between each of the ontological components and the encoding operations in the unique information database.

44. The method of any one of claims 1 to 43, wherein the unique information comprises one or more of dense representations of the object-of-interest, at least one encoding method, encoding preconfiguration to differentiate the ontological components, training data, loopback routine parameters, influence parameters, and data fusion parameters.

45. A computer-implemented method for authenticating an object-to-be-inferred, the method comprising: generating first unique information associated with an object-of-interest using the method as defined in any one of claims 1 to 44; generating second unique information associated with the object-to-be-inferred using the obtaining operation, the routing operation, and the encoding operations of the method as defined in any one of claims 1 to 44; and evaluating a confidence level for the object-to-be-inferred by comparing the first unique information associated with the object-of-interest stored in the unique information database with the second unique information associated with the object-to-be-inferred, wherein the object-to-be-inferred is deemed authenticated if the confidence level at least meets or exceeds a predetermined confidence threshold.

46. A computer-implemented method to generate unique information associated with an object-of- interest comprising: obtaining ontological components of an object-of-interest, each of the ontological components associated with the object-of-interest having been captured by an imaging system; applying a routing operation to the ontological components, the routing operation configured to provide a list of connections between each of the ontological components and encoding operations; applying the encoding operations to each of the ontological components, the encoding operation configured to obtain unique feature information, unique encoding operation information, and unique confidence information associated with the object-of-interest; and adding the unique information corresponding to the object-of-interest to a database.

47. The method of claim 46, wherein at least one ontological component is obtained.

48. The method of claim 46, wherein at least one influence parameter is applied to the at least one ontological component.

49. The method of claim 46, wherein the routing operation is determined by the at least one ontological component.

50. The method of claim 46, wherein the routing operation is further determined by at least one of the plurality of ontological components.

51. The method of claim 48, wherein the routing operation is further determined by the at least one influence parameter.

52. The method of claim 46, wherein the routing operation is further determined by at least one data fusion method.

53. The method of claim 46, wherein the encoding operations comprise at least one of an encoder, a neural network, a learned network, a machine learning algorithm, and one or more encoding operations.

54. The method of claim 53, wherein the encoding operations further comprise at least one loopback routine that incorporates random parameters.

55. The method of claim 46, wherein the encoding operations are further determined by at least one influence parameter.

56. The method of claim 46, wherein the encoding operations are further determined by at least one data fusion method.

57. The method of claim 46, wherein the object-of-interest is non-synthetic or synthetic.

58. The method of claim 46, wherein the imaging system is real or virtual.

59. The method of claim 46, wherein the capturing of the ontological components by the imaging system comprises applying a setting configuration such that capture setting parameters match a set of pre-determined capture setting parameters.

60. The method of claim 46, wherein ID information from an object-to-be-inferred is used to determine closest-match unique data contained in the database.

61. The method of claim 60, wherein the closest-match unique data is used to determine a sensing device method, ontological components, a routing method, an encoding operation, influence parameters and a data fusion method.

62. The method of claim 61, wherein unique data associated with an object-to-be-inferred is obtained.

63. The method of any one of claims 60 to 62, wherein a confidence and/or prediction value is obtained considering the closest-match unique data associated with an object-of-interest and the unique data associated with the object-to-be-inferred.

64. A system configured to execute computer-readable instructions, the instructions, upon being executed, causing execution of the method of any of claims 1 to 63.

Description:
SYSTEM AND METHOD FOR RECOGNITION BASED ON ONTOLOGICAL COMPONENTS

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The present application claims priority from European Patent Application No. 22179939.8, filed on June 20, 2022, the disclosure of which is incorporated by reference herein in its entirety.

FIELD OF INVENTION

[0002] The present disclosure illustrates implementations of the technology focusing on associating an object-to-be-inferred with an object-of-interest. In one context, the system may be a prediction and/or confidence engine aiming at identifying objects based on ontological components of the object. In order to provide sufficient reliability, the system generates a dataset of ontological component representations, also referred to as unique information.

BACKGROUND

[0003] Object recognition and search methods are of major concern across many industries. Approaches such as 2D picture analysis, optical character recognition (OCR), QR codes, barcodes, geolocation, color and bit mapping have proved useful in particular cases, but have limitations in real-world applications, specifically in the areas of identification, authentication and ontological recognition. Indeed, an improvement on current methods can be obtained if 3D features are included in recognition operations. Those features include peaks, tops, edges, shapes, reliefs, etc.

[0004] QR codes, barcodes and other conventional markers comply with known formats and are easily recognizable using simple imaging techniques. Such codes or markers are also easily reproducible. Although quite useful in many applications, these codes and markers do not provide sufficient unicity to allow authenticating the objects on which they are applied.

SUMMARY

[0005] Implementations of the present technology have been developed based on developers’ appreciation of at least one technical problem associated with the prior art solutions.

[0006] In particular, such shortcomings may comprise (1) the inability to recognize 3D features; (2) the inability to utilize a broad range of modalities; (3) the lack of a highly adaptable and configurable recognition system; and/or (4) the lack of sufficient unicity for authentication purposes.

[0008] In accordance with a first aspect of the present technology, there is provided a computer-implemented method for generating unique information associated with an object-of-interest, the method comprising: obtaining ontological components associated with an object-of-interest; applying a routing operation to the ontological components, the routing operation providing a list of connections between each of the ontological components and encoding operations; applying the encoding operations to each of the ontological components, the encoding operation being configured to obtain unique information including (i) unique feature information, (ii) unique encoding operation information, and (iii) unique confidence information, the unique information being associated with the object-of-interest; and adding the unique information corresponding to the object-of-interest to a unique information database.
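
For illustration only, a minimal Python sketch of this first-aspect pipeline is given below. All function and variable names (obtain_ontological_components, route, encode, unique_information_db) are hypothetical, and the toy arrays stand in for the actual ontological components and encoding operations, which are not specified here.

```python
import numpy as np

def obtain_ontological_components(object_id: str) -> dict:
    # Placeholder: in practice these would come from an imaging system
    # or a virtual capture method (2D images, 3D point clouds, etc.).
    return {
        "rgb_image": np.random.rand(64, 64, 3),
        "point_cloud": np.random.rand(1024, 3),
    }

def route(components: dict) -> list:
    # Routing operation: a list of connections between each ontological
    # component and an encoding operation.
    return [(name, f"encoder_for_{name}") for name in components]

def encode(component: np.ndarray) -> dict:
    # Encoding operation: returns unique feature information, unique
    # encoding-operation information, and unique confidence information.
    features = component.reshape(-1)[:128]            # toy feature vector
    return {
        "features": features,
        "encoder_info": {"dim": features.size},
        "confidence": float(np.clip(features.std(), 0.0, 1.0)),
    }

unique_information_db = {}

def generate_unique_information(object_id: str) -> None:
    components = obtain_ontological_components(object_id)
    connections = route(components)
    unique_information = {
        name: encode(components[name]) for name, _encoder in connections
    }
    unique_information_db[object_id] = unique_information

generate_unique_information("object-001")
```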

[0009] In some implementations, the object-of-interest is a non-synthetic object-of-interest.

[0010] In some implementations, at least one of the ontological components is captured by an imaging system.

[0011] In some implementations, the method further comprises capturing a plurality of representations of the object-of-interest, each representation being captured from a corresponding point of view, each representation being associated with feature data, the feature data including information about 3D coordinates of the corresponding point of view in a global coordinate system, and/or 3D coordinates of a set of feature points of the object in the global coordinate system.

[0012] In some implementations, the method further comprises applying a setting configuration on the imaging system such that capture setting parameters thereof match pre-determined capture settings, the capture settings including one or more of extrinsic parameters of the imaging system, intrinsic parameters of the imaging system, position, orientation, aiming point, focal length, focal length range, pixel size, sensor size, position of a principal point on the object-of-interest, and lens distortion of the imaging system.
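
As an informal illustration, the capture settings listed above could be grouped in a simple container; the field names, units and default values below are assumptions made for the example and are not the disclosed configuration.

```python
from dataclasses import dataclass, field

@dataclass
class CaptureSettings:
    position: tuple = (0.0, 0.0, 0.0)        # extrinsic: camera position
    orientation: tuple = (0.0, 0.0, 0.0)     # extrinsic: camera orientation
    aiming_point: tuple = (0.0, 0.0, 0.0)
    focal_length_mm: float = 35.0            # intrinsic
    pixel_size_um: float = 1.4
    sensor_size_mm: tuple = (36.0, 24.0)
    principal_point: tuple = (0.5, 0.5)
    lens_distortion: list = field(default_factory=lambda: [0.0, 0.0, 0.0])

def matches(current: CaptureSettings, target: CaptureSettings) -> bool:
    # A crude check that the current capture parameters match the
    # pre-determined settings before capturing a representation.
    return current == target
```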

[0013] In some implementations, the object-of-interest is a synthetic object-of-interest.

[0014] In some implementations, at least one of the ontological components is obtained using a virtual capture method.

[0015] In some implementations, the virtual capture method includes specified parameters including one or more of a location of the object-of-interest, a number of virtual cameras used by the virtual capture method, parameters of the virtual cameras, an aiming point on the object-of-interest, and a region on the object-of-interest.

[0016] In some implementations, each of the ontological components is selected from color, color transformations, depth, heat, 2D images, partial 2D images, 3D point clouds, partial 3D point clouds, a mesh, a continuous function, 3D descriptors, shape descriptors, spatial distribution, geometric attributes, scale invariant features, motion and a combination thereof.

[0017] In some implementations, each of the ontological components is obtained using a machine learning algorithm or an augmented reality technique.

[0018] In some implementations, the method further comprises storing the ontological components on an ontological component database.

[0019] In some implementations, the method further comprises applying a weight and/or a bias to at least one of the ontological components.

[0020] In some implementations, the bias has a constant value defining a depth and/or an influence on the at least one ontological component on which the bias is applied.

[0021] In some implementations, the weight defines a depth and/or an influence on the at least one ontological component on which the weight is applied.

[0022] In some implementations, the method further comprises applying a data augmentation to at least one of the ontological components.

[0023] In some implementations, applying a data augmentation to at least one of the ontological components comprises adding a noise to the at least one of the ontological components.

[0024] In some implementations, the noise is based on a random parameter.

[0025] In some implementations, the noise is selected from a White Gaussian noise, a Voronoi noise, and a Fractal noise.

[0026] In some implementations, the noise is selected from a Salt and Pepper noise, a film grain noise, a fixed-pattern noise, a Perlin noise, a simplex noise, and a Poisson noise, the noise being applied on a surface of a 3D model of the object-of-interest.

[0027] In some implementations, the noise is constrained such that a perturbation associated with the augmented ontological component is within a finite envelope.

[0028] In some implementations, applying a data augmentation to at least one of the ontological components comprises applying one or more geometric transformations on the ontological component.

[0029] In some implementations, each of the one or more geometric transformations includes one or more of changing a size of the ontological component, applying a rotation to the ontological component, applying shifting and/or translation to the ontological component, scaling the ontological component, rotating the ontological component, translating the ontological component, applying a scaling translation to the ontological component, shearing the ontological component, applying a non-uniform scaling to the ontological component, forming a matrix representation of the ontological component, applying a 3D affine transformation to the ontological component, applying matrix operations to the ontological component, applying a reflection to the ontological component, dilating the ontological component, applying a tessellation to the ontological component, and applying a projection to the ontological component.

[0030] In some implementations, at least one of the one or more geometric transformations is based on a random parameter.

[0031] In some implementations, the data augmentation applied to at least one of the ontological components is further determined by an influence parameter.

[0032] In some implementations, an influence parameter determines at least in part a probability, percentage and/or intensity of the noise.

[0033] In some implementations, an influence parameter determines at least in part a probability, percentage and/or intensity of the one or more geometric transformations.

[0034] In some implementations, the list of connections comprises a predefined list of connections between the ontological component database and an encoding module applying the encoding operations.

[0035] In some implementations, the routing operation determines modalities of 2D images and 3D point clouds as a plurality of ontological components for application by the encoding operations.

[0036] In some implementations, the routing operation is further determined by an influence parameter.

[0037] In some implementations, the influence parameter is related to one or more of a depth or color on the object-of-interest.

[0038] In some implementations, the routing operation applies a first influence level to the depth and/or color on the object-of-interest; and the influence parameter causes to apply to the depth and/or color on the object-of-interest a second influence level greater than the first influence level in a subsequent routing operation.

[0039] In some implementations, the influence parameter is related to a partial 3D representation of the object-of-interest.

[0040] In some implementations, the routing operation applies a first influence level to the partial 3D representation of the object-of-interest; and the influence parameter causes to apply to the partial 3D representation of the object-of-interest a second influence level greater than the first influence level in a subsequent routing operation.

[0041] In some implementations, the method further comprises combining two ontological components before the encoding operations by (i) removing correlation data therefrom and/or (ii) fusing the two ontological components to a lower dimensional common space.

[0042] In some implementations, the method further comprises applying a first encoding operation to a first ontological component; applying a second encoding operation to a second ontological component; and after the first and second encoding operations, combining the first and second ontological components by (i) removing correlation data therefrom and/or (ii) fusing the first and second ontological components to a lower dimensional common space.

[0043] In some implementations, the method further comprises applying a first encoding operation to a first ontological component; applying a second encoding operation to a second ontological component; applying a third encoding operation to a third ontological component; and after the first, second and third encoding operations, combining the first, second and third ontological components by (i) removing correlation data therefrom and/or (ii) fusing the first, second and third ontological components to a lower dimensional common space.

[0044] In some implementations, the method further comprises applying a first encoding operation to a first ontological component; applying a second encoding operation to a second ontological component; applying a third encoding operation to a third ontological component; and after the first and second encoding operations, combining the first and second ontological components by (i) removing correlation data therefrom and/or (ii) fusing the first and second ontological components to a lower dimensional common space; after the first, second and third encoding operations, combining the combined first and second ontological components with the third ontological component by (i) removing correlation data therefrom and/or (ii) fusing the combined first and second ontological components with the third ontological component to a lower dimensional common space.
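
As an informal illustration of fusing encoded components into a lower dimensional common space, the sketch below centers two encodings and projects them with PCA via SVD. The disclosure does not specify this particular fusion method; the encodings here are randomly generated stand-ins and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encodings for a batch of 100 objects, two modalities of dimension 64 each.
enc_a = rng.normal(size=(100, 64))   # e.g. a 2D-image encoding
enc_b = rng.normal(size=(100, 64))   # e.g. a 3D-point-cloud encoding

stacked = np.concatenate([enc_a, enc_b], axis=1)      # (100, 128)
centered = stacked - stacked.mean(axis=0)

# Project onto the top principal directions: the projected components are
# decorrelated and live in a lower-dimensional common space.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
common_dim = 32
fused = centered @ vt[:common_dim].T                  # (100, 32)
print(fused.shape)
```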

[0045] In some implementations, adding the unique information corresponding to the object-of-interest to the unique information database comprises adding ID information related to the object-of-interest.

[0046] In some implementations, ID information is selected from text, numbers, brand information, serial number, model information, time, date, location, distance and a combination thereof.

[0047] In some implementations, the method further comprises using the ID information to define closest-match data for configuring the routing operation, the encoding operations and the obtention of the ontological components.

[0048] In some implementations, the method further comprises storing in the unique information database the weight and/or the bias value applied to the at least one of the ontological components.

[0049] In some implementations, the method further comprises storing in the unique information database the data augmentation applied to the at least one of the ontological components.

[0050] In some implementations, the method further comprises storing the list of connections between each of the ontological components and the encoding operations in the unique information database.

[0051] In some implementations, the unique information comprises one or more of dense representations of the object-of-interest, at least one encoding method, encoding pre-configuration to differentiate the ontological components, training data, loopback routine parameters, influence parameters, and data fusion parameters.

[0052] In accordance with a second aspect of the present technology, there is provided a computer-implemented method for authenticating an object-to-be-inferred, the method comprising: generating first unique information associated with an object-of-interest using the method for generating unique information associated with an object-of-interest; generating second unique information associated with the object-to-be-inferred using the obtaining operation, the routing operation, and the encoding operations of the method for generating unique information associated with an object-of-interest; and evaluating a confidence level for the object-to-be-inferred by comparing the first unique information associated with the object-of-interest stored in the unique information database with the second unique information associated with the object-to-be-inferred, wherein the object-to-be-inferred is deemed authenticated if the confidence level at least meets or exceeds a predetermined confidence threshold.
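
A hedged sketch of the comparison step of this second aspect follows. Here the confidence level is taken to be the cosine similarity between the two unique-information vectors, and the threshold value is arbitrary; neither the similarity measure nor the threshold is mandated by the disclosure.

```python
import numpy as np

def confidence(first: np.ndarray, second: np.ndarray) -> float:
    # Confidence level modelled as cosine similarity between the stored
    # unique information and the unique information of the probe object.
    return float(first @ second / (np.linalg.norm(first) * np.linalg.norm(second)))

CONFIDENCE_THRESHOLD = 0.95      # assumed pre-determined threshold

def is_authenticated(first: np.ndarray, second: np.ndarray) -> bool:
    return confidence(first, second) >= CONFIDENCE_THRESHOLD

stored = np.random.rand(128)                     # object-of-interest
probe = stored + 0.01 * np.random.rand(128)      # object-to-be-inferred
print(is_authenticated(stored, probe))
```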

[0053] In accordance with a third aspect of the present technology, there is provided a computer-implemented method for obtaining ontological components of an object-of-interest, each component being captured by a sensor device.

[0054] In some implementations, the object-of-interest is a subset of a plurality of objects.

[0055] In some implementations, the ontological components are captured from a corresponding point of view, each component being associated with feature data, the feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, and/or motion.

[0056] In some implementations, the plurality of ontological components is a plurality of images, and the feature data of a given image further comprises: 3D coordinates of the corresponding point of view in a global coordinate system; 3D coordinates of a set of feature points of the object in the global coordinate system, the feature points of the object being represented by feature data points in a 3D point cloud; for each feature point of the object, a list of 3D coordinates, an entry of said list being 3D coordinates indicative of a position of a corresponding feature data point in the global coordinate system for a corresponding image; and, for each feature point of the object, a list of 2D coordinates.

[0057] In some implementations, the ontological component is raw data.

[0058] In some implementations, the ontological component is an embedding or other representation of raw data.

[0059] In some implementations, the ontological component is a learned representation.

[0060] In some implementations, the ontological component is specifically designed.

[0061] In some implementations, the ontological component is a one-dimensional (1D) representation.

[0062] In some implementations, the ontological component is a two-dimensional (2D) representation.

[0063] In some implementations, the ontological component is a partial two-dimensional (2D) representation.

[0064] In some implementations, the ontological component is a mesh.

[0065] In some implementations, the ontological component is a three-dimensional (3D) representation.

[0066] In some implementations, the ontological component is a partial three-dimensional (3D) representation.

[0067] In some implementations, the ontological component is a continuous function such as, but not limited to, a radiance field.

[0068] In some implementations, the ontological component is an intermediate representation such as, but not limited to, a connectivity graph.

[0069] In some implementations, the ontological component is a feature vector.

[0070] In some implementations, the ontological component is a plurality of ontological components.

[0071] In some implementations, the ontological component is a plurality of ontological components of the at least one resolution.

[0072] In some implementations, the ontological component is obtained in the virtual world.

[0073] In some implementations, the at least one influence parameter is applied to the ontological component. Examples of influence parameters are weights and biases. A weight decides the influence the input value will have on the output value. Biases, which are constant, do not have incoming connections, but have outgoing connections with their own weights.
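
As a tiny numerical illustration of the weight/bias description above (all values are invented for the example):

```python
# The weight scales the influence of an input ontological component value,
# while the bias is a constant contributing through its own outgoing weight.
input_value = 0.8          # e.g. a normalized depth reading
weight = 0.6               # influence of the input on the output
bias = 1.0                 # constant, no incoming connection
bias_weight = 0.1          # the bias's own outgoing weight

output_value = weight * input_value + bias_weight * bias
print(output_value)        # 0.58
```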

[0074] In some implementations, the at least one influence parameter is specified.

[0075] In accordance with a fourth aspect of the present technology, there is provided a computer-implemented method to augment the ontological components.

[0076] In some implementations the augmentation may include various types of noise.

[0077] In some implementations, the noise generator may generate the noise based on a random parameter.

[0078] In some implementations, the noise added by the noise generator is constrained such that the perturbation associated with the augmented ontological component is within a finite envelope.

[0079] In some implementations, the noise generated by the noise generator may include, but is not limited to, White Gaussian noise, Voronoi noise and/or Fractal noise. Additionally, common 2D-related noise (e.g., Salt and Pepper, film grain, fixed-pattern, Perlin, simplex and/or Poisson noise) is applicable on the surface of a 3D model and generated by the noise generator.
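
A minimal sketch of noise-based augmentation under the constraint described above, assuming the ontological component is a 3D point cloud; the noise intensity and envelope bound are arbitrary illustrative values, not parameters taken from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(42)
point_cloud = rng.uniform(-1.0, 1.0, size=(2048, 3))   # toy ontological component

sigma = 0.02           # noise intensity (could be set by an influence parameter)
envelope = 0.05        # maximum allowed perturbation per coordinate

# Add white Gaussian noise, then clip it so the perturbation stays
# within a finite envelope.
noise = rng.normal(0.0, sigma, size=point_cloud.shape)
noise = np.clip(noise, -envelope, envelope)
augmented = point_cloud + noise
```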

[0080] In some implementations, the augmentation may include geometric transformation.

[0081] In some implementations, the geometric transformation may apply one or more geometric transformations on the ontological component. Examples of one or more geometric transformations include, but are not limited to, changing the size of the ontological component, applying a rotation to the ontological component, applying shifting and/or translation to the ontological component, scaling the ontological component, rotating the ontological component, translating the ontological component, applying a scaling translation to the ontological component, shearing the ontological component, applying a non-uniform scaling to the ontological component, forming a matrix representation of the ontological component, applying a 3D affine transformation to the ontological component, applying matrix operations to the ontological component, applying a reflection to the ontological component, dilating the ontological component, applying a tessellation to the ontological component, applying a projection to the ontological component, or some combination thereof. It is to be noted that any one or more of these transformations may be applied and, in the case of more than one transformation, the order in which these transformations are applied should not limit the scope of the present disclosure.

[0082] In some implementations, the geometric transformation may include one or more geometric transformations based on a random parameter. For example, the geometric transformation random parameter associated with applying the rotation may be a random angle between 0° and 360°. As another example, the geometric transformation random parameter associated with the shifting parameters may be conditioned according to a pre-defined world/scene maximum size while avoiding intersections between the 3D objects' bounding boxes. For different types of geometric transformations, the geometric transformation random parameter may have different values.
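
By way of illustration only, a rotation with a random angle drawn between 0° and 360° could be applied to a toy point cloud as sketched below; the choice of the z-axis and the array shapes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
angle = np.deg2rad(rng.uniform(0.0, 360.0))   # random rotation angle in radians

# Rotation matrix about the z-axis for the sampled angle.
rotation_z = np.array([
    [np.cos(angle), -np.sin(angle), 0.0],
    [np.sin(angle),  np.cos(angle), 0.0],
    [0.0,            0.0,           1.0],
])

point_cloud = rng.uniform(-1.0, 1.0, size=(1024, 3))
rotated = point_cloud @ rotation_z.T          # augmented ontological component
```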

[0083] In some implementations, the augmentation may include a sampling strategy.

[0084] In accordance with a fifth aspect of the present technology, there is provided a computer-implemented method to route the ontological components.

[0085] In some implementations, the routing method comprises a predefined list of connections between the ontological components and an encoding block.
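
One possible, purely illustrative way to represent such a predefined list of connections is a simple lookup table; the component and encoder names below are invented for the example.

```python
routing_table = {
    "rgb_image":   "image_encoder",
    "depth_map":   "depth_encoder",
    "point_cloud": "point_cloud_encoder",
}

def route(components: dict) -> list:
    # Return the (component, encoding block) connections for the components
    # actually present, following the predefined routing table.
    return [(name, routing_table[name]) for name in components if name in routing_table]

print(route({"rgb_image": None, "point_cloud": None}))
```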

[0086] In some implementations, the routing method is determined by the at least one ontological component.

[0087] In some implementations, the routing method is further determined by the at least one of the plurality of ontological components.

[0088] In some implementations, the routing method is further determined by the at least one influence parameter.

[0089] In some implementations, the routing method is further determined by the at least one data fusion method.

[0090] In some implementations, the routing method is specified.

[0091] In some implementations, the routing method contains the at least one channel.

[0092] In accordance with a sixth aspect of the present technology, there is provided a computer-implemented method to encode the ontological components.

[0093] In some implementations, the encoding operation may comprise at least one encoder, a neural network, any learned network, a machine learning algorithm, a plurality of encoding operations and/or an array of encoding operations.

[0094] In some implementations, the encoding operation may be pre-configured.

[0095] In some implementations, the encoding operation may be pre-configured to differentiate the at least one ontological component modality, such as, but not limited to, a complete object, a partial object, the at least one resolution, a plurality of ontological component modalities, 3D information, 2D information, 1D information, raw data, intermediate representations, and/or feature vectors.

[0096] In some implementations, the encoding operation is trained.

[0097] In some implementations, the encoding operation is retrained for each object-of-interest.

[0098] In some implementations, the encoding operation is further determined by the at least one influence parameter. Examples of influence parameters are weights and biases.

[0099] In some implementations, the at least one influence parameter is specified.

[00100] In some implementations, the encoding operation is further determined by the at least one data fusion method.

[00101] In some implementations, the encoding operation further comprises at least one loopback routine, the loopback routine employing random parameters. Examples of random parameters are sampling method, orientation and/or neural network randomization.

[00102] In some implementations, the number of cycles of the loopback routine of the encoding operation is specified.
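
A loose sketch of such a loopback routine follows: the encoding is repeated for a specified number of cycles, each cycle drawing a random parameter (here a random subsampling of the input). The subsample size, the aggregation across cycles, and the toy feature computation are all assumptions, not the disclosed encoder.

```python
import numpy as np

rng = np.random.default_rng(3)

def encode_once(component: np.ndarray) -> np.ndarray:
    # Random sampling method as the per-cycle random parameter.
    sample_idx = rng.choice(component.shape[0], size=256, replace=False)
    sampled = component[sample_idx]
    return sampled.mean(axis=0)                     # toy feature vector

def loopback_encode(component: np.ndarray, cycles: int = 5) -> np.ndarray:
    # The number of cycles is specified up front.
    features = [encode_once(component) for _ in range(cycles)]
    return np.mean(features, axis=0)

point_cloud = rng.uniform(-1.0, 1.0, size=(4096, 3))
print(loopback_encode(point_cloud, cycles=5))
```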

[00103] In some implementations, the encoding generates a representation of the object-of-interest.

[00104] In some implementations, the encoding generates a dense representation of the object-of-interest.

[00105] In some implementations, the encoding operation provides unique data such as, but not limited to, a feature vector, values with delta and/or confidence values.

[00106] In some implementations, ID data associated with the object-of-interest is associated with the unique data provided by the encoding operation.

[00107] In some implementations, the ontological component modalities, augmentation parameters, influence parameters, fusion methods and encoder operation parameters are associated with the unique data provided by the encoding operation associated with the object-of-interest.

[00108] In some implementations, the encoding operation further comprises at least one loopback routine associated with the database, the loopback routine employing parameters such as, but not limited to, a time stamp and/or a feature vector.

[00109] In accordance with a seventh aspect of the present technology, there is provided a computer-implemented method for the unique data to generate a dataset.

[00110] In some implementations, the unique data and any data associated with the object-of-interest define a dataset.

[00111] In some implementations, the dataset is stored in a database.

[00112] In some implementations, the dataset and database are dynamic.

[00113] In some implementations, a dataset is used to determine ontological components, a routing strategy, an encoding operation, influence parameters and a data fusion method.

[00114] In some implementations, ID information is used to obtain a dataset to determine ontological components, a routing strategy, an encoding operation, influence parameters and a data fusion method.

[00115] In some implementations, subsequent to a dataset being used to determine ontological components, a routing strategy, an encoding operation, influence parameters and a data fusion method, a confidence value is obtained from the encoding operation.

[00116] In some implementations, subsequent to a dataset being used to determine ontological components, a routing strategy, an encoding operation, influence parameters and a data fusion method, a closest-match dataset to the dataset obtained from the encoding operation is retrieved from the database.
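
A minimal sketch of closest-match retrieval is given below, assuming the database stores one feature vector per object and using Euclidean distance; both assumptions, along with the names, are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(11)
# Toy database: one stored feature vector per object-of-interest.
database = {f"object-{i:03d}": rng.normal(size=128) for i in range(100)}

def closest_match(query: np.ndarray, db: dict) -> tuple:
    # Nearest-neighbour search by Euclidean distance over stored vectors.
    best_id, best_dist = None, float("inf")
    for object_id, stored in db.items():
        dist = float(np.linalg.norm(query - stored))
        if dist < best_dist:
            best_id, best_dist = object_id, dist
    return best_id, best_dist

query_vector = database["object-042"] + 0.01 * rng.normal(size=128)
print(closest_match(query_vector, database))
```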

[00117] In some embodiments, when the closest-match dataset is considered equivalent to the dataset obtained from the encoding operation, the object associated with the closest-match dataset and the dataset obtained from the encoding operation are considered to be equivalent.

[00118] Other aspects and embodiments of the instant inventive concepts will be provided in detail by the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[00119] The features, aspects, and advantages of the present disclosure will become better understood with regard to the following description, appended claims, and accompanying drawings, in which:

[00120] FIGs. 1A-B are illustrative of an environment for executing a method of ontological recognition according to an embodiment of the present technology;

[00121] FIG. 2A depicts a high-level functional block diagram of the method of ontological recognition to obtain unique object information, in accordance with the embodiments of the present disclosure;

[00122] FIG. 2B depicts a high-level functional block diagram of the method of ontological recognition to obtain confidence values and/or predictions, in accordance with the embodiments of the present disclosure;

[00123] FIG. 3 depicts ontological components within the method of ontological recognition, a plurality of encoders contained within the encoding operations, as well as a loopback routine associated with an encoder, in accordance with the embodiments of the present disclosure;

[00124] FIG. 4 depicts augmentation operations, in accordance with various embodiments of the present disclosure;

[00125] FIG. 5 depicts data fusion method 1, in accordance with various embodiments of the present disclosure;

[00126] FIG. 6 depicts data fusion method 2, in accordance with various embodiments of the present disclosure; and

[00127] FIG. 7 depicts data fusion method 3, in accordance with various embodiments of the present disclosure.

DETAILED DESCRIPTION

[00128] Various exemplary embodiments of the described technology will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. The present inventive concept may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that the disclosure will be thorough and complete, and will fully convey the scope of the present inventive concept to those skilled in the art. In the drawings, the sizes and relative sizes of layers and regions may be exaggerated for clarity. Like numerals refer to like elements throughout.

[00129] It will be understood that, although the terms first, second, third etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first element discussed below could be termed a second element without departing from the teachings of the present inventive concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

[00130] It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between" versus "directly between," "adjacent" versus "directly adjacent," etc.).

[00131] The terminology used herein is only intended to describe particular exemplary embodiments and is not intended to be limiting of the present inventive concept. As used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[00132] Moreover, all statements herein reciting principles, aspects, and implementations of the present technology, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof, whether they are currently known or developed in the future. Thus, for example, it will be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the present technology. Similarly, it will be appreciated that any flowcharts, flow diagrams, state transition diagrams, pseudo-code, and the like represent various processes which may be substantially represented in computer-readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

[00133] The functions of the various elements shown in the figures, including any functional block labeled as a "processor", may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. In some embodiments of the present technology, the processor may be a general purpose processor, such as a central processing unit (CPU) or a processor dedicated to a specific purpose, such as a digital signal processor (DSP). Moreover, explicit use of the term a "processor" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.

[00134] Software modules, or simply modules, which are implied to be software, may be represented herein as any combination of flowchart elements or other elements indicating the specified functionality and performance of process steps and/or textual description. Such modules may be executed by hardware that is expressly or implicitly shown. Moreover, it should be understood that these modules may, for example include, without being limitative, computer program logic, computer program instructions, software, stack, firmware, hardware circuitry or any combinations thereof that are configured to provide the required capabilities and specified functionality.

[00135] Ontological components, herein, will include, but are not limited to, color, color transformations, depth, heat, 2D images, partial 2D images, 3D point clouds, partial 3D point clouds, a mesh, a continuous function, 3D descriptors, shape descriptors, spatial distribution, geometric attributes, scale invariant features, and/or motion.

[00136] Given this understanding, the inventive aspects and embodiments of the present technology are presented in the following disclosures.

[00137] With reference to FIGs. 1A-B, there is shown a device 10 suitable for use in accordance with at least some embodiments of the present technology. It is to be expressly understood that the device 10 as depicted is merely an illustrative implementation of the present technology. In some cases, what are believed to be helpful examples of modifications to the device 10 may also be set forth below. This is done merely as an aid to understanding, and, again, not to define the scope or set forth the bounds of the present technology. These modifications are not an exhaustive list, and, as a person skilled in the art would understand, other modifications are likely possible. Further, where this has not been done (i.e., where no examples of modifications have been set forth), it should not be interpreted that no modifications are possible and/or that what is described is the sole manner of implementing that element of the present technology. As a person skilled in the art would understand, this is likely not the case. In addition, it is to be understood that the device 10 may provide in certain instances simple implementations of the present technology, and that where such is the case they have been presented in this manner as an aid to understanding. As persons skilled in the art would understand, various implementations of the present technology may be of a greater complexity.

[00138] FIGs. 1A-B provide a schematic representation of a device 10 configured for generating and/or processing a three-dimensional (3D) point cloud in accordance with an embodiment of the present technology. The device 10 comprises a computing unit 100 that may receive captured images of an object to be characterized. The computing unit 100 may be configured to generate the 3D point cloud as a representation of the object to be characterized. The computing unit 100 is described in greater detail hereinbelow.

[00139] In some embodiments, the computing unit 100 may be implemented by any of a conventional personal computer, a controller, and/or an electronic device (e.g., a server, a controller unit, a control device, a monitoring device etc.) and/or any combination thereof appropriate to the relevant task at hand. In some embodiments, the computing unit 100 comprises various hardware components including one or more single or multi-core processors collectively represented by a processor 110, a solid-state drive 150, a random access memory (RAM) 130, a dedicated memory 140 and an input/output interface 160. The computing unit 100 may be a computer specifically designed to operate a machine learning algorithm (MLA) and/or deep learning algorithms (DLA). The computing unit 100 may be a generic computer system.

[00140] In some other embodiments, the computing unit 100 may be an "off the shelf" generic computer system. In some embodiments, the computing unit 100 may also be distributed amongst multiple systems. The computing unit 100 may also be specifically dedicated to the implementation of the present technology. As a person in the art of the present technology may appreciate, multiple variations as to how the computing unit 100 is implemented may be envisioned without departing from the scope of the present technology.

[00141] Communication between the various components of the computing unit 100 may be enabled by one or more internal and/or external buses 170 (e.g. a PCI bus, universal serial bus, IEEE 1394 "Firewire" bus, SCSI bus, Serial-ATA bus, ARINC bus, etc.), to which the various hardware components are electronically coupled.

[00142] The input/output interface 160 may provide networking capabilities such as wired or wireless access. As an example, the input/output interface 160 may comprise a networking interface such as, but not limited to, one or more network ports, one or more network sockets, one or more network interface controllers and the like. Multiple examples of how the networking interface may be implemented will become apparent to the person skilled in the art of the present technology. For example, but without being limitative, the networking interface may implement specific physical layer and data link layer standard such as Ethernet, Fibre Channel, Wi-Fi or Token Ring. The specific physical layer and the data link layer may provide a base for a full network protocol stack, allowing communication among small groups of computers on the same local area network (LAN) and large-scale network communications through routable protocols, such as Internet Protocol (IP).

[00143] According to implementations of the present technology, the solid-state drive 120 stores program instructions suitable for being loaded into the RAM 130 and executed by the processor 110. Although illustrated as a solid-state drive 150, any type of memory may be used in place of the solid-state drive 150, such as a hard disk, optical disk, and/or removable storage media. According to implementations of the present technology, the solid-state drive 120 stores program instructions suitable for being loaded into the RAM 130 and executed by the processor 110 for executing generation of 3D representation of objects. For example, the program instructions may be part of a library or an application.

[00144] The processor 110 may be a general-purpose processor, such as a central processing unit (CPU) or a processor dedicated to a specific purpose, such as a digital signal processor (DSP). In some embodiments, the processor 110 may also rely on an accelerator 120 dedicated to certain given tasks, such as executing the methods set forth in the paragraphs below. In some embodiments, the processor 110 or the accelerator 120 may be implemented as one or more field programmable gate arrays (FPGAs). Moreover, explicit use of the term "processor", should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, application specific integrated circuit (ASIC), read-only memory (ROM) for storing software, RAM, and non-volatile storage. Other hardware, conventional and/or custom, may also be included.

[00145] In some embodiments, the method of ontological recognition 200 may be implemented by an imaging device or any sensing device configured to optically sense or detect certain features of an object-of-interest, such as, but not limited to, a camera, a video camera, a microscope, an endoscope, etc. In some embodiments, imaging systems may be implemented as a user computing and communication-capable device, such as, but not limited to, a camera, a video camera, an endoscope, a mobile device, a tablet device, a microscope, a server, a controller unit, a control device, a monitoring device, etc.

[00146] The device 10 comprises an imaging system 18 that may be configured to capture Red- Green-Blue (RGB) images. As such, the device 10 may be referred to as the "imaging mobile device" 10. The imaging system 18 may comprise image sensors such as, but not limited to, Charge-Coupled Device (CCD) or Complementary Metal Oxide Semiconductor (CMOS) sensors and/or digital cameras. Imaging system 18 may convert an optical image into an electronic or digital image and may send captured images to the computing unit 100. In the same or other embodiments, the imaging system 18 may be a single-lens camera providing RGB pictures. In some embodiments, the device 10 comprises depth sensors to acquire RGB-Depth (RGBD) pictures. Broadly speaking, any device suitable for generating a 3D point cloud may be used as the imaging system 18 including but not limited to depth sensors, 3D scanners or any other suitable devices.

[00147] In some embodiments, as depicted in FIG. 1B, the device 10 may communicatively access an external imaging system 19 such as, but not limited to, a camera, a video camera, a microscope, an endoscope, Charge-Coupled Device (CCD) or Complementary Metal Oxide Semiconductor (CMOS) sensors and/or digital cameras.

[00148] In the same or other embodiments, the imaging system 19 may send captured data to the computing unit 100. In the same or other embodiments, the imaging system 19 may convert an optical image into an electronic or digital image and may send captured images to the computing unit 100.

[00149] In the same or other embodiments, the imaging system 19 may be a single-lens camera providing RGB pictures. In some embodiments, the device 10 comprises depth sensors to acquire RGB-Depth (RGBD) pictures. Broadly speaking, any device suitable for generating a 3D point cloud may be used as the imaging system 19, including but not limited to depth sensors, 3D scanners or any other suitable devices.

[00150] The device 10 may comprise an Inertial Sensing Unit (ISU) 14 configured to be used in part by the computing unit 100 to determine a position of the imaging system 18 and/or the device 10. Therefore, the computing unit 100 may determine a set of coordinates describing the location of the imaging system 18, and thereby the location of the device 10, in a coordinate system based on the output of the ISU 14. Generation of the coordinate system is described hereinafter. The ISU 14 may comprise 3-axis accelerometer(s), 3-axis gyroscope(s), and/or magnetometer(s) and may provide velocity, orientation, and/or other position related information to the computing unit 100.

[00151] The ISU 14 may output measured information in synchronization with the capture of each image by the imaging system 18. The ISU 14 may be used to determine the set of coordinates describing the location of the device 10 for each captured image of a series of images. Therefore, each image may be associated with a set of coordinates of the device 10 corresponding to a location of the device 10 when the corresponding image was captured. Furthermore, information provided by the ISU 14 may be used to determine a coordinate system and/or a scale corresponding to the object to be characterized. Other approaches may be used to determine said scale, for instance by including a reference object whose size is known in the captured images, near the object to be characterized.
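
By way of non-limiting illustration, the following sketch shows one possible way of recovering such a scale factor from a reference object of known size. It is a minimal example only, assuming the reconstruction is available as an (N, 3) NumPy array and that the points belonging to the reference object have already been identified; the function and parameter names are illustrative.

```python
import numpy as np

def estimate_scale(ref_points, ref_known_length_m):
    """Estimate a metric scale factor for an arbitrarily scaled reconstruction.

    ref_points:         (M, 3) reconstructed points belonging to the reference object,
                        expressed in the (unitless) reconstruction frame.
    ref_known_length_m: known physical extent of the reference object, in metres.
    """
    # Use the diagonal of the reference object's bounding box as its measured length.
    extent = ref_points.max(axis=0) - ref_points.min(axis=0)
    measured_length = np.linalg.norm(extent)
    return ref_known_length_m / measured_length

# Example: rescale a reconstructed point cloud to metric units.
cloud = np.random.rand(1000, 3) * 5.0      # unitless reconstruction
ref = cloud[:50]                           # points on the reference object (assumed segmented)
cloud_metric = cloud * estimate_scale(ref, ref_known_length_m=0.30)
```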

[00152] Further, the device 10 may include a screen or display 16 capable of rendering color images, including 3D images. In some embodiments, the display 16 may be used to display live images captured by the imaging system 18, 3D point clouds, Augmented Reality (AR) images, Graphical User Interfaces (GUIs), program output, etc. In some embodiments, the display 16 may comprise and/or be housed with a touchscreen to permit users to input data via some combination of virtual keyboards, icons, menus, or other Graphical User Interfaces (GUIs). In some embodiments, the display 16 may be implemented using a Liquid Crystal Display (LCD) display or a Light Emitting Diode (LED) display, such as an Organic LED (OLED) display. In other embodiments, the display 16 may be remotely communicatively connected to the device 10 via a wired or a wireless connection (not shown), so that outputs of the computing unit 100 may be displayed at a location different from the location of the device 10. In this situation, the display 16 may be operationally coupled to, but housed separately from, other functional units and systems in the device 10. The device 10 may be, for example, an iPhone or mobile phone from Apple or a Galaxy mobile phone or tablet from Samsung, or any other mobile device whose features are similar or equivalent to the aforementioned features. The device 10 may be, for example and without being limitative, a handheld computer, a personal digital assistant, a cellular phone, a network device, a camera, a smart phone, an enhanced general packet radio service (EGPRS) mobile phone, a network base station, a media player, a navigation device, an e-mail device, a game console, or a combination of two or more of these data processing devices or other data processing devices.

[00153] The device 10 may comprise a memory 12 communicatively connected to the computing unit 100 and configured to store, without limitation, data, captured images, depth values, sets of coordinates of the device 10, 3D point clouds, and raw data provided by the ISU 14 and/or the imaging system 18. The memory 12 may be embedded in the device 10, as in the illustrated embodiment of FIG. 2, or located in an external physical location. The computing unit 100 may be configured to access a content of the memory 12 via a network (not shown) such as a Local Area Network (LAN) and/or a wireless connection such as a Wireless Local Area Network (WLAN).

[00154] The device 10 may also include a power system (not depicted) for powering the various components. The power system may include a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter and any other components associated with the generation, management and distribution of power in mobile or non-mobile devices.

[00155] As such, in at least some embodiments, the device 10 may also be suitable for generating the 3D point cloud, based on images of the object. Such images may have been captured by the imaging system 18. As an example, the device 10 may generate the 3D point cloud according to the teachings of Patent Cooperation Treaty Patent Publication No. 2020/240497, the disclosure of which is incorporated by reference herein in its entirety.

[00156] In summary, it is contemplated that the device 10 may perform the operations and steps of methods described in the present disclosure. More specifically, the device 10 may be suitable for capturing images of the object to be characterized, generating a 3D point cloud including data points and representative of the object, and executing methods for characterization of the 3D point cloud. In at least some embodiments, the device 10 is communicatively connected (e.g. via any wired or wireless communication link including, for example, 4G, LTE, Wi-Fi, or any other suitable connection) to an external computing device 23 (e.g. a server) adapted to perform some or all of the methods for characterization of the 3D point cloud. As such, operation of the computing unit 100 may be shared with the external computing device 23.

[00157] In this embodiment, the device 10 accesses the 3D point cloud by retrieving information about the data points of the 3D point cloud from the RAM 130 and/or the memory 12. In some other embodiments, the device 10 accesses a 3D point cloud by receiving information about the data points of the 3D point cloud from the external computing device 23.

[00158] With reference to FIGs. 2A-B, there is depicted a flow diagram of a computer-implemented method 200 suitable for use in accordance with at least some embodiments of the present technology. It is to be expressly understood that the method 200 as depicted is merely an illustrative implementation of the present technology. In some cases, what are believed to be helpful examples of modifications to the method 200 may also be set forth below. This is done merely as an aid to understanding, and, again, not to define the scope or set forth the bounds of the present technology. These modifications are not an exhaustive list, and, as a person skilled in the art would understand, other modifications are likely possible. Further, where this has not been done (i.e., where no examples of modifications have been set forth), it should not be interpreted that no modifications are possible and/or that what is described is the sole manner of implementing that element of the present technology. As a person skilled in the art would understand, this is likely not the case. In addition, it is to be understood that the device 10 may provide in certain instances simple implementations of the present technology, and that where such is the case they have been presented in this manner as an aid to understanding. As persons skilled in the art would understand, various implementations of the present technology may be of a greater complexity.

[00159] FIGs. 2A-B depict a flow diagram of a computer-implemented method 200 to obtain unique information pertaining to an object-of-interest, in accordance with the embodiments of the present disclosure. In one or more aspects, the method 200 or one or more steps thereof may be performed by a computer system, such as the computing unit 100. The method 200, or one or more steps thereof, may be embodied in computer-executable instructions that are stored in a computer-readable medium, such as a non-transitory mass storage device, loaded into memory and executed by a CPU. Some steps or portions of steps in the flow diagram may be omitted or changed in order.

[00160] As shown, the method of ontological recognition 200 incorporates a non-synthetic object- of-interest 201, a synthetic object-of-interest 202, a capture module 203, a virtual capture module 204, an ontological component database 205, an augmentation module 206, a routing module 207, an encoding operations module 208, an influence parameter module 209, a data fusion module 210, a unique information database 211 and a confidence and/or prediction module 212. The elements of the method of ontological recognition 200 will be described in detail below.

[00161] The method of ontological recognition 200 may be applied to non-synthetic objects and/or synthetic objects. In some embodiments, synthetic objects may comprise, for example, synthetic 3D models, CAD models, 3D models acquired from industrial-oriented 3D software, medical-oriented 3D software and/or non-specialized 3D software, and 3D models generated by processes such as RGB photogrammetry, RGB-D photogrammetry, and/or other reconstruction techniques from real objects. In some embodiments, non-synthetic objects may comprise any 3D point cloud generated by operations such as: RGB-based sensors; photogrammetry techniques such as Colmap™, Visual SFM™ or Open3D™; computational depth-based techniques such as machine learning-based depth, and disparity-based depth utilizing stereo cameras or multiple positions of a single camera; and RGB-D sensors, such as LiDAR and/or depth cameras, etc.

[00162] In the context of the present technology, a "non-synthetic object" may refer to any object in the real world. Non-synthetic objects are not synthesized using computer rendering techniques; rather, they are scanned or captured by any suitable means, such as a camera, an optical sensor, a depth sensor or the like, to generate or reconstruct a 3D point cloud representation of the non-synthetic 3D object using any "off-the-shelf" technique, including but not limited to photogrammetry, machine learning-based techniques, depth maps or the like. Certain non-limiting examples of a non-synthetic 3D object may be any real-world object such as a computer screen, a table, a chair, a coffee mug, a mechanical component on an assembly line, or any type of inanimate object or entity. Without limitation, a non-synthetic 3D object may also be an animated entity such as an animal, a plant, a human entity, or a portion thereof.

[00163] The method 200 begins with using the capture module 203 and/or the virtual capture module 204 to obtain the ontological components of an object-of-interest. The object-of-interest may be non-synthetic and/or synthetic. The ontological components of the object may be detected, identified and/or tracked by using MLA and/or Augmented Reality techniques.

[00164] In this embodiment, the capture module 203 may comprise an imaging system such as the imaging systems 18, 19. More specifically, a plurality of representations of the object have been captured, each representation having been captured by an imaging device (e.g. the imaging system 18 or the external imaging system 19) from a corresponding point of view, each representation being associated with feature data, the feature data including information about 3D coordinates of the corresponding point of view in a global coordinate system, and 3D coordinates of a set of feature points of the object in the global coordinate system.

[00165] In some embodiments, the capture module 203 is associated with pre-determined capture settings relative to the imaging system 18, 19 and the object-of-interest. The pre-determined capture settings comprise information about required parameters of the imaging system 18, 19, and required parameters of the imaging system 18, 19 relative to the object-of-interest, in order to capture ontological components associated with the object-of-interest. In those embodiments, an execution by the capture module 203 may comprise applying a setting configuration on the imaging system 18, 19 such that the capture setting parameters thereof match the pre-determined capture setting parameters of the capture module 203. For example, the capture settings may comprise parameters such as, for example, extrinsic and/or intrinsic parameters of the imaging system 18, 19, position, orientation, aiming point, focal length, focal length range, pixel size, sensor size, position of the principal point, and/or lens distortion.
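
A minimal sketch of how such pre-determined capture settings could be represented and compared against the current parameters of an imaging system is given below; the field names, tolerance scheme and numeric structure are assumptions made for illustration and do not reflect any particular imaging system.

```python
from dataclasses import dataclass, asdict

@dataclass
class CaptureSettings:
    # Intrinsic parameters (illustrative subset)
    focal_length_mm: float
    pixel_size_um: float
    principal_point_px: tuple      # (cx, cy)
    # Extrinsic parameters relative to the object-of-interest
    position_m: tuple              # (x, y, z) in a global coordinate system
    orientation_deg: tuple         # (roll, pitch, yaw)

def settings_match(current: CaptureSettings, required: CaptureSettings,
                   tolerances: dict) -> bool:
    """Return True when every listed capture parameter is within its allowed tolerance."""
    cur, req = asdict(current), asdict(required)
    for name, tol in tolerances.items():
        a, b = cur[name], req[name]
        values = zip(a, b) if isinstance(a, tuple) else [(a, b)]
        if any(abs(x - y) > tol for x, y in values):
            return False
    return True
```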

[00166] The capture settings may also comprise information regarding the position of the imaging system 18, 19 relative to the object-of-interest. As such, applying the setting configuration may comprise applying rotation, shifting and/or translation operations, and combinations thereof to the imaging system 18, 19 relative to the object-of-interest.

[00167] The capture settings may also comprise information regarding the position of the object- of-interest relative to imaging system 18, 19. As such, applying the setting configuration may comprise applying rotation, shifting and/or translation operations, and combinations thereof to the object-of- interest relative to imaging system 18, 19.

[00168] Alternatively, in other embodiments, the virtual capture module 204 may comprise a virtual camera and/or cameras acquiring a virtual/synthetic representation of the object-of-interest. More specifically, a plurality of representations of the object-of-interest have been captured, each representation having been captured by a virtual camera from a corresponding point of view, each representation being associated with feature data, the feature data including information about 3D coordinates of the corresponding point of view in a global coordinate system, and 3D coordinates of a set of feature points of the object in the global coordinate system.

[00169] In some embodiments, the virtual capture module 204 may comprise specified parameters. The specified parameters may include, but are not limited to, the location, number and/or parameters of the virtual cameras, an aiming point, and/or one or more object regions-of-interest.

[00170] The method 200 continues with the ontological components being stored in the ontological component database 205. Referring to FIG. 3, in accordance with the embodiments of the present disclosure, database 405 is an embodiment of the ontological component database 205, wherein the ontological component database 405 comprises a plurality of ontological components 'a' to 'x'.

[00171] In some embodiments, operation of the ontological component database 205 is further determined by the at least one influence parameter available from the influence parameter module 209. Examples of influence parameters include weights and biases. A weight controls the signal strength and/or decides the influence the input value will have on the output value. Biases, which are constant, do not have incoming connections, but have outgoing connections with their own corresponding weights. An illustrative example of an embodiment may be to determine that 3D representations will possess greater influence in subsequent operations. Another illustrative example may be that color is determined to possess greater influence, and depth is determined to possess lesser influence, in subsequent operations.
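
One simple reading of such influence parameters is sketched below, where each ontological component's feature vector is scaled by a weight before the components are combined and a constant bias is added; the component names and numeric values are purely illustrative.

```python
import numpy as np

# Illustrative influence parameters: one weight per ontological component, plus a bias.
influence = {
    "3d_point_cloud": 1.0,   # greater influence in subsequent operations
    "color":          0.8,
    "depth":          0.3,   # lesser influence in subsequent operations
}
bias = 0.05                  # constant term, carried on its own outgoing connection

def weighted_combination(features):
    """Combine per-component feature vectors (all of equal length) according to their weights."""
    parts = [influence[name] * vec for name, vec in features.items()]
    return np.sum(parts, axis=0) + bias
```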

[00172] In some embodiments, operation of the ontological component database 205 is prespecified. For example, if the object-of-interest is a cube, it could be prespecified that the ontological component database 205 may not consider curvature or angles other than 90°.

[00173] In some embodiments, the method 200 continues with the augmentation module 206 providing data augmentation of each ontological component. In some embodiments, said augmentation involves various types of noise, and/or a geometric transformation, and/or a sampling strategy. As shown in FIG. 4, in accordance with the embodiments of the present disclosure, augmentation module 500 incorporates a noise generation module 501, a geometric transformation generator module 502, and a sampling strategy module 503.

[00174] In some embodiments, the augmentation module 206 may provide various types of noise. In some implementations, the noise generator may generate the noise based on a random parameter. In some embodiments, the noise added by the noise generator is constrained such that the perturbation associated with the augmented ontological component is within a finite envelope. In some implementations, the noise generated by the noise generator may include, but is not limited to, white Gaussian noise, Voronoi noise and/or fractal noise. Additionally, common 2D-related noise types (e.g., salt-and-pepper, film grain, fixed-pattern, Perlin, simplex and/or Poisson noise) are applicable to the surface of a 3D model and may be generated by the noise generator.
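
For instance, the addition of white Gaussian noise constrained to a finite envelope could be sketched as follows, assuming the ontological component is a 3D point cloud stored as an (N, 3) array; the parameter values are illustrative only.

```python
import numpy as np

def add_bounded_gaussian_noise(points, sigma=0.01, envelope=0.03, rng=None):
    """Perturb each 3D point with white Gaussian noise clipped to a finite envelope.

    points:   (N, 3) array of 3D coordinates.
    sigma:    standard deviation of the Gaussian perturbation.
    envelope: maximum absolute displacement allowed per coordinate.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=points.shape)
    noise = np.clip(noise, -envelope, envelope)   # keep the perturbation within the envelope
    return points + noise
```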

[00175] In some embodiments, the augmentation module 206 may provide a geometric transformation, wherein one or more geometric transformations may be applied to the ontological component. Examples of one or more geometric transformations may include, but are not limited to, changing the size of the ontological component, applying a rotation to the ontological component, applying shifting and/or translation to the ontological component, scaling the ontological component, rotating the ontological component, translating the ontological component, applying a scaling translation to the ontological component, shearing the ontological component, applying a non-uniform scaling to the ontological component, forming a matrix representation of the ontological component, applying a 3D affine transformation to the ontological component, applying matrix operations to the ontological component, applying a reflection to the ontological component, dilating the ontological component, applying a tessellation to the ontological component, applying a projection to the ontological component, or some combination thereof. It is to be noted that any one or more of these transformations may be applied and, in the case of more than one transformation, the order in which these transformations are applied should not limit the scope of the present disclosure.

[00176] In some embodiments, the augmentation module 206 may apply one or more geometric transformations based on a random parameter. For example, the geometric transformation random parameter associated with applying the rotation may be a random angle between 0° and 360°. As another example, the geometric transformation random parameter associated with the shifting parameters may be conditioned according to a pre-defined world/scene maximum size while avoiding intersections between each 3D object's own bounding box. For different types of geometric transformations, the geometric transformation random parameter may have different values.
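
A minimal sketch of such a randomized geometric transformation is shown below, combining a random rotation about one axis (a random angle between 0° and 360°) with a translation bounded by an assumed scene size; the axis choice and bounds are illustrative.

```python
import numpy as np

def random_rigid_transform(points, max_shift=1.0, rng=None):
    """Apply a random rotation about the z-axis and a bounded random translation."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)          # random angle between 0° and 360°
    c, s = np.cos(theta), np.sin(theta)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])
    shift = rng.uniform(-max_shift, max_shift, 3)  # bounded by the assumed scene size
    return points @ rotation.T + shift
```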

[00177] In some embodiments, the augmentation module 206 may apply a sampling strategy, such as, but not limited to, farthest point sampling, random sampling, and/or feature-based sampling.
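
As one example among the strategies named above, farthest point sampling can be sketched as follows; this greedy formulation is a common one and is given here only to illustrate the sampling step.

```python
import numpy as np

def farthest_point_sampling(points, k, rng=None):
    """Greedily select k points that maximise the minimum distance to already selected points."""
    rng = rng or np.random.default_rng()
    n = points.shape[0]
    selected = [int(rng.integers(n))]              # random seed point
    min_dist = np.full(n, np.inf)
    for _ in range(k - 1):
        min_dist = np.minimum(min_dist,
                              np.linalg.norm(points - points[selected[-1]], axis=1))
        selected.append(int(np.argmax(min_dist)))
    return points[selected]
```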

[00178] In some embodiments, operation of the augmentation module 206 is further determined by the at least one influence parameter available from the influence parameter module 209. An illustrative example of an embodiment may be to determine the probability, percentage or intensity of the added noise, or a combination thereof. A specified percentage of the selected noise could be added to each point in a 3D point cloud. Another representative example may be to add a specified percentage or intensity of noise to each point of a 3D point cloud based on a probability, for example, every 10th point. A further example may be to determine the probability, percentage or intensity of a geometric transformation, wherein, concerning the removal of planes, the intensity of plane removal may be specified as a percentage of removal at each point within a 3D point cloud and/or as a probability of the number of points to receive plane removal, e.g. every 100th point.

[00179] In some embodiments, the augmentation module 206 may be omitted.

[00180] The method 200 continues with the routing module 207, wherein the routing module 207 defines a predefined list of connections between the ontological component database 205 and the encoding operations module 208.

[00181] In some embodiments, operation of the routing module 207 is determined by the at least one ontological component. For example, the routing module could provide the modalities of 3D point clouds and exclude all other ontological components. In another example, the routing module 207 could provide the modalities of partial 2D images and partial 3D point clouds and exclude all other ontological components.

[00182] In some embodiments, the routing module 207 is further determined by the at least one plurality of ontological components. For example, the routing module 207 may determine modalities of 2D images and 3D point clouds as a plurality of ontological components available to the encoding block. Another illustrative example is to determine the modalities of color, depth, and 3D point clouds as a plurality of ontological components available to the encoding block. In a further illustrative example, the routing module 207 determines a first plurality of 2D images and 3D point clouds available to the encoding block along with a second plurality of color, depth and partial 2D images as available to the encoding block.

[00183] In some embodiments, the routing module 207 is further determined by the at least one influence parameter available from the influence parameter module 209. An illustrative example of an embodiment may be to determine that, regardless of previous influence operations, partial 3D representations will possess greater influence in subsequent operations. Another illustrative example may be that a plurality of depth and color is determined to possess greater influence in subsequent operations, and 3D representations are determined to possess lesser influence in subsequent operations. Another illustrative example may be that a plurality of depth and color is determined to possess no influence in subsequent operations, and a plurality of 3D descriptors and 2D images is determined to possess lesser influence in subsequent operations.

[00184] In some embodiments, operation of the routing module 207 is further determined by the data fusion module 210. Other aspects and embodiments of the data fusion module 210 will be provided in detail in later sections of the present disclosure.

[00185] In some embodiments, the routing module 207 is specified. For example, if the object-of-interest is a cube, it could be determined that the routing module 207 may not consider curvature or angles other than 90°.

[00186] In some implementations, the routing module 207 comprises the at least one channel. For example, the routing module may comprise multiple channels each possessing the at least one ontological component. A further illustrative example depicts the routing module 207 comprising 2 channels (not shown) in which channel 1 possesses 3D point clouds and channel 2 possesses 2D images.

[00187] In some embodiments, the routing module comprises multiple channels each possessing the at least one plurality of ontological components. An illustrative example embodies the routing module 207 defining 2 channels; channel 1 possessing a plurality of 3D point clouds and 2D images; and channel 2 possessing color. A further illustrative example embodies the routing module 207 defining 2 channels; channel 1 possessing the plurality of 3D point clouds and 2D images; and channel 2 possessing the plurality of color and depth. A further illustrative example embodies the routing module 207 defining 3 channels; channel 1 possessing the plurality of 3D point clouds, 2D images, heat and partial 2D images; channel 2 possessing the plurality of color and depth; and channel 3 possessing curvature.
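
The channel-based routing described above could be represented, for illustration only, as a simple table listing which ontological components each channel carries and which encoders receive them; the channel, component and encoder names below are hypothetical.

```python
# Hypothetical routing table: each channel lists the ontological components it carries
# and the encoders of the encoding operations module that will receive them.
routing = {
    "channel_1": {"components": ["3d_point_cloud", "2d_image"],
                  "encoders":   ["encoder_1", "encoder_3"]},
    "channel_2": {"components": ["color", "depth"],
                  "encoders":   ["encoder_2"]},
}

def connections(routing_table):
    """Flatten the routing table into a list of (component, encoder) connections."""
    return [(component, encoder)
            for channel in routing_table.values()
            for component in channel["components"]
            for encoder in channel["encoders"]]

for component, encoder in connections(routing):
    print(f"{component} -> {encoder}")
```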

[00188] The method 200 continues with encoding operations, at the encoding operations module 208. In some embodiments, the encoding operations module 208 generates a dense representation of the object-of-interest such as, but not limited to, a vector, vectors, a feature vector, feature vectors, an array of numeric values, a confidence value, an array of confidence values, a value with a delta, and/or an array of values with a delta.
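
Purely as an illustration of what such a dense representation could look like, the toy encoding operation below reduces a 3D point cloud to a fixed-length feature vector and an accompanying confidence value; it is not the learned encoder contemplated by the present technology, merely a stand-in showing the expected inputs and outputs.

```python
import numpy as np

def toy_encode(points, n_bins=16):
    """Reduce an (N, 3) point cloud to a dense feature vector plus a confidence value
    by histogramming point distances to the centroid (illustrative only)."""
    centred = points - points.mean(axis=0)
    radii = np.linalg.norm(centred, axis=1)
    hist, _ = np.histogram(radii, bins=n_bins, range=(0.0, radii.max() + 1e-9))
    feature = hist / hist.sum()                    # dense, normalised representation
    confidence = float(1.0 - feature.std())        # illustrative confidence value
    return feature, confidence
```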

[00189] In some embodiments, the encoding operations module 208 may comprise the at least one encoder, neural network, any learned network, machine learning algorithm, and/or array of encoding operations.

[00190] In some embodiments, the encoding operations module 208 may comprise a plurality of encoders, neural networks, any learned networks, machine learning algorithms, and/or arrays of encoding operations. An illustrative example of an embodiment is depicted in FIG. 3, wherein encoding operations 408 comprise a plurality of encoders 1 to 'n'.

[00191] In some embodiments, the encoding operations module 208 may be pre-configured. For example, in some embodiments, the encoding operations module 208 may be pre-configured to differentiate the at least one ontological component, such as, but not limited to, a complete object, a partial object, the at least one resolution, a plurality of ontological component modalities, 3D information, 2D information, 1D information, raw data, intermediate representations, and/or feature vectors. Referring to FIG. 3, encoder 4081 may be pre-configured to possess ontological component 4051 'a', made available from database 405 by routing method 407. To further illustrate this example, encoder 4082 may possess ontological component 4052 'b'; and encoder 4083 may possess 4052 'b' and 4053 'x'.

[00192] In some embodiments, the encoding operations module 208 may be pre-configured to differentiate the at least one plurality of ontological components. Referring to FIG. 3, encoder 4081 may be pre-configured to possess ontological components 4051 'a' and 4052 'b', made available from database 405 by routing method 407. To further illustrate this example, encoder 4082 may possess ontological components 4051 'a', 4052 'b' and 4053 'x'; and encoder 4083 may possess 4052 'b' and 4053 'x'.

[00193] In some embodiments, the encoding operations module 208 is trained.

[00194] In some embodiments, the encoding operations module 208 is retrained for each object-of-interest.

[00195] In some embodiments, operation of the encoding operations module 208 is further determined by the at least one influence parameter available from the influence parameter module 209. Examples of influence parameters include weights and biases. An illustrative example of an embodiment may be to determine that, regardless of previous influence operations, partial 3D representations will possess greater influence in subsequent operations. Another illustrative example may be that a plurality of depth and color is determined to possess greater influence in subsequent operations, and 3D representations are determined to possess lesser influence in subsequent operations. Another illustrative example may be that a plurality of depth and color is determined to possess no influence in subsequent operations, and a plurality of 3D descriptors and 2D images is determined to possess lesser influence in subsequent operations.

[00196] Referring to FIG. 3, a further illustrative example of an embodiment may be to determine that ontological component 4051 'a' will possess greater influence in subsequent operations. Another illustrative example may be that ontological components 4051 'a' and 4053 'x' are determined to possess greater influence in subsequent operations, and that ontological component 4052 'b' is determined to possess lesser influence in subsequent operations.

[00197] In some embodiments, the at least one influence parameter is specified.

[00198] In some embodiments, the encoding operations module 208 is further determined by the data fusion module 210. Other aspects and embodiments of the data fusion module 210 will be provided in detail in later sections of the present disclosure.

[00199] In some embodiments, the encoding operations module 208 further implements at least one loopback routine. Referring to FIG. 3, 4081a depicts an encoder configured to comprise loopback routine 409.

[00200] In some embodiments, loopback routine 409 utilizes random parameters. Non-limiting examples of random parameters may be the sampling method, orientation and/or neural network randomization.

[00201] In some embodiments the number of cycles of the loopback routine 409 is specified.

[00202] The method 200 continues with the unique information database 211 storing information associated with an object-of-interest. In the current embodiment the information comprises information obtained from the non-synthetic object-of-interest 201, the synthetic object-of-interest 202, the capture module 203, the virtual capture module 204, the ontological component database 205, the augmentation module 206, the routing module 207, the encoding operations module 208, the influence parameter module 209, and/or the data fusion module 210.

[00203] In some embodiments, the information stored in the unique information database 211 is associated with a specified object-of-interest.

[00204] In some embodiments, the information stored in the unique information database 211 obtained from the non-synthetic object-of-interest 201 and/or the synthetic object-of-interest 202 comprises ID information. For example, ID information may include, but is not limited to, text, numbers, brand information, a serial number, model information, time, date, location, and/or distance.

[00205] In some embodiments, the information stored in the unique information database 211 is obtained from the capture module 203 and/or the virtual capture module 204.

[00206] In some embodiments, the information stored in the unique information database 211 is obtained from the ontological component database 205. Information from the ontological component database 205 may include the ontological components associated with an object-of-interest. Further information obtained from the ontological component database 205 may include influence parameters.

[00207] In some embodiments, the information stored in the unique information database 211 is obtained from the augmentation module 206. The information obtained from the augmentation module 206 is associated with a specified object-of-interest. Information obtained from the augmentation module 206 may include the augmentation operations such as noise types, geometric transformations and/or a sampling strategy. Further information obtained from the augmentation module 206 may include influence parameters and data fusion methods.

[00208] In some embodiments, the information stored in the unique information database 211 is obtained from the routing module 207. The information obtained from the routing module 207 is associated with an object-of-interest. The information comprises a list of connections between the ontological component database 205 and the encoding operations module 208. Operation of the routing module 207 may be determined by the at least one ontological component. Operation of the routing module 207 may also be determined by a plurality of ontological components, by influence parameters and/or by a data fusion operation.

[00209] In some embodiments, the information stored in the unique information database 211 is obtained from the encoding operations module 208. The information obtained from the encoding operations module 208 is associated with an object-of-interest. Information obtained from the encoding operations module 208 may include, but is not limited to, dense representations, the at least one encoder, the at least one plurality of encoders, the pre-configuration of encoders, the pre-configuration of encoders to differentiate the at least one ontological component and/or to differentiate the at least one plurality of ontological components, training data, loopback routines and/or associated loopback routine parameters. Further information obtained from the encoding operations module 208 may include influence parameters and a data fusion operation.

[00210] In some embodiments, the information stored in the unique information database 211 is obtained from the influence parameter module 209.

[00211] In some embodiments, the information stored in the unique information database 211 is obtained from the data fusion module 210.

[00212] In some embodiments, the method 200 utilizes the influence parameter module 209. Examples of influence parameters include weights and biases. A weight decides the influence the input value will have on the output value. Biases, which are constant, do not have incoming connections, but have outgoing connections with their own weights.

[00213] In some embodiments, influence parameters obtained from the influence parameter module 209 are specified.

[00214] In some embodiments, the method 200 utilizes the data fusion module 210. The data fusion module 210 provides methods for combining a plurality of data.

[00215] In some embodiments, the data fusion module 210 may include, but is not limited to, the methods 600, 700 and 800 depicted in FIGs. 5 to 7.

[00216] Referring to FIG. 5, in accordance with the embodiments of the present disclosure, method 600 depicts input-level fusion. Method 600 may apply a fusion operation 603 to remove correlated data contained in ontological components 601 and 602, and/or to fuse the data in a lower-dimensional common space. Method 600 may utilize operations such as, but not limited to, principal component analysis (PCA), canonical correlation analysis and/or independent component analysis.
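
A minimal sketch of input-level fusion in this spirit is given below, assuming two ontological components are available as per-sample feature matrices; PCA is computed directly with a singular value decomposition so that the example is self-contained, and n_dims must not exceed the number of samples or concatenated features.

```python
import numpy as np

def pca_input_fusion(component_a, component_b, n_dims=8):
    """Concatenate two ontological components and project the result into a
    lower-dimensional common space, removing correlated directions (PCA)."""
    fused = np.hstack([component_a, component_b])            # (samples, features_a + features_b)
    centred = fused - fused.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)   # principal directions
    return centred @ vt[:n_dims].T                           # reduced, decorrelated representation
```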

[00217] Referring to FIG. 6, in accordance with the embodiments of the present disclosure, method 700 depicts intermediate fusion. Method 700 utilizes the at least one plurality of configurations of the encoding operations module 208 to obtain higher level representations of ontological components.

[00218] In some embodiments of method 700, each encoder may utilize a linear and/or nonlinear function to combine different modalities into a single representation, enabling an encoder to process a joint or shared representation of each modality. Differing modalities may be combined simultaneously into a single shared representation, and/or one or multiple modalities may be combined in a step-wise fashion as illustrated in method 700.
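
An intermediate-fusion step of this kind could be sketched as follows, where per-modality embeddings produced by earlier encoders are weighted, concatenated and passed through a nonlinearity to form a single shared representation; the modality names, weights and choice of nonlinearity are assumptions made for illustration.

```python
import numpy as np

def intermediate_fusion(embeddings, weights=None):
    """Combine per-modality embeddings into one joint representation.

    embeddings: dict mapping a modality name to the 1D feature vector from its encoder.
    weights:    optional influence parameters applied before combination.
    """
    weights = weights or {name: 1.0 for name in embeddings}
    joint = np.concatenate([weights[name] * vec for name, vec in embeddings.items()])
    return np.tanh(joint)    # simple nonlinearity over the joint representation

shared = intermediate_fusion({"3d": np.ones(4), "color": np.zeros(3)}, {"3d": 1.0, "color": 0.5})
```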

[00219] Referring to FIG. 7, in accordance with the embodiments of the present disclosure, method 800 depicts late and/or decision-level fusion, which is subsequent to encoding operations by the encoding operations module 208. Method 800 may utilize operations such as, but not limited to, Bayes' rule, max-fusion, and/or average-fusion.
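
Decision-level fusion of this kind can be illustrated with the short sketch below, which averages or takes the maximum of per-encoder confidence scores; the encoder names and scores are invented for the example.

```python
import numpy as np

def late_fusion(confidences, mode="average"):
    """Decision-level fusion of per-encoder confidence scores."""
    scores = np.asarray(list(confidences.values()), dtype=float)
    return float(scores.max()) if mode == "max" else float(scores.mean())

fused = late_fusion({"encoder_1": 0.92, "encoder_2": 0.81, "encoder_3": 0.88})  # average-fusion
```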

[00220] The method 200 continues in FIG. 2B subsequent to storing information associated with an object-of-interest in the unique information database 211. Method 200 is further designed to provide an infrastructure to obtain confidence values and/or predictions pertaining to an object-to-be-inferred.

[00221] Referring to FIG. 2B, ID data associated with a non-synthetic object-of-interest 201a and/or a synthetic object-of-interest 202a is supplied to the unique information database 211. The closest-match ID data contained within the unique information database 211 is determined. The closest-match data may be utilized to configure the ontological component database 205, the routing module 207 and the encoding operations module 208.

[00222] In some embodiments, the closest-match data is utilized to configure the capture module 203 and/or the virtual capture module 204.

[00223] In some embodiments, the augmentation module 206 may be omitted.

[00224] In some embodiments, the closest-match data is further utilized to determine influence parameters in the modules 205, 207 and 208.

[00225] In some embodiments, the closest-match data is further utilized to determine fusion operations in the routing module 207 and in the encoding operations module 208.

[00226] In some embodiments, the closest-match data is utilized to determine operations of the influence parameter module 209 and of the data fusion module 210.

[00227] In some embodiments, ID data associated with non-synthetic object-of-interest 201a and/or synthetic object-of-interest 202a is stored in the unique information database 211.

[00228] In some embodiments, the dense representation of the object-to-be-inferred generated by the encoding operations module 208 is stored in the unique information database 211 with the associated configuration data of method 200.

[00229] Subsequent to the configuration of method 200 by the closest-match ID data, the combination of one or more of the capture module 203, the virtual capture module 204, the ontological component database 205, the augmentation module 206, the routing module 207, the encoding operations module 208, the influence parameter module 209, and/or the data fusion module 210 may be utilized to determine a confidence and/or prediction for the non-synthetic object-to-be-inferred 201a and/or the synthetic object-to-be-inferred 202a at the confidence and/or prediction module 212.

[00230] For example, in some embodiments, subsequent to the configuration of method 200 by the closest-match data, the ontological component database 205, the augmentation module 206, the routing module 207, the encoding operations module 208, the influence parameter module 209 and the data fusion module 210 are utilized to determine a confidence and/or prediction for the non-synthetic object-to-be-inferred 201a and/or the synthetic object-to-be-inferred 202a at the confidence and/or prediction module 212. In particular, the object-to-be-inferred may be authenticated using aspects of the method 200. Any variant of the method 200 as applied for generating unique information associated with the object-of-interest may also be employed for authenticating the object-to-be-inferred.

[00231] In one non-limiting example, after the object-of-interest has been processed using the method 200, its unique information (i.e. first unique information) has been stored in the unique information database 211. Thereafter, the same or a similar object, herein called the "object-to-be-inferred", is evaluated in order to determine its authenticity. Second unique information of the object-to-be-inferred is acquired using the method 200, without necessarily storing the second unique information in the unique information database 211. The first and second unique information are compared to evaluate a confidence level. If the first and second unique information match to a high level, the confidence level meets or exceeds a predetermined confidence threshold and the object-to-be-inferred is authenticated as being the same as the object-of-interest. If the comparison does not meet the predetermined confidence threshold, the object-to-be-inferred is not the same as the object-of-interest. For example and without limitation, the object-to-be-inferred may be a counterfeit copy of the object-of-interest. In the context of the present technology, no limitation is intended to the type of object-of-interest and to the type of object-to-be-inferred, which may include animated or inanimate, synthetic or non-synthetic, and real or virtual objects.
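
For illustration, such a comparison could be performed by measuring the similarity between the first and second unique information and testing it against the predetermined confidence threshold. The sketch below assumes the unique information is available as fixed-length numeric vectors and uses cosine similarity as the (illustrative) confidence measure; the function name and threshold value are examples only.

```python
import numpy as np

def authenticate(first_unique_info, second_unique_info, threshold=0.95):
    """Compare two dense representations and authenticate the object-to-be-inferred
    when the confidence level meets or exceeds the predetermined threshold."""
    a = np.asarray(first_unique_info, dtype=float)
    b = np.asarray(second_unique_info, dtype=float)
    confidence = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # cosine similarity
    return confidence >= threshold, confidence

is_authentic, confidence = authenticate([0.10, 0.72, 0.18], [0.11, 0.70, 0.19])
```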

[00232] In some embodiments, subsequent to the configuration of method 200 by the closest-match data, the ontological component database 205, the augmentation module 206, the routing module 207 and the encoding operations module 208 are utilized to determine a confidence and/or prediction for the non-synthetic object-to-be-inferred 201a and/or the synthetic object-to-be-inferred 202a at the confidence and/or prediction module 212.

[00233] In some embodiments, subsequent to the configuration of method 200 by the closest-match data, the ontological component database 205, the routing module 207 and the encoding operations module 208 are utilized to determine a confidence and/or prediction for the non-synthetic object-to-be-inferred 201a and/or the synthetic object-to-be-inferred 202a at the confidence and/or prediction module 212.

[00234] In some embodiments, when the closest-match data contained in the unique information database 211 is considered equivalent to the confidence and/or prediction provided by the confidence and/or prediction module 212, the object-to-be-inferred and the object-of-interest are considered to be equivalent, the object-to-be-inferred being authenticated in embodiments where the present technology is used for authentication purposes.

[00235] In some embodiments, subsequent to the configuration of method 200 by the closest-match data, the encoding operations module 208 determines a dense representation of the object-to-be-inferred, wherein the dense representation is stored in the unique information database 211 with the associated configuration data of method 200.

[00236] It will be understood that the features and examples above are not meant to limit the scope of the present disclosure to a single implementation, as other implementations are possible by way of interchange of some or all of the described or illustrated elements. Moreover, where certain elements of the present disclosure can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the present disclosure are described, and detailed descriptions of other portions of such known components are omitted so as not to obscure the disclosure. In the present specification, an implementation showing a singular component should not necessarily be limited to other implementations including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein. Moreover, applicants do not intend for any term in the specification or claims to be ascribed an uncommon or special meaning unless explicitly set forth as such. Further, the present disclosure encompasses present and future known equivalents to the known components referred to herein by way of illustration.

[00237] The foregoing description of the specific implementations so fully reveals the general nature of the disclosure that others can, by applying knowledge within the skill of the relevant art(s) (including the contents of any documents cited and incorporated by reference herein), readily modify and/or adapt such specific implementations for various applications, without undue experimentation and without departing from the general concept of the present disclosure. Such adaptations and modifications are therefore intended to be within the meaning and range of the disclosed implementations, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance presented herein, in combination with the knowledge of one skilled in the relevant art(s).

[00238] While the above-described implementations have been described and shown with reference to particular steps performed in a particular order, it will be understood that these steps may be combined, sub-divided, or re-ordered without departing from the teachings of the present technology. The steps may be executed in parallel or in series. Accordingly, the order and grouping of the steps is not a limitation of the present technology.

[00239] While various implementations of the present disclosure have been described above, it should be understood that they have been presented by way of example, and not limitation. It would be apparent to one skilled in the relevant art(s) that various changes in form and detail could be made therein without departing from the spirit and scope of the disclosure. Thus, the present disclosure should not be limited by any of the above-described implementations but should be defined only in accordance with the following claims and their equivalents.