

Title:
INTEGRATING DATA ENTITY AND SEMANTIC ENTITY
Document Type and Number:
WIPO Patent Application WO/2017/123712
Kind Code:
A1
Abstract:
A hierarchical layered architecture may include structures, such as a structure of hierarchical resource data layer over semantic layer with distributed semantic leaves or a structure of semantic layer with graph store over hierarchical resource data layer. A parallel architecture may include data entity and semantic entity exchanging messages to support operations, such as semantic operations that may be based on data entity and data, and resource management that may be based on semantic mash-up.

Inventors:
LI QING (US)
LI XU (US)
WANG CHONGGANG (US)
Application Number:
PCT/US2017/013125
Publication Date:
July 20, 2017
Filing Date:
January 12, 2017
Assignee:
CONVIDA WIRELESS LLC (US)
International Classes:
G06F17/30; H04W4/70; G06F9/54; G06F21/62
Foreign References:
US20150227618A12015-08-13
Other References:
JI EUN KIM ET AL: "Seamless Integration of Heterogeneous Devices and Access Control in Smart Homes", INTELLIGENT ENVIRONMENTS (IE), 2012 8TH INTERNATIONAL CONFERENCE ON, IEEE, 26 June 2012 (2012-06-26), pages 206 - 213, XP032218223, ISBN: 978-1-4673-2093-1, DOI: 10.1109/IE.2012.57
SHANCANG LI ET AL: "The internet of things: a survey", INFORMATION SYSTEMS FRONTIERS, vol. 17, no. 2, 26 April 2014 (2014-04-26), NL, pages 243 - 259, XP055368460, ISSN: 1387-3326, DOI: 10.1007/s10796-014-9492-7
Attorney, Agent or Firm:
SAMUELS, Steven, B. et al. (US)
Claims:
What is Claimed:

1. An apparatus comprising:

a processor; and

a memory coupled with the processor, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations comprising:

receiving, by a semantic layer from an external entity, a resource access request, wherein the semantic layer is an interface before a data layer;

responsive to receiving the resource access request, determining, by the semantic layer, that the requested resource access operation is allowed based on an access control policy of a triplestore; and

sending, by the semantic layer, the resource access request to the data layer for a resource of the resource access request to be obtained.

2. The apparatus according to any one of the preceding claims, the operations further comprising:

receiving, by the semantic layer, a result of the resource access request from the data layer; and

sending, by the semantic layer to the external entity, a response that comprises the result of the resource access request from the data layer.

3. The apparatus according to any one of the preceding claims, wherein the external entity is a common services entity of another apparatus.

4. The apparatus according to any one of the preceding claims, the operations further comprising linking triples of the triplestore of the semantic layer to a resource of the data layer.

5. The apparatus of claim 1, the operations further comprising linking triples of the triplestore of the semantic layer to a resource of the data layer, wherein the linking is based on index mapping.

6. The apparatus according to any one of the preceding claims, the operations further comprising linking triples of the triplestore of the semantic layer to a resource of the data layer, wherein the linking is based on index mapping using a uniform resource identifier (URI).

7. The apparatus according to any one of the preceding claims, wherein the semantic layer and the data layer are located on the apparatus.

8. The apparatus according to any one of the preceding claims, wherein the semantic layer is located on the apparatus and the data layer is located on another apparatus.

9. A method comprising:

receiving, by a semantic layer from an external entity, a resource access request, wherein the semantic layer is an interface before a data layer;

responsive to receiving the resource access request, determining, by the semantic layer, that the requested resource access operation is allowed based on an access control policy of a triplestore; and

sending, by the semantic layer, the resource access request to the data layer for a resource of the resource access request to be obtained.

10. The method of claim 9, further comprising:

receiving, by the semantic layer, a result of the resource access request from the data layer; and

sending, by the semantic layer to the external entity, a response that comprises the result of the resource access request from the data layer.

11. The method according to any one of claims 9 or 10, further comprising linking triples of the triplestore of the semantic layer to a resource of the data layer.

12. The method according to any one of claims 9, 10, or 11, further comprising linking triples of the triplestore of the semantic layer to a resource of the data layer, wherein the linking is based on index mapping.

13. The method according to any one of claims 9, 10, 11, or 12, further comprising linking triples of the triplestore of the semantic layer to a resource of the data layer, wherein the linking is based on index mapping using a uniform resource identifier (URI).

14. The method according to any one of claims 9, 10, 11, 12, or 13, wherein the semantic layer is located on a first apparatus and the data layer is located on a second apparatus.

15. A computer program product comprising a computer readable medium, having stored thereon a computer program comprising program instructions, the computer program being loadable into a data-processing unit and adapted to cause the data-processing unit to execute method steps according to any of claims 9 to 14 when the computer program is run by the data-processing unit.

Description:
INTEGRATING DATA ENTITY AND SEMANTIC ENTITY

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application No.

62/278,221, filed on January 13, 2016, entitled "Integrating Data Entity And Semantic Entity," the contents of which are hereby incorporated by reference herein.

BACKGROUND

Semantic Web

[0001] The Semantic Web is an extension of the Web through standards by the World Wide Web Consortium (W3C). The standards promote common data formats and exchange protocols on the Web, most fundamentally the Resource Description Framework (RDF).

[0002] The Semantic Web involves publishing in languages specifically designed for data: Resource Description Framework (RDF), Web Ontology Language (OWL), and Extensible Markup Language (XML). These technologies are combined to provide descriptions that supplement or replace the content of Web documents via a web of linked data. Thus, content may manifest itself as descriptive data stored in Web-accessible databases, or as markup within documents, particularly in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues stored separately.

The Semantic Web Stack

[0003] The Semantic Web Stack illustrates the architecture of the Semantic Web specified by W3C, as shown in FIG. 1. The functions and relationships of the components can be summarized as follows. XML provides an elemental syntax for content structure within documents, yet associates no semantics with the meaning of the content contained within. XML is not at present a necessary component of Semantic Web technologies in most cases, as alternative syntaxes exist, such as Turtle. Turtle is a de facto standard, but has not been through a formal standardization process. XML Schema is a language for providing and restricting the structure and content of elements contained within XML documents.

[0004] RDF is a simple language for expressing data models, which refer to objects ("web resources") and their relationships in the form of subject-predicate-object, i.e., an S-P-O triple or RDF triple. An RDF-based model can be represented in a variety of syntaxes, e.g., RDF/XML, N3, Turtle, and RDFa. RDF is a fundamental standard of the Semantic Web. RDF Schema extends RDF and is a vocabulary for describing properties and classes of RDF-based resources, with semantics for generalized hierarchies of such properties and classes.
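As a minimal illustration (not part of the disclosure), S-P-O triples can be modeled as plain tuples; the "ex:" and "rdf:" names below are hypothetical placeholders:

```python
# Hypothetical RDF triples in subject-predicate-object (S-P-O) form.
# "ex:" is an illustrative namespace prefix, not from the disclosure.
triples = {
    ("ex:sensor1", "rdf:type", "ex:TemperatureSensor"),
    ("ex:sensor1", "ex:locatedIn", "ex:room1"),
    ("ex:room1", "ex:partOf", "ex:building1"),
}

# Each triple relates a web resource (subject) to a value or another
# resource (object) via a named relationship (predicate).
for s, p, o in sorted(triples):
    print(s, p, o)
```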

[0005] OWL adds more vocabulary for describing properties and classes: among others, relations between classes (e.g., disjointness), cardinality (e.g., "exactly one"), equality, richer types of properties, characteristics of properties (e.g., symmetry), and enumerated classes.

[0006] SPARQL is a protocol and query language for semantic web data sources, used to query and manipulate RDF graph content (i.e., RDF triples) on the Web or in an RDF store (i.e., a Semantic Graph Store). SPARQL 1.1 Query, a query language for RDF graphs, can be used to express queries across diverse data sources, whether the data is stored natively as RDF or viewed as RDF via middleware. SPARQL contains capabilities for querying required and optional graph patterns along with their conjunctions and disjunctions. SPARQL also supports aggregation, subqueries, negation, creating values by expressions, extensible value testing, and constraining queries by source RDF graph. The results of SPARQL queries can be result sets or RDF graphs. SPARQL 1.1 Update is an update language for RDF graphs. It uses a syntax derived from the SPARQL Query Language for RDF. Update operations are performed on a collection of graphs in a Semantic Graph Store. Operations are provided to update, create, and remove RDF graphs in a Semantic Graph Store. RIF is the W3C Rule Interchange Format, an XML language for expressing Web rules that computers can execute. RIF provides multiple versions, called dialects, including the RIF Basic Logic Dialect (RIF-BLD) and the RIF Production Rules Dialect (RIF-PRD).
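The basic graph-pattern matching at the core of a SPARQL query can be sketched in a few lines; this is a toy matcher over hypothetical data, not a SPARQL implementation:

```python
# A toy basic-graph-pattern matcher over S-P-O triples, sketching how a
# query such as  SELECT ?s WHERE { ?s ex:locatedIn ex:room1 }
# conceptually binds variables. Variables start with "?".
triples = [
    ("ex:sensor1", "ex:locatedIn", "ex:room1"),
    ("ex:sensor2", "ex:locatedIn", "ex:room2"),
    ("ex:sensor1", "rdf:type", "ex:TemperatureSensor"),
]

def match(pattern, triple):
    """Return variable bindings if the triple matches the pattern, else None."""
    bindings = {}
    for term, value in zip(pattern, triple):
        if term.startswith("?"):
            bindings[term] = value
        elif term != value:
            return None
    return bindings

results = [b for t in triples
           if (b := match(("?s", "ex:locatedIn", "ex:room1"), t)) is not None]
print(results)  # → [{'?s': 'ex:sensor1'}]
```

A real engine additionally joins multiple patterns, handles OPTIONAL/UNION, and evaluates filters; this sketch shows only single-pattern variable binding.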

Semantic Search and Semantic Query

[0007] Relational Databases contain relationships between data in an implicit manner only. For example, the relationships between customers and products (stored in two content-tables and connected with an additional link-table) only come into existence in a query statement (i.e., SQL in the case of relational databases) written by a developer. Writing the query demands exact knowledge of the database schema. Many Relational Databases are modeled as Hierarchical Databases, in which the data is organized into a tree-like structure. The data is stored as records which are connected to one another through links. A record in the hierarchical database model corresponds to a row (or tuple) in the relational database model, and an entity type corresponds to a table (or relation - parent & child). A search or query of a record may be conducted by SQL or non-SQL search engines.

[0008] As shown in FIG. 2, a hierarchical database model mandates that each child record has only one parent, whereas each parent record can have one or more child records. In order to retrieve data from a hierarchical database, the whole tree needs to be traversed starting from the root node. This structure is simple but inflexible because relationships are confined to one-to-many.
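The root-first traversal described above can be sketched as a depth-first walk over a hypothetical record tree (the record names are illustrative):

```python
# A toy hierarchical database: each parent record links to its children
# (one-to-many), and retrieval must traverse from the root node.
tree = {
    "root": ["customers", "products"],
    "customers": ["cust1", "cust2"],
    "products": ["prodA"],
    "cust1": [], "cust2": [], "prodA": [],
}

def find(node, target):
    """Depth-first traversal following the one-to-many parent/child links."""
    if node == target:
        return True
    return any(find(child, target) for child in tree[node])

print(find("root", "cust2"))  # → True
```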

[0009] Linked-Data contain relationships between data in an explicit manner. In the relational database example described above, no query code needs to be written: the correct product for each customer can be fetched automatically. While this simple example is trivial, the real power of linked-data comes into play when a network of information is created (customers with their geo-spatial information like city, state, and country; products with their categories within sub- and super-categories). The system can then automatically answer more complex queries and analytics that look for the connection of a particular location with a product category. The development effort for such a query is avoided. Executing a Semantic Query is conducted by walking the network of information and finding matches (also called Data Graph Traversal).

[0010] Semantic Search seeks to improve search accuracy by understanding searcher intent and the contextual meaning of terms as they appear in the searchable dataspace, whether on the Web or within a closed system, to generate more relevant results. Semantic search systems consider various points including context of search, location, intent, and variation of words, synonyms, generalized and specialized queries, concept matching and natural language queries to provide relevant search results. Major web search engines like Google and Bing incorporate some elements of Semantic Search. Semantic Search uses semantics, or the science of meaning in language, to produce highly relevant search results. In most cases, the goal is to deliver the information queried by a user rather than have a user sort through a list of loosely related keyword results. For example, semantics may be used to enhance a record search or query in a hierarchical Relational Database.

[0011] Semantic Query allows for queries and analytics of an associative and contextual nature. Semantic queries enable the retrieval of both explicitly and implicitly derived information based on syntactic, semantic, and structural information contained in data. They are designed to deliver precise results (possibly the distinctive selection of one single piece of information) or to answer fuzzier and more open-ended questions through pattern matching and digital reasoning.

[0012] Semantic queries work on named graphs, linked-data or triples. This enables the query to process the actual relationships between information and infer the answers from the network of data. This is in contrast to Semantic Search, which uses semantics (the science of meaning) in unstructured text to produce a better search result (i.e. Natural language processing).

[0013] From a technical point of view, semantic queries are precise relational-type operations, much like a database query. They work on structured data and therefore can utilize comprehensive features like operators (e.g., >, <, and =), namespaces, pattern matching, subclassing, transitive relations, semantic rules, and contextual full-text search. The semantic web technology stack of the W3C offers SPARQL to formulate semantic queries in a syntax similar to SQL. Semantic queries are used in triplestores, graph databases, semantic wikis, natural language, and artificial intelligence systems.

[0014] Another important aspect of semantic queries is that the type of the relationship can be used to incorporate intelligence into the system. The relationship between a customer and a product has a fundamentally different nature than the relationship between a neighborhood and its city. The latter enables the semantic query engine to infer that a customer living in Manhattan is also living in New York City, whereas other relationships might have more complicated patterns and "contextual analytics." This process is called inference or reasoning and is the ability of the software to derive new information based on given facts.
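The Manhattan/New York City inference above amounts to forward-chaining a single rule (lives-in composed with part-of) until no new facts appear. A minimal sketch over hypothetical facts (all names are illustrative, not from the disclosure):

```python
# Forward-chaining one inference rule over S-P-O facts:
#   (x livesIn a) and (a partOf b)  =>  (x livesIn b)
facts = {
    ("ex:customer1", "ex:livesIn", "ex:Manhattan"),
    ("ex:Manhattan", "ex:partOf", "ex:NewYorkCity"),
    ("ex:NewYorkCity", "ex:partOf", "ex:NewYorkState"),
}

def infer(facts):
    """Derive new livesIn facts until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = {(x, "ex:livesIn", c)
               for (x, p1, a) in derived if p1 == "ex:livesIn"
               for (b, p2, c) in derived if p2 == "ex:partOf" and a == b}
        if not new <= derived:
            derived |= new
            changed = True
    return derived

inferred = infer(facts)
print(("ex:customer1", "ex:livesIn", "ex:NewYorkCity") in inferred)  # → True
```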

Semantic Internet of Things (IoT)

[0015] The rapid increase in the number of network-connected devices and sensors deployed in our world is changing information communication networks and the services or applications in various domains. It is predicted that within the next decade billions of devices will generate large volumes of real world data for many applications and services in a variety of areas such as smart grids, smart homes, healthcare, automotive, transport, logistics, and environmental monitoring. The Internet of Things (IoT) enables the integration of real world data and services into current information networks.

[0016] Integration of data from various physical, cyber, and social resources enables developing applications and services that can incorporate situation and context-awareness into decision making mechanisms and can create smarter applications and enhanced services. In dealing with large volumes of distributed and heterogeneous IoT data, issues related to interoperability, automation, and data analytics will require common description and data representation frameworks and machine readable and machine-interpretable data descriptions. Applying semantic technologies to the IoT promotes interoperability among various resources and data providers and consumers, and facilitates effective data access and integration, resource discovery, semantic reasoning, and knowledge extraction. Semantic annotations can be applied to various resources in the IoT. The suite of technologies developed in the Semantic Web, such as ontologies, semantic annotation, Linked Data, and semantic Web services, can be used as principal solutions for realizing the semantic IoT.

[0017] However, the following challenges from IoT require special design considerations to be taken into account to effectively apply semantic technologies to real world data.

[0018] Dynamicity and Complexity: real world data is more transient, and mostly time and location dependent. The pervasiveness and volatility of the underlying environments require continuous updates and monitoring of the descriptions.

[0019] Scalability: IoT data refers to different phenomena in the real world, so semantic descriptions and annotations of data need to be associated with domain knowledge of real world resources and entities so as to scale to different and dynamic real world situations.

[0020] Distributed Data Storage/Query: with large volumes of data and semantic descriptions, the efficiency of storage and data handling mechanisms becomes a key challenge, especially considering the scale and dynamicity involved.

[0021] Quality, Trust, and Reliability of Data: IoT data is provided by different sensory devices. Inaccuracy and varying quality in IoT data are unavoidable.

[0022] Security and Privacy: IoT data is often personal. Mechanisms to provide and guarantee the security and privacy of data are crucial issues in IoT.

[0023] Interpretation and Perception of Data: semantic descriptions and background knowledge, provided in machine-readable and interpretable formats, will support transforming enormous amounts of raw observations created by machine and human sensors into higher-level abstractions that are meaningful for human or automated decision making processes. However, machine perception in IoT adds challenges to the problems that conventional AI methods have been trying to solve in the past, e.g., integration and fusion of data from different sources, describing objects and events, data aggregation and fusion rules, defining thresholds, real-time processing of data streams at large scale, and quality and dynamicity issues.

oneM2M Architecture

[0024] The oneM2M standard (oneM2M-TS-0001, oneM2M Functional Architecture, V-1.6.1) under development defines a service layer called the common service entity (CSE), as illustrated in FIG. 3. The Mca reference point interfaces with an application entity (AE). The Mcc reference point interfaces with another CSE within the same service provider domain, and the Mcc' reference point interfaces with another CSE in a different service provider domain. The Mcn reference point interfaces with the underlying network service entity (NSE). An NSE provides underlying network services to the CSEs, such as device management, location services, and device triggering. A CSE contains multiple logical functions called "Common Service Functions (CSFs)", such as "Discovery" or "Data Management & Repository." FIG. 4 illustrates example CSFs for oneM2M.

[0025] The oneM2M architecture supports the application service node (ASN), the application dedicated node (ADN), the middle node (MN), and the infrastructure node (IN). The ASN is a node that contains one CSE and contains at least one AE. An example of physical mapping is an ASN residing in an M2M Device. The ADN is a node that contains at least one AE and does not contain a CSE. An example of physical mapping is an ADN residing in a constrained M2M Device. An MN is a node that contains one CSE and contains zero or more AEs. An example of physical mapping for an MN is an MN residing in an M2M Gateway. The IN is a node that contains one CSE and contains zero or more AEs. An example of physical mapping for an IN is the IN residing in an M2M Service Infrastructure.

[0026] There also may be a non-oneM2M node, which is a node that does not contain oneM2M Entities (neither AEs nor CSEs). Such nodes represent devices attached to the oneM2M system for interworking purposes, including management. The possible configurations of inter-connecting the various entities supported within the oneM2M system are illustrated in FIG. 5.

Semantic Description in oneM2M Architecture

[0027] FIG. 6 illustrates an exemplary structure of <semanticDescriptor> resource in a resource tree. The <semanticDescriptor> resource is used to store a semantic description pertaining to a resource and potentially subresources. Such a description may be provided according to ontologies. The semantic information is used by the semantic functionalities of the oneM2M system and is also available to applications or CSEs.

[0028] The <semanticDescriptor> resource shall contain the attributes specified in Table 1.

Table 1: Attributes of <semanticDescriptor> Resource

Access Control Policy in oneM2M

[0029] As shown in FIG. 7, the <accessControlPolicy> resource comprises privileges and selfPrivileges attributes, which represent a set of access control rules defining which entities (defined by accessControlOriginators) have the privilege to perform certain operations (defined by accessControlOperations) within specified contexts (defined by accessControlContexts) and are used by the CSEs in making an Access Decision for specific resources.

[0030] In a privilege, each access control rule defines which AE/CSE is allowed to perform which operation. Thus, for a set of access control rules, an operation is permitted if it is permitted by one or more access control rules in the set. For a resource that is not of the <accessControlPolicy> resource type, the common attribute accessControlPolicyIDs for such resources contains a list of identifiers which link that resource to <accessControlPolicy> resources. The CSE Access Decision for such a resource shall follow the evaluation of the set of access control rules expressed by the privileges attributes defined in the <accessControlPolicy> resources.
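The permit-if-any-rule-permits evaluation described above can be sketched as follows; the rule tuples and identifiers are hypothetical, and the accessControlContexts element of each 3-tuple is omitted for brevity:

```python
# Each rule below mirrors an access-control-rule-tuple, reduced to
# (accessControlOriginators, accessControlOperations) for illustration.
# The AE/CSE identifiers are hypothetical, not from the disclosure.
rules = [
    ({"AE-1", "AE-2"}, {"RETRIEVE", "DISCOVER"}),
    ({"CSE-9"}, {"CREATE", "UPDATE", "DELETE"}),
]

def access_decision(originator, operation):
    """Permit if at least one rule in the set permits the request."""
    return any(originator in origs and operation in ops
               for origs, ops in rules)

print(access_decision("AE-1", "RETRIEVE"))  # → True
print(access_decision("AE-1", "DELETE"))    # → False
```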

[0031] The selfPrivileges attribute shall represent the set of access control rules for the <accessControlPolicy> resource itself. The CSE Access Decision for <accessControlPolicy> resource shall follow the evaluation of the set of access control rules expressed by the selfPrivileges attributes defined in the <accessControlPolicy> resource itself.

[0032] The <accessControlPolicy> resource shall contain the attributes specified in Table 2.

Table 2: Attributes of <accessControlPolicy> Resource

[0033] The set of Access Control Rules represented in the privileges and selfPrivileges attributes comprises the 3-tuples described below. The accessControlOriginators is a mandatory parameter in an access-control-rule-tuple. It represents the set of Originators that shall be allowed to use this access control rule. The set of Originators is described as a list of parameters, where the types of the parameters can vary within the list. Table 3 describes the supported types of parameters in accessControlOriginators.

Table 3: Types of Parameters in accessControlOriginators

[0034] When the originatorID is the resource-ID of a <group> resource which contains <AE> or <remoteCSE> as a member, the Hosting CSE of the resource shall check if the originator of the request matches one of the members in the memberIDs attribute of the <group> resource (e.g., by retrieving the <group> resource). If the <group> resource cannot be retrieved or does not exist, the request shall be rejected.

[0035] The accessControlContexts is an optional parameter in an access-control-rule-tuple that contains a list, where each element of the list, when present, represents a context that is permitted to use this access control rule. Each request context is described by a set of parameters, where the types of the parameters can vary within the set. Table 4 describes the supported types of parameters in accessControlContexts. The following Originator accessControlContexts shall be considered for the access control policy (ACP) check by the CSE.

Table 4: Types of Parameters in accessControlContexts

[0036] The accessControlOperations is a mandatory parameter in an access-control-rule-tuple that represents the set of operations that are authorized using this access control rule. Table 5 describes the supported set of operations that are authorized by accessControlOperations.

[0037] The following accessControlOperations shall be considered for access control policy check by the CSE.

Table 5: Types of Parameters in accessControlOperations

Name       Description
RETRIEVE   Privilege to retrieve the content of an addressed resource
CREATE     Privilege to create a child resource
UPDATE     Privilege to update the content of an addressed resource
DELETE     Privilege to delete an addressed resource
DISCOVER   Privilege to discover the resource
NOTIFY     Privilege to receive a notification

SUMMARY

[0038] Disclosed herein are methods, systems, and devices that may be used for integration of data entity and semantic entity in semantic IoT systems. Discussed in more detail are: 1) A hierarchical layered architecture, which may include structures, such as a structure of hierarchical resource data layer over semantic layer with distributed semantic leaves or a structure of semantic layer with graph store over hierarchical resource data layer; and 2) a parallel architecture, which may include data entity and semantic entity exchanging messages to support operations, such as semantic operations that may be based on data entity and data, and resource management that may be based on semantic mash-up.

[0039] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not constrained to limitations that solve any or all disadvantages noted in any part of this disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0040] A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings wherein:

[0041] FIG. 1 illustrates the Semantic Web Stack;

[0042] FIG. 2 illustrates an exemplary Hierarchical Database Model;

[0043] FIG. 3 illustrates the oneM2M Architecture;

[0044] FIG. 4 illustrates exemplary Common Service Functions (CSFs) for oneM2M;

[0045] FIG. 5 illustrates exemplary Configurations Supported by the oneM2M Architecture;

[0046] FIG. 6 illustrates an exemplary Structure of the <semanticDescriptor> Resource in a Resource Tree;

[0047] FIG. 7 illustrates an exemplary Structure of <accessControlPolicy> Resource;

[0048] FIG. 8 illustrates an exemplary Architecture for Semantic Descriptors in a Centralized Semantic Graph Store;

[0049] FIG. 9 illustrates an exemplary Architecture for Semantic Descriptors in Multiple Semantic Graph Stores;

[0050] FIG. 10 illustrates an exemplary Architecture of Semantic Descriptors

Distributed in Hierarchical Resource Trees;

[0051] FIG. 11 illustrates an exemplary Architecture for Semantic Descriptors in a Semantic Graph Store and Hierarchical Resource Trees;

[0052] FIG. 12 illustrates an exemplary Semantic Descriptors in Centralized Semantic Graph Store;

[0053] FIG. 13 illustrates an exemplary Semantic Descriptors Distributed in a

Hierarchical Resource Tree;

[0054] FIG. 14 illustrates an exemplary Heterogeneous Logic Tree in oneM2M;

[0055] FIG. 15 illustrates an exemplary Hierarchical Layered Architecture;

[0056] FIG. 16 illustrates an exemplary Hierarchical Layered Architecture;

[0057] FIG. 21 illustrates an exemplary Parallel Architecture;

[0058] FIG. 22 illustrates an exemplary Semantic Operations based on Data Entity;

[0059] FIG. 23 illustrates an exemplary Data Resource Management based on Semantic Mash-Up;

[0060] FIG. 24 illustrates an exemplary Hierarchical Layered Architecture (Same CSE);

[0061] FIG. 25 illustrates an exemplary Hierarchical Layered Architecture (Different CSE);

[0062] FIG. 26 illustrates an exemplary display (e.g., graphical user interface) that may be generated based on the methods and systems discussed herein;

[0063] FIG. 27A is a system diagram of an example machine-to-machine (M2M) or Internet of Things (IoT) communication system in which the disclosed subject matter may be implemented;

[0064] FIG. 27B is a system diagram of an example architecture that may be used within the M2M / IoT communications system illustrated in FIG. 27A;

[0065] FIG. 27C is a system diagram of an example M2M / IoT terminal or gateway device that may be used within the communications system illustrated in FIG. 27A; and

[0066] FIG. 27D is a block diagram of an example computing system in which aspects of the communication system of FIG. 27A may be embodied.

DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS

[0067] Disclosed herein are methods, systems, and devices that may be used for integration of data entity and semantic entity in semantic IoT systems. Discussed in more detail are: 1) a hierarchical layered architecture, which includes structures such as a structure of hierarchical resource data layer over semantic layer with distributed semantic leaves and a structure of semantic layer with graph store over hierarchical resource data layer; and 2) a parallel architecture, which includes data entity and semantic entity exchanging messages to support operations, such as semantic operations that may be based on data entity and data, and resource management that may be based on semantic mash-up.

[0068] FIG. 8, FIG. 9, FIG. 10, and FIG. 11 are exemplary functional architectures that may be used for oneM2M or the like in order to enable semantics in M2M/IoT systems. FIG. 8 illustrates an exemplary architecture for semantic descriptors in a centralized semantic graph store. FIG. 9 illustrates an exemplary architecture for semantic descriptors in multiple semantic graph stores. FIG. 10 illustrates an exemplary architecture of semantic descriptors distributed in hierarchical resource trees. FIG. 11 illustrates an exemplary architecture of semantic descriptors in a semantic graph store and hierarchical resource trees. Although "enabling semantics" in general means "to add semantics to (IoT) data," it may refer to different levels of enablement. For example, a given set of data, e.g., a list of temperature readings stored in a <container> resource (which may be physically stored in a relational database on a CSE), may already be associated with certain semantic information, e.g., "temperature-readings-room1," which may already be utilized for resource discovery. For example, when a temperature control application retrieves the temperature readings from room 1, it may use the semantic information "temperature-readings-room1" to discover and retrieve what it needs.
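The annotation-based discovery described above can be sketched as a simple filter over a hypothetical resource tree (the URIs and attribute names are illustrative assumptions, not oneM2M resource definitions):

```python
# Hypothetical resource tree: each <container>-like resource carries a
# semantic annotation that a requesting application can match against.
resources = {
    "/cse/room1/temp": {"annotation": "temperature-readings-room1",
                        "data": [20.5, 21.0]},
    "/cse/room2/temp": {"annotation": "temperature-readings-room2",
                        "data": [19.0]},
}

def discover(annotation):
    """Return the URIs of resources whose semantic annotation matches."""
    return [uri for uri, r in resources.items()
            if r["annotation"] == annotation]

print(discover("temperature-readings-room1"))  # → ['/cse/room1/temp']
```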

[0069] However, since a conventional goal of enabling semantics is to support system interoperability and automatic machine operations, enabling semantics in oneM2M means adapting available technologies and standards for enabling semantics (e.g., using ontology modeling (RDFS/OWL) to define common terminologies and concepts, using RDF triples as a uniform semantic representation format, using the SPARQL query language to define semantic queries, etc.). Therefore, from a system development perspective, there may be individual subsystems to deal with data-related functionalities and semantics-related functionalities. For example, a data-related subsystem may include functions such as data storage, data lifetime management, and data analytics, which may be independent from the semantics subsystem, and vice versa for a semantics subsystem.

[0070] When considering conventional systems, questions arise with regard to which architectures may be adopted when building a semantics-enabled M2M/IoT system and how the data-related subsystem and semantic-related subsystem may interact and integrate with efficiency and flexibility. FIG. 12 and FIG. 13 are functional architectures discussed below to give context for illustrating potential issues.

[0071] As shown in FIG. 12, the semantic descriptions in the form of RDF triples (e.g., semantic descriptors) are deposited at a centralized RDF triple database (e.g., semantic graph store 201). For example, eHealthcare Application 202 associated with a doctor, heart monitor application 204 associated with Patient A, and blood pressure application 206 associated with Patient B may store their semantic descriptors into semantic graph store 201 (e.g., a centralized RDF triplestore). Then eHealthcare Application 202 may conduct a semantic query over the patients' semantic descriptors if the patients grant permission to eHealthcare Application 202 (or a similar application). Applications of the patients (e.g., heart monitor application 204 associated with Patient A and blood pressure application 206 associated with Patient B) may also conduct a semantic query over their own semantic descriptors and some semantic descriptors provided by or associated with eHealthcare Application 202, if the doctor grants permission to the applications associated with his patients. But the applications of the patients cannot conduct a semantic query over each other's semantic descriptors unless permission is granted. Furthermore, eHealthcare Application 202 may update or delete its semantic descriptors and possibly some of its patients' semantic descriptors, if the permissions are granted by the patients. Similarly, a patient may update or delete his or her own semantic descriptors and possibly some of the semantic descriptors associated with the doctor, if permission is granted by the doctor.

[0072] In the centralized semantic graph store architecture, the data subsystem may act more like data ingestion infrastructure, in the sense that once the data has been re-represented in the uniform RDF format and stored in the semantic graph store, the data is exposed to the external world in the format compliant with the semantic subsystem (e.g., RDF triples). Advanced functionalities are then expected to be conducted over the semantic subsystem (e.g., semantic query, semantic mashup, semantic reasoning, etc.). In other words, there is a single-direction exchange from the data-related subsystem to the semantic-related subsystem. However, as mentioned herein: 1) semantic-related functionalities (e.g., semantic mash-up) may produce new RDF triples, which may further result in generating new data in the data subsystem (e.g., a virtual "weather report" device may be produced by a semantic mashup operation in the semantic subsystem); and 2) there may also be other non-semantics-related functionalities in the data subsystem, such as data storage, data lifecycle management, data analytics, etc. Therefore, these non-semantics-related functionalities pose a potential design issue: integrating the data subsystem and semantics subsystem together without overlooking their own functionalities and potential dynamic interactions.

[0073] As shown in FIG. 13, both data and semantic descriptions, whether in the form of RDF triples (e.g., semantic descriptors) or not, are deposited in a relational database, e.g., a hierarchical resource tree 208. For example, eHealthcare Application 202, heart monitor application 204 associated with Patient A, and blood pressure application 206 associated with Patient B store their data and semantic descriptors into a relational database or a hierarchical resource tree 208. Semantic descriptor 211 describes semantic information related to eHealthcare Application 202, semantic descriptor 212 is for a data container storing data related to eHealthcare Application 202, and semantic descriptor 213 is for a specific data instance stored in the previously-mentioned data container. Later, the doctor may conduct a semantic search using semantic descriptions or a semantic query over his patients' semantic descriptors in the resource tree, if the patients grant permission to the doctor. And the associated applications of the patients may conduct a semantic search using semantic descriptions or a semantic query over their own semantic descriptors and some of the doctor's semantic descriptors, if the doctor grants his patients permission to the resource tree 208. The applications of the patients cannot conduct a semantic query over each other's semantic descriptors if no permission is granted. Furthermore, the doctor may update or delete his semantic descriptors and possibly some of his patients' semantic descriptors, if the permissions are granted by his patients. A patient may update or delete his or her own semantic descriptors and possibly some of the doctor's, if permission is granted by the doctor.

[0074] Unlike the centralized triplestore architecture, the distributed architecture makes the data subsystem and semantic subsystem highly coupled, in that the data in the data subsystem are directly associated with semantic descriptors, which are the elements to be used in the semantic subsystem. As a result, many of the semantic subsystem functionalities, such as semantic query or semantic reasoning, are difficult to implement due to the lack of a semantic triplestore in the semantic subsystem.

[0075] Overall, the logical resource tree specified in oneM2M is a heterogeneous tree containing two different database structures, i.e., a hierarchical relational database, which belongs to the data subsystem, and linked data graph stores (e.g., RDF triples 209 and Ontology 210), which belong to the semantic subsystem, as shown in FIG. 14.

[0076] Based on the discussion of the use cases associated with FIG. 12 and FIG. 13, for an intelligent IoT system, there may be different design principles or architectures with semantics enabled. Each design may address issues such as how the data-related subsystem and semantic-related subsystem interact and are integrated, which has not been fully addressed by conventional approaches. Discussed below in more detail are methods, systems, and devices that may be used for integration of data entity and semantic entity in semantic IoT systems.

[0077] For further context for the subject matter discussed herein, a semantic description is in an RDF-triple-like format (e.g., a Subject-Predicate-Object (S-P-O) relationship description). SPARQL, as used for RDF triples, is an example query language. The solutions may be generalized to other types of semantics expressions and semantics query languages for linked data. Data is a general term, which may be a record in a Resource Tree, such as data samples or context information (e.g., metadata).

[0078] Data entity and semantic entity as discussed herein (e.g., FIG. 15, FIG. 16, or FIG. 21) may be logical entities. From an implementation perspective, the data entity and semantic entity may both be realized on the same physical CSE. The data entity disclosed herein refers to the resource tree-related software modules or functionalities that store, represent, or operate on data in a traditional way, such as traditional CRUD (CREATE, RETRIEVE, UPDATE, and DELETE) resource manipulations, normal resource discovery, or other operations such as those defined in the oneM2M standard. In an example with regard to the data entity, it may just store data in its raw format (e.g., a single numerical temperature value) and represent those data through normal resources. By comparison, the semantic entity refers to an additional software module or functionalities that store, represent, and operate on data "in a more advanced way," i.e., in a semantic way. In an example with regard to the semantic entity, the data may be represented in an RDF format (not just in a raw format) and stored in a semantic repository or triplestore. Semantic-related functionalities that may be supported in the semantic entity may include semantic query, semantic mashup, or semantic reasoning, among other things. Note that those semantic-related functionalities are not usually supported or implemented in the data entity.
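The contrast between the two logical entities can be sketched briefly in code. The following Python fragment is illustrative only; the resource path and the `ex:` vocabulary are hypothetical and not defined by oneM2M or any RDF standard.

```python
# Illustrative sketch: contrast how a data entity and a semantic entity
# might represent the same temperature reading.

# Data entity: raw value stored under a normal resource path.
data_resources = {
    "/cse/room1/temperature/latest": 21.5,  # raw numeric value
}

def represent_semantically(resource_uri, value):
    """Re-represent a raw reading as RDF-style S-P-O triples (hypothetical vocabulary)."""
    return [
        (resource_uri, "rdf:type", "ex:TemperatureReading"),
        (resource_uri, "ex:hasValue", str(value)),
        (resource_uri, "ex:locatedIn", "ex:room1"),
    ]

# Semantic entity: the same reading exposed as triples in a triplestore.
triplestore = []
for uri, value in data_resources.items():
    triplestore.extend(represent_semantically(uri, value))
```

Note how the subject of each triple is the resource URI itself, which is one way the two representations can stay linked.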

[0079] Access control policy is used herein as an example to illustrate the mechanisms in the different architectures; the proposed architectures and design principles in this disclosure are not limited to the access control case.

An element or node in a semantic triple (e.g., the S, P, or O of a triple S-P-O) may be addressed by a unique identifier, e.g., an Internationalized Resource Identifier (IRI) or Uniform Resource Identifier (URI), in a semantic triple statement. URI is used herein, but an IRI or another identifier such as a URL is also applicable in the proposed mechanisms. An element or node in an RDF triple S-P-O may be represented by a URI, a blank node label, or a Unicode string literal.

[0080] FIG. 15 and FIG. 16 illustrate exemplary hierarchical layered architectures. As shown in FIG. 15 and FIG. 16, data resources and semantic triples may be integrated in a hierarchical layered architecture, with the data layer containing the data resources and the semantic layer containing the semantic triples. Here, hierarchical refers to the upper layer being in a dominant position and acting as the interface exposed to other entities. The upper layer (e.g., Data Layer 214 in FIG. 15, Semantic Layer 215 in FIG. 16) controls and manages the interactions between these two layers, as well as the access control, index mapping, query translation, etc. The lower layer (e.g., Semantic Layer 215 in FIG. 15, Data Layer 214 in FIG. 16) supports the upper layer with semantic graphs as shown in FIG. 15 or raw data as shown in FIG. 16. The layers may reside on different Service Entities, e.g., different CSEs in the oneM2M architecture, but integration on the same Service Entity, e.g., on a CSE in the oneM2M architecture, may be more efficient. The interface between the two layers may be external if the layers reside on different Service Entities, e.g., Mcc/Mcc' in the oneM2M architecture, or internal if the layers are integrated on the same Service Entity, e.g., an internal interface between CSFs.

[0081] With reference to FIG. 15, as shown, data layer 214 (i.e., the data entity) is on top (in the upper layer) and therefore the external entities interact with data layer 214, which acts as the interface. In other words, data layer 214 is in a dominant position. When a semantic feature is needed (e.g., semantic query), an external entity (e.g., CSE 256 of FIG. 24A) may still need to first interact with data layer 214 by submitting a request (e.g., a request carrying a semantic query). If CSE 256 or another service layer node implements data layer 214 and semantic layer 215 of FIG. 15, in which data layer 214 acts as the interface, other external entities (e.g., a node other than CSE 256) exchange messages with CSE 256 through data layer 214 as the interface.

[0082] Data layer 214 of FIG. 15 may further work with semantic layer 215 (i.e., the semantic entity) on the lower layer, in which a semantic query will be processed by the triplestore belonging to semantic layer 215. In some instances, further processing may be needed. In an example, if the semantic query statement submitted by the external entity is not in the correct format, data layer 214 may first do a query translation before sending it to semantic layer 215. Here, data layer 214 may do some pre-processing (e.g., format translation) in order for the query to be in a desired format for semantic layer 215; the pre-processing is not necessarily a semantic feature. The processing flow of a request message or other message in FIG. 15 is through data layer 214 first (when arriving from an external entity) and then, subsequently, semantic layer 215.

[0083] With continued reference to FIG. 15, when data layer 214 is acting as the interface, the typical interface may be a RESTful resource tree representation, and CRUD operations on those resources may trigger various actions of data layer 214, which is resource-oriented. In general, when data layer 214 is acting as the interface, it usually follows the resource-oriented approach (similar to how the oneM2M service layer works).

[0084] FIG. 17 illustrates an exemplary method in view of FIG. 15. At step 261, an external entity (e.g., eHealthcare App 202 on another device) sends a resource access request to data layer 214, which may be located on CSE 256. At step 262, data layer 214 directly evaluates the access control policy stored in data layer 214, in order to determine whether the resource access operation is allowed. At step 263, if the access is allowed, the resource will be accessed. At step 264, data layer 214 sends back the response message regarding this resource access request to eHealthcare App 202.
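Steps 261-264 can be sketched as follows. This is a minimal, non-normative Python illustration; the policy table shape, resource paths, and application name are assumptions, not oneM2M-defined structures.

```python
# Hypothetical sketch of the FIG. 17 flow: the data layer itself evaluates a
# locally stored access control policy before touching the resource.

ACCESS_CONTROL_POLICY = {  # policy stored in the data layer (assumed shape)
    "/cse/patientA/heartRate": {"eHealthcareApp"},
}
RESOURCES = {"/cse/patientA/heartRate": 72}

def handle_resource_access(originator, target):
    # Step 262: evaluate the access control policy stored in the data layer.
    if originator not in ACCESS_CONTROL_POLICY.get(target, set()):
        return {"status": "DENIED"}  # step 264: response without resource access
    # Step 263: access is allowed, so the resource is accessed.
    return {"status": "OK", "content": RESOURCES[target]}  # step 264: response

# Step 261: an external entity (e.g., an eHealthcare app) sends the request.
response = handle_resource_access("eHealthcareApp", "/cse/patientA/heartRate")
```

The key point of the sketch is that no message ever reaches the semantic layer: the data layer answers the request entirely on its own.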

[0085] FIG. 18 illustrates another exemplary method in view of FIG. 15. At step 271, an external entity (e.g., eHealthcare App 202) sends a request to data layer 214, in which a semantic query request is included. At step 272, after receiving the request of step 271, data layer 214 works with semantic layer 215 (on the lower layer). In particular, some pre-processing may be needed at data layer 214 (e.g., if the query statement submitted by the user is not in a desired format/form for the semantic entity, data layer 214 may first need to do a query statement translation or transformation before sending it to semantic layer 215). At step 273, data layer 214 sends the semantic query to semantic layer 215. At step 274, semantic layer 215 processes the semantic query, produces the result of the semantic query, and returns the result to data layer 214. At step 275, data layer 214 sends back a response message to eHealthcare App 202 regarding this semantic query request, in which the semantic query result is included.
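The translate-then-forward pattern of steps 272-275 can be sketched as below. The "translation rule" and query syntax are invented for illustration; a real deployment would translate into SPARQL for an actual triplestore.

```python
# Minimal sketch of the FIG. 18 flow (assumed formats, not oneM2M-normative):
# the data layer pre-processes a query before handing it to the semantic layer.

TRIPLESTORE = [
    ("ex:room1", "ex:hasSensor", "ex:tempSensor1"),
]

def semantic_layer_query(sparql_like):
    """Step 274: toy 'SPARQL' evaluation -- match triples on a single predicate."""
    predicate = sparql_like.split("?p=")[1]
    return [t for t in TRIPLESTORE if t[1] == predicate]

def data_layer_handle(request):
    query = request["semanticQuery"]
    # Step 272: pre-processing -- translate a bare predicate into the query
    # form the semantic layer expects (translation rule is invented here).
    if not query.startswith("SELECT"):
        query = "SELECT ?p=" + query
    # Steps 273-275: forward to the semantic layer and relay the result.
    return {"status": "OK", "result": semantic_layer_query(query)}

response = data_layer_handle({"semanticQuery": "ex:hasSensor"})
```

The design choice illustrated is that the external entity never talks to the triplestore directly; the data layer mediates and normalizes every query.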

[0086] With reference to FIG. 16, as shown, semantic layer 215 (i.e., a semantic entity) is on the top (in the upper layer) and therefore the external entities interact with semantic layer 215, which acts as the interface. In this scenario, the ACP-related information may be duplicated in the triplestore (e.g., ACP-related information is directly available in the semantic entity), so access control may be directly handled by semantic layer 215. In other words, the semantic layer is in a dominant position. When some of the information that was originally in data layer 214 becomes also available in the triplestore, various operations may also be realized by the triplestore. For example, conventional resource discovery requires traversal of the resource tree in data layer 214, but as disclosed herein semantic resource discovery may be done directly in the triplestore in semantic layer 215. In other words, the triplestore may directly return the discovered resource list if the triples stored in the triplestore are linked with the involved resources in data layer 214 through index mapping (one way of doing index mapping is to use resource URIs in the triples). The processing flow of a request message or other message in FIG. 16 is through semantic layer 215 first (when arriving from an external entity) and then, subsequently, data layer 214.
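The index-mapping idea (using resource URIs as triple subjects) can be sketched as follows. The URIs and `ex:` type names are hypothetical; the point is only that discovery can then be answered from the triplestore without traversing the resource tree.

```python
# Sketch of index mapping (assumed convention): triples embed resource URIs
# as subjects, so semantic resource discovery can be answered from the
# triplestore alone, without traversing the resource tree in the data layer.

TRIPLES = [
    ("/cse/room1/tempSensor", "rdf:type", "ex:TemperatureSensor"),
    ("/cse/room2/humSensor", "rdf:type", "ex:HumiditySensor"),
]

def semantic_resource_discovery(type_filter):
    """Return the URIs of resources whose type triple matches the filter."""
    return [s for (s, p, o) in TRIPLES
            if p == "rdf:type" and o == type_filter]

discovered = semantic_resource_discovery("ex:TemperatureSensor")
```

Because each subject is itself a resource URI, the discovery result is directly usable for subsequent resource access in the data layer.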

[0087] With continued reference to FIG. 16, when semantic layer 215 is acting as the interface, an external entity may directly interact with semantic layer 215, which may support other interaction approaches, not just the resource-oriented way mentioned with regard to FIG. 15. For example, a triplestore may expose itself as a "web service" such that external entities may interact with it in a service-oriented way. In another example, the triplestore may publish its access portal information, which may be just an IP address and port number.

[0088] FIG. 19 illustrates an exemplary method in view of FIG. 16. At step 281, an external entity (e.g., eHealthcare App 202) sends a resource access request to semantic layer 215. At step 282, semantic layer 215 directly evaluates access control policy stored in the triplestore, in order to determine whether the resource access request is allowed. At step 283, if the access is allowed, semantic layer 215 forwards the resource access request to data layer 214 in the lower layer, where the resource will be accessed. At step 284, data layer 214 sends back the resource access result to semantic layer 215. At step 285, semantic layer 215 then sends back the response message to eHealthcare App 202 regarding the resource access request.
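Steps 281-285 can be sketched as below. The ACP-as-triples vocabulary (`acp:grantsTo`, `acp:appliesTo`) is invented for illustration; it stands in for whatever ACP duplication scheme a real system would use.

```python
# Sketch of the FIG. 19 flow: ACP information duplicated into the triplestore
# (vocabulary is invented), so the semantic layer can evaluate access itself
# before forwarding the request to the data layer.

ACP_TRIPLES = [
    ("acp:rule1", "acp:grantsTo", "eHealthcareApp"),
    ("acp:rule1", "acp:appliesTo", "/cse/patientB/bloodPressure"),
]
DATA_LAYER = {"/cse/patientB/bloodPressure": "120/80"}

def acp_allows(originator, target):
    # Step 282: evaluate the access control policy stored as triples.
    rules = {s for (s, p, o) in ACP_TRIPLES
             if p == "acp:grantsTo" and o == originator}
    return any((s, "acp:appliesTo", target) in ACP_TRIPLES for s in rules)

def semantic_layer_handle(originator, target):
    if not acp_allows(originator, target):
        return {"status": "DENIED"}
    # Steps 283-285: forward to the data layer, then relay the result.
    return {"status": "OK", "content": DATA_LAYER[target]}

response = semantic_layer_handle("eHealthcareApp", "/cse/patientB/bloodPressure")
```

Compared with the FIG. 17 flow, the access decision has moved from the data layer into the triplestore, which is the defining feature of this architecture.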

[0089] FIG. 20 illustrates another exemplary method in view of FIG. 16. At step 291, an external entity (e.g., eHealthcare App 202) sends a semantic resource discovery request to semantic layer 215. At step 292, semantic layer 215 directly evaluates the semantic resource discovery request by using the information stored in the triplestore, in order to find a match. At step 293, if a match is found, semantic layer 215 further decides which resources are related to this match. It should be understood that the triples stored in the triplestore may be linked with the involved resources in data layer 214 through index mapping. For example, an index mapping may use resource URIs in the triples. The corresponding URIs of discovered resources may be the result of the semantic resource discovery operation. At step 294, semantic layer 215 then sends back the response message to eHealthcare App 202 regarding the semantic resource discovery request, in which the result of the semantic resource discovery request is included.

[0090] FIG. 21 illustrates an exemplary parallel architecture. As shown in FIG. 21, a parallel architecture may have a data entity 231 and a semantic entity 233. Data entity 231 may include data resources and a local temporary semantic graph store 235 used for semantic triples under the semantic descriptors distributed in hierarchical resource trees. Data entity 231 may also include other data analytical functions, such as data analytics, data annotation 236, etc. If data entity 231 does not have the semantic capabilities, then it may publish data to semantic entity 233 via the data annotation 236 function. Data annotation 236 to the graph store 238 in semantic entity 233 is realized by a local RDBMS 237 and other semantic mapping and reasoning functions, which are not detailed in FIG. 21.

[0091] With continued reference to FIG. 21, semantic entity 233 contains a central graph store 238 and semantic functions such as semantic reasoning 239, semantic annotation 230, semantic mash-up 229, etc. Data entity 231 and semantic entity 233 are integrated via the interface between them, e.g., the Mcc/Mcc' interface in the oneM2M architecture. Data entity 231 and semantic entity 233 may reside on different service entities (e.g., different CSEs in the oneM2M architecture). Also, data entity 231 and semantic entity 233 may each have its own control and management of access control, mapping, updates, etc. The RESTful operations such as CREATE, RETRIEVE, UPDATE, and DELETE may be conducted via the interface between these two entities to support functions, in data entity 231 or semantic entity 233, such as semantic annotation 230, data annotation 236, semantic mash-up 229, and semantic publication 232.
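A request crossing that interface might take the following shape. The field names loosely mirror oneM2M request-primitive conventions (`op`, `to`, `fr`-style fields) but are simplified and hypothetical here, as are the target paths.

```python
# Illustrative (non-normative) shape of a RESTful request crossing the
# interface between data entity and semantic entity, e.g. data annotation
# publishing new triples into the semantic entity's graph store.

def make_request(operation, to, frm, content, request_id):
    return {
        "op": operation,   # CREATE / RETRIEVE / UPDATE / DELETE
        "to": to,          # target on the receiving entity
        "from": frm,       # originator ID
        "rqi": request_id, # request identifier
        "pc": content,     # primitive content, e.g. triples to publish
    }

annotation_request = make_request(
    "CREATE",
    to="/semanticEntity/graphStore",
    frm="/dataEntity",
    content=[("/cse/room1/temp", "ex:hasUnit", "ex:Celsius")],
    request_id="req-0001",
)
```

The same envelope can carry any of the functions named above (semantic annotation, data annotation, semantic mash-up, semantic publication) by varying the operation and content.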

[0092] It is understood herein that the entities performing the steps illustrated herein, such as in FIG. 22 - FIG. 23, may be logical entities. The steps may be stored in a memory of, and executing on a processor of, a device, server, or computer system such as those illustrated in FIG. 27C or FIG. 27D. In an example, with further detail below with regard to the interaction of M2M devices, data entity 231 of FIG. 22 may reside on M2M terminal device 18 of FIG. 27A, while semantic entity 233 of FIG. 22 may reside on M2M gateway device 14 of FIG. 27A. Semantic entity 233 and data entity 231 may also reside on different M2M gateway devices 14 of FIG. 27A. Other configurations are contemplated.

[0093] FIG. 22 and FIG. 23 are related to FIG. 21. For example, FIG. 22 illustrates what the arrows in FIG. 21 mean (e.g., the arrow between semantic publication 232 and graph store 238). FIG. 22 illustrates how a feature (e.g., data analytics) of data entity 231 (i.e., the data layer) may affect semantic entity 233 (i.e., the semantic layer). For example, data analytics features in data entity 231 may need to add new triples to the triplestore in semantic entity 233 through semantic publication 232.

[0094] FIG. 22 illustrates exemplary semantic operations based on data entity 231, i.e., semantic operations between data entity 231 and semantic entity 233. In this procedure, data entity 231 stores/hosts data, while semantic entity 233 stores/hosts semantic triples. Data entity 231 and semantic entity 233 have a peer-to-peer relationship. This procedure allows data entity 231 to conduct semantic operations on semantic entity 233 based on, but not limited to, mining and analyzing the data stored at data entity 231. The semantic operations that data entity 231 can perform include: 1) adding or creating new semantic triples at semantic entity 233; 2) updating existing semantic triples stored at semantic entity 233; or 3) removing existing semantic triples maintained at semantic entity 233. In addition, semantic entity 233 may actively send a request to data entity 231 to solicit semantic operations from it.

[0095] With reference to FIG. 22, at step 241 there may be a semantic operation solicitation request. The request of step 241 may be addressed to the target data (or just an Originator ID). By addressing the target data, semantic entity 233 asks data entity 231 to perform one-time or periodic semantic operations later, once certain analysis on the target data finds or deduces new semantic triples. The originator ("From") of the request of step 241 may be the ID of semantic entity 233. The operation of the request of step 241 may be a CREATE (e.g., to create a virtual resource to trigger the operations in step 243 and after). Parameters in the request of step 241 may include RequestID and ContentFormat. Content of the request of step 241 may include the address for receiving new semantic triples or the time duration within which data entity 231 is allowed to perform semantic operations, among other things.

[0096] At step 242, there may be a semantic operation solicitation response that may include a receiverID (e.g., in the "To" field), an originatorID (e.g., in the "From" field), RequestStatus and RequestID (e.g., in the "Parameters" field), or content of true or false. At step 243, there may be data analysis and semantic preparation. Data entity 231 may deduce new semantic triples from hosted data via data mining and analytics (e.g., clustering, classification, and association rules) to be added or updated in semantic entity 233. In addition, data entity 231 may determine old semantic triples to be removed. At step 244, there may be a semantic operation request sent. The request of step 244 may include an address of the target semantic triple (e.g., in the "To" field), an originatorID (e.g., a CSE ID in the "From" field), an operation (e.g., CREATE, UPDATE, or DELETE in the "Operation" field), parameters (e.g., RequestID, ContentFormat, FilterCriteria, ResultContent), or content (e.g., semantic triples for a CREATE or UPDATE operation).

[0097] With continued reference to FIG. 22, at step 245, semantic entity 233 may process the semantic operations. Semantic entity 233 may validate the semantic operation requested in step 244 according to the access control policy rules associated with the target semantic triple. If the semantic operation in step 244 is CREATE, semantic entity 233 creates new semantic triples and composes a response with the CREATE results. If the semantic operation in step 244 is UPDATE, it updates the corresponding semantic triples and composes a response with the UPDATE results. If the semantic operation in step 244 is DELETE, it removes the corresponding semantic triples and composes a response with the DELETE results. At step 246, there may be a semantic operation response. For a CREATE operation, the response may include the address of the semantic triples created in step 245.
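The step-245 processing can be sketched as a small dispatcher over an in-memory triplestore. The request shape and the UPDATE convention (content as old/new triple pairs) are assumptions made for illustration, not part of the procedure as specified.

```python
# Sketch of step 245's processing (assumed in-memory triplestore): after
# validation, apply CREATE / UPDATE / DELETE to the stored semantic triples.

def process_semantic_operation(store, request):
    op, triples = request["operation"], request.get("content", [])
    if op == "CREATE":
        store.extend(triples)              # add the new triples
        return {"status": "OK", "created": triples}
    if op == "UPDATE":
        # Invented convention: content carries (old_triple, new_triple) pairs.
        for old, new in triples:
            store[store.index(old)] = new
        return {"status": "OK"}
    if op == "DELETE":
        for t in triples:
            store.remove(t)                # remove the corresponding triples
        return {"status": "OK"}
    return {"status": "BAD_REQUEST"}

store = []
# Step 244/245: data entity publishes a triple deduced by its data analytics.
process_semantic_operation(store, {
    "operation": "CREATE",
    "content": [("ex:patientA", "ex:hasAvgHeartRate", "68")],
})
```

Access control validation (checking the request against the ACP rules for the target triple) would precede this dispatch in a fuller implementation.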

[0098] When considering FIG. 23, reference may be made to FIG. 21 and the arrows, such as arrow 247 from the semantic mash-up box. The method of FIG. 23 illustrates how a feature of semantic entity 233 (e.g., semantic mashup) may affect data entity 231. For example, the result of a semantic mashup operation in semantic entity 233 may trigger the creation of new resources in data entity 231.

[0099] FIG. 23 illustrates exemplary data resource management between data entity 231 and semantic entity 233 based on semantic mash-up at semantic entity 233. In this procedure, data entity 231 (the originator) stores data, while semantic entity 233 (the receiver) stores semantic triples. Semantic entity 233 (the receiver) hosts the semantic-related facility, such as the triplestore, and can support semantic-related features. Data entity 231 and semantic entity 233 may have a peer-to-peer relationship. This procedure allows the receiver (e.g., semantic entity 233) to create/update/delete a new resource (e.g., a virtual service, a virtual function, a virtual device, or a virtual application) at the originator (e.g., data entity 231). In other words, semantic entity 233 may first conduct a semantic mash-up operation and determine new resources to be created/updated/deleted (step 251); then, it sends a request message to data entity 231 to create/update/delete the corresponding new resource at data entity 231 (step 252).

[00100] With reference to FIG. 23, at step 251, there may be semantic mash-up or reasoning processing. Semantic entity 233 may perform semantic mash-up or reasoning based on the stored semantic triples. The result may be a new resource (e.g., a new virtual service, a new virtual function, a new virtual device, or a new virtual application) to be created at data entity 231. Such a new resource may be associated with multiple existing resources maintained at data entity 231. At step 252, a resource operation request may be sent. The resource operation request of step 252 may include: 1) To: address of the target resource (or OriginatorID); 2) From: ReceiverID; 3) Operation (e.g., CREATE/UPDATE/DELETE); 4) Parameters: RequestID, ContentFormat, RequestType, or the like; or 5) Content (e.g., the new resource ID, the URIs of existing resources associated with the new resource to be created, or other attributes of the new resource). The RequestType parameter may indicate that the request message of step 252 is based on the mash-up results of step 251. At step 253, there may be resource operation processing. Data entity 231 may validate the resource operation requested in step 252 according to the access control policy rules associated with the target resource. For example, if the resource operation in step 252 is CREATE, it creates the new resource. If the resource operation in step 252 is UPDATE, it updates the corresponding resource. If the resource operation in step 252 is DELETE, it removes the requested resource. At step 254, there may be a resource operation response. The resource operation response of step 254 may include: 1) To: Receiver ID; 2) From: Originator ID; 3) Parameters (e.g., RequestStatus, RequestID); or 4) Content: True/False. With regard to content associated with the CREATE operation, the response may include the address of the new resources created in step 253.
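Steps 251-253 can be sketched as below. The mash-up rule (combining a temperature sensor and a humidity sensor into a virtual weather device, echoing the "weather report" example earlier in this disclosure) and all URIs are invented for illustration.

```python
# Sketch of the FIG. 23 flow: a semantic mash-up over stored triples yields a
# new virtual resource that the semantic entity then asks the data entity to
# create (all names and the mash-up rule are hypothetical).

TRIPLES = [
    ("/cse/tempSensor", "rdf:type", "ex:TemperatureSensor"),
    ("/cse/humSensor", "rdf:type", "ex:HumiditySensor"),
]

def semantic_mashup(triples):
    """Step 251: if both sensor types exist, derive a virtual 'weather' device."""
    types = {o for (_, p, o) in triples if p == "rdf:type"}
    if {"ex:TemperatureSensor", "ex:HumiditySensor"} <= types:
        return {"newResourceId": "/cse/virtualWeatherDevice",
                "linkedResources": [s for (s, _, _) in triples]}
    return None

def data_entity_create(resource_tree, request):
    """Step 253: the data entity creates the new resource and links it."""
    resource_tree[request["newResourceId"]] = request["linkedResources"]
    return {"status": "OK", "address": request["newResourceId"]}

tree = {}
mashup_result = semantic_mashup(TRIPLES)             # step 251
response = data_entity_create(tree, mashup_result)   # steps 252-253
```

This illustrates the reverse direction of influence relative to FIG. 22: here the semantic entity's processing drives resource creation in the data entity.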

[00101] FIG. 24A, FIG. 24B, and FIG. 25 illustrate exemplary oneM2M architectures that implement the subject matter discussed herein. FIG. 24A illustrates an exemplary hierarchical layered architecture integrated on the same CSE. FIG. 24B illustrates an exemplary hierarchical layered architecture integrated on different CSEs. FIG. 25 illustrates an exemplary parallel architecture.

[00102] FIG. 26 illustrates an exemplary display (e.g., graphical user interface) that may be generated based on the methods and systems discussed herein. Display interface 901 (e.g., a touch screen display) may provide text in block 902 associated with integrating data entity and semantic entity in semantic IoT systems, such as the parameters of method steps such as RequestType, ContentFormat, ReceiverID, operation, or the like. In another example, progress of any of the steps (e.g., sent messages or success of steps) discussed herein may be displayed in block 902. In addition, graphical output 903 may be displayed on display interface 901. Graphical output 903 may be the topology of the devices, a graphical output of the progress of any method or systems discussed herein, or the like.

[00103] FIG. 27A is a diagram of an example machine-to-machine (M2M), Internet of Things (IoT), or Web of Things (WoT) communication system 10 in which one or more disclosed concepts associated with integrating data entity and semantic entity in semantic IoT systems may be implemented. Generally, M2M technologies provide building blocks for the IoT/WoT, and any M2M device, M2M gateway, or M2M service platform may be a component of the IoT/WoT as well as an IoT/WoT service layer, etc.

[00104] As shown in FIG. 27A, the M2M/IoT/WoT communication system 10 includes a communication network 12. The communication network 12 may be a fixed network (e.g., Ethernet, Fiber, ISDN, PLC, or the like), a wireless network (e.g., WLAN, cellular, or the like), or a network of heterogeneous networks. For example, the communication network 12 may comprise multiple access networks that provide content such as voice, data, video, messaging, broadcast, or the like to multiple users. For example, the communication network 12 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), and the like. Further, the communication network 12 may comprise other networks such as a core network, the Internet, a sensor network, an industrial control network, a personal area network, a fused personal network, a satellite network, a home network, or an enterprise network, for example.

[00105] As shown in FIG. 27A, the M2M/IoT/WoT communication system 10 may include the Infrastructure Domain and the Field Domain. The Infrastructure Domain refers to the network side of the end-to-end M2M deployment, and the Field Domain refers to the area networks, usually behind an M2M gateway. The Field Domain includes M2M gateways 14 and terminal devices 18. It will be appreciated that any number of M2M gateway devices 14 and M2M terminal devices 18 may be included in the M2M/IoT/WoT communication system 10 as desired. Each of the M2M gateway devices 14 and M2M terminal devices 18 is configured to transmit and receive signals via the communication network 12 or direct radio link. The M2M gateway device 14 allows wireless M2M devices (e.g., cellular and non-cellular) as well as fixed network M2M devices (e.g., PLC) to communicate either through operator networks, such as the communication network 12, or direct radio link. For example, the M2M devices 18 may collect data and send the data, via the communication network 12 or direct radio link, to an M2M application 20 or other M2M devices 18. The M2M devices 18 may also receive data from the M2M application 20 or an M2M device 18. Further, data and signals may be sent to and received from the M2M application 20 via an M2M service layer 22, as described below. M2M devices 18 and gateways 14 may communicate via various networks including cellular, WLAN, WPAN (e.g., Zigbee, 6LoWPAN, Bluetooth), direct radio link, and wireline, for example.

[00106] Referring to FIG. 27B, the illustrated M2M service layer 22 in the field domain provides services for the M2M application 20, M2M gateway devices 14, M2M terminal devices 18, and the communication network 12. It will be understood that the M2M service layer 22 may communicate with any number of M2M applications, M2M gateway devices 14, M2M terminal devices 18, and communication networks 12 as desired. The M2M service layer 22 may be implemented by one or more servers, computers, or the like. The M2M service layer 22 provides service capabilities that apply to M2M terminal devices 18, M2M gateway devices 14, and M2M applications 20. The functions of the M2M service layer 22 may be implemented in a variety of ways, for example as a web server, in the cellular core network, in the cloud, etc.

[00107] Similar to the illustrated M2M service layer 22, there is the M2M service layer 22' in the Infrastructure Domain. M2M service layer 22' provides services for the M2M application 20' and the underlying communication network 12' in the infrastructure domain. M2M service layer 22' also provides services for the M2M gateway devices 14 and M2M terminal devices 18 in the field domain. It will be understood that the M2M service layer 22' may communicate with any number of M2M applications, M2M gateway devices, and M2M terminal devices. The M2M service layer 22' may interact with a service layer of a different service provider. The M2M service layer 22' may be implemented by one or more servers, computers, virtual machines (e.g., cloud/compute/storage farms, etc.), or the like.

[00108] Referring also to FIG. 27B, the M2M service layers 22 and 22' provide a core set of service delivery capabilities that diverse applications and verticals can leverage. These service capabilities enable M2M applications 20 and 20' to interact with devices and perform functions such as data collection, data analysis, device management, security, billing, service/device discovery, etc. Essentially, these service capabilities free the applications of the burden of implementing these functionalities, thus simplifying application development and reducing cost and time to market. The service layers 22 and 22' also enable M2M applications 20 and 20' to communicate through various networks 12 and 12' in connection with the services that the service layers 22 and 22' provide.

[00109] In some examples, M2M applications 20 and 20' may include desired applications that use integrating data entity and semantic entity, as discussed herein. The M2M applications 20 and 20' may include applications in various industries such as, without limitation, transportation, health and wellness, connected home, energy management, asset tracking, and security and surveillance. As mentioned above, the M2M service layer, running across the devices, gateways, and other servers of the system, supports functions such as, for example, data collection, device management, security, billing, location tracking/geofencing, device/service discovery, and legacy systems integration, and provides these functions as services to the M2M applications 20 and 20'.

[00110] The integration of data entity and semantic entity in the present application may be implemented as part of a service layer. The service layer is a middleware layer that supports value-added service capabilities through a set of application programming interfaces (APIs) and underlying networking interfaces. An M2M entity (e.g., an M2M functional entity such as a device, gateway, or service/platform that is implemented on hardware) may provide an application or service. Both ETSI M2M and oneM2M use a service layer that may contain the integrating data entity and semantic entity of the present application. ETSI M2M's service layer is referred to as the Service Capability Layer (SCL). The SCL may be implemented within an M2M device (where it is referred to as a device SCL (DSCL)), a gateway (where it is referred to as a gateway SCL (GSCL)), and/or a network node (where it is referred to as a network SCL (NSCL)). The oneM2M service layer supports a set of Common Service Functions (CSFs) (i.e., service capabilities). An instantiation of a set of one or more particular types of CSFs is referred to as a Common Services Entity (CSE), which can be hosted on different types of network nodes (e.g., infrastructure node, middle node, application-specific node). Further, the integration of data entity and semantic entity of the present application can be implemented as part of an M2M network that uses a Service Oriented Architecture (SOA) and/or a resource-oriented architecture (ROA) to access services such as the integrating data entity and semantic entity of the present application.

[00111] As discussed herein, the service layer may be a functional layer within a network service architecture. Service layers are typically situated above the application protocol layer, such as HTTP, CoAP, or MQTT, and provide value-added services to client applications. The service layer also provides an interface to core networks at a lower resource layer, such as, for example, a control layer and transport/access layer. The service layer supports multiple categories of (service) capabilities or functionalities, including service definition, service runtime enablement, policy management, access control, and service clustering. Recently, several industry standards bodies, e.g., oneM2M, have been developing M2M service layers to address the challenges associated with the integration of M2M types of devices and applications into deployments such as the Internet/Web, cellular, enterprise, and home networks. An M2M service layer can provide applications and various devices with access to a collection or set of the above-mentioned capabilities or functionalities, supported by the service layer, which can be referred to as a CSE or SCL. A few examples include, but are not limited to, security, charging, data management, device management, discovery, provisioning, and connectivity management, which can be commonly used by various applications. These capabilities or functionalities are made available to such various applications via APIs which make use of message formats, resource structures, and resource representations defined by the M2M service layer. The CSE or SCL is a functional entity that may be implemented by hardware or software and that provides (service) capabilities or functionalities exposed to various applications or devices (i.e., functional interfaces between such functional entities) in order for them to use such capabilities or functionalities.
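By way of illustration only (not part of the claimed subject matter), the model of a CSE or SCL above - a functional entity exposing a named set of service capabilities to applications through an API - can be sketched as follows. The class and capability names here are illustrative assumptions, not the oneM2M or ETSI M2M APIs.

```python
# Minimal sketch of a service layer (CSE/SCL) exposing service
# capabilities to applications. All names are illustrative assumptions.

class ServiceLayer:
    """A functional entity holding a named set of service capabilities."""

    def __init__(self, capabilities):
        # capabilities: mapping of capability name -> callable, e.g.,
        # {"discovery": fn, "data_management": fn, ...}
        self._capabilities = dict(capabilities)

    def capabilities(self):
        """Names an application may discover and invoke."""
        return sorted(self._capabilities)

    def invoke(self, name, *args, **kwargs):
        """Invoke a capability through the service layer's API."""
        if name not in self._capabilities:
            raise KeyError(f"capability {name!r} not supported by this CSE/SCL")
        return self._capabilities[name](*args, **kwargs)


# Example: a CSE offering discovery and data-management capabilities.
cse = ServiceLayer({
    "discovery": lambda prefix: [r for r in ("/sensor1", "/sensor2")
                                 if r.startswith(prefix)],
    "data_management": lambda resource: {"resource": resource, "value": 42},
})
print(cse.capabilities())             # ['data_management', 'discovery']
print(cse.invoke("discovery", "/s"))  # ['/sensor1', '/sensor2']
```

Applications never touch the underlying resources directly; they reach capabilities only through the service layer's interface, which is the role the CSE/SCL plays in the architecture described above.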

[00112] FIG. 27C is a system diagram of an example M2M device 30, such as an M2M terminal device 18 (which may include eHealthcare Application 202, blood pressure application 206, or the like) or an M2M gateway device 14, for example. As shown in FIG. 27C, the M2M device 30 may include a processor 32, a transceiver 34, a transmit/receive element 36, a speaker/microphone 38, a keypad 40, a display/touchpad 42, non-removable memory 44, removable memory 46, a power source 48, a global positioning system (GPS) chipset 50, and other peripherals 52. It will be appreciated that the M2M device 30 may include any subcombination of the foregoing elements while remaining consistent with the disclosed subject matter. M2M device 30 (e.g., semantic entity 233 or data entity 231, and others) may be an exemplary implementation that performs the disclosed systems and methods for integrating data entity and semantic entity.

[00113] The processor 32 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 32 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the M2M device 30 to operate in a wireless environment. The processor 32 may be coupled to the transceiver 34, which may be coupled to the transmit/receive element 36. While FIG. 27C depicts the processor 32 and the transceiver 34 as separate components, it will be appreciated that the processor 32 and the transceiver 34 may be integrated together in an electronic package or chip. The processor 32 may perform application-layer programs (e.g., browsers) and/or radio access-layer (RAN) programs and/or communications. The processor 32 may perform security operations such as authentication, security key agreement, and/or cryptographic operations, such as at the access layer and/or application layer, for example.

[00114] The transmit/receive element 36 may be configured to transmit signals to, or receive signals from, an M2M service platform 22. For example, the transmit/receive element 36 may be an antenna configured to transmit and/or receive RF signals. The transmit/receive element 36 may support various networks and air interfaces, such as WLAN, WPAN, cellular, and the like. In an example, the transmit/receive element 36 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another example, the transmit/receive element 36 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 36 may be configured to transmit and/or receive any combination of wireless or wired signals.

[00115] In addition, although the transmit/receive element 36 is depicted in FIG. 27C as a single element, the M2M device 30 may include any number of transmit/receive elements 36. More specifically, the M2M device 30 may employ MIMO technology. Thus, in an example, the M2M device 30 may include two or more transmit/receive elements 36 (e.g., multiple antennas) for transmitting and receiving wireless signals.

[00116] The transceiver 34 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 36 and to demodulate the signals that are received by the transmit/receive element 36. As noted above, the M2M device 30 may have multi-mode capabilities. Thus, the transceiver 34 may include multiple transceivers for enabling the M2M device 30 to communicate via multiple RATs, such as UTRA and IEEE 802.11, for example.

[00117] The processor 32 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 44 and/or the removable memory 46. The non-removable memory 44 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 46 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other examples, the processor 32 may access information from, and store data in, memory that is not physically located on the M2M device 30, such as on a server or a home computer. The processor 32 may be configured to control lighting patterns, images, or colors on the display or indicators 42 in response to whether the integrating of data entity and semantic entity in some of the examples described herein is successful or unsuccessful (e.g., semantic operation solicitation request or data analysis and semantic preparation, etc.), or otherwise indicate a status of integrating data entity and semantic entity and associated components. The lighting patterns, images, or colors on the display or indicators 42 may be reflective of the status of any of the method flows or components in the figures illustrated or discussed herein (e.g., FIG. 22-FIG. 23, etc.). Disclosed herein are messages and procedures of integrating data entity and semantic entity. The messages and procedures can be extended to provide an interface/API for users to request resource-related resources via an input source (e.g., speaker/microphone 38, keypad 40, or display/touchpad 42) and to request, configure, or query subject matter disclosed herein, among other things that may be displayed on display 42.

[00118] The processor 32 may receive power from the power source 48, and may be configured to distribute and/or control the power to the other components in the M2M device 30. The power source 48 may be any suitable device for powering the M2M device 30. For example, the power source 48 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.

[00119] The processor 32 may also be coupled to the GPS chipset 50, which is configured to provide location information (e.g., longitude and latitude) regarding the current location of the M2M device 30. It will be appreciated that the M2M device 30 may acquire location information by way of any suitable location-determination method while remaining consistent with information disclosed herein.

[00120] The processor 32 may further be coupled to other peripherals 52, which may include one or more software or hardware modules that provide additional features, functionality or wired or wireless connectivity. For example, the peripherals 52 may include various sensors such as an accelerometer, biometrics (e.g., fingerprint) sensors, an e-compass, a satellite transceiver, a sensor, a digital camera (for photographs or video), a universal serial bus (USB) port or other interconnect interfaces, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.

[00121] The transmit/receive elements 36 may be embodied in other apparatuses or devices, such as a sensor, consumer electronics, a wearable device such as a smart watch or smart clothing, a medical or eHealth device, a robot, industrial equipment, a drone, a vehicle such as a car, truck, train, or airplane. The transmit/receive elements 36 may connect to other components, modules, or systems of such apparatuses or devices via one or more interconnect interfaces, such as an interconnect interface that may comprise one of the peripherals 52.

[00122] FIG. 27D is a block diagram of an exemplary computing system 90 on which, for example, the M2M service platform 22 of FIG. 27A and FIG. 27B may be implemented. Computing system 90 (e.g., M2M terminal device 18 or M2M gateway device 14) may comprise a computer or server and may be controlled primarily by computer readable instructions, by whatever means such instructions are stored or accessed. Such computer readable instructions may be executed within central processing unit (CPU) 91 to cause computing system 90 to do work. In many known workstations, servers, and personal computers, central processing unit 91 is implemented by a single-chip CPU called a microprocessor. In other machines, the central processing unit 91 may comprise multiple processors. Coprocessor 81 is an optional processor, distinct from main CPU 91, that performs additional functions or assists CPU 91. CPU 91 and/or coprocessor 81 may receive, generate, and process data related to the disclosed systems and methods for integrating data entity and semantic entity, such as determining new semantic triples from hosted data via data mining.
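Purely as an illustration of the kind of processing just mentioned - determining new semantic triples from hosted data via data mining - the following sketch derives (subject, predicate, object) triples from raw stored readings. The predicate names and threshold are illustrative assumptions, not values defined by this disclosure.

```python
# Hypothetical sketch: mining semantic triples from hosted data.
# Predicate names ("hasValue", "hasState") and the threshold are
# illustrative assumptions.

def mine_triples(readings, high_threshold=30.0):
    """Derive (subject, predicate, object) triples from (sensor, value) data."""
    triples = []
    for sensor, value in readings:
        # A triple directly representing the hosted data.
        triples.append((sensor, "hasValue", value))
        if value > high_threshold:
            # A *new* triple not present in the raw data: a mined
            # classification of the reading.
            triples.append((sensor, "hasState", "high"))
    return triples


triples = mine_triples([("sensor1", 21.5), ("sensor2", 35.0)])
# [('sensor1', 'hasValue', 21.5), ('sensor2', 'hasValue', 35.0),
#  ('sensor2', 'hasState', 'high')]
```

The mined triples could then be stored in a triplestore alongside the triples that merely annotate the hosted resources, which is the relationship between the data layer and semantic layer discussed throughout this disclosure.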

[00123] In operation, CPU 91 fetches, decodes, and executes instructions, and transfers information to and from other resources via the computer's main data-transfer path, system bus 80. Such a system bus connects the components in computing system 90 and defines the medium for data exchange. System bus 80 typically includes data lines for sending data, address lines for sending addresses, and control lines for sending interrupts and for operating the system bus. An example of such a system bus 80 is the PCI (Peripheral Component Interconnect) bus.

[00124] Memory devices coupled to system bus 80 include random access memory (RAM) 82 and read only memory (ROM) 93. Such memories include circuitry that allows information to be stored and retrieved. ROMs 93 generally contain stored data that cannot easily be modified. Data stored in RAM 82 can be read or changed by CPU 91 or other hardware devices. Access to RAM 82 and/or ROM 93 may be controlled by memory controller 92. Memory controller 92 may provide an address translation function that translates virtual addresses into physical addresses as instructions are executed. Memory controller 92 may also provide a memory protection function that isolates processes within the system and isolates system processes from user processes. Thus, a program running in a first mode can access only memory mapped by its own process virtual address space; it cannot access memory within another process's virtual address space unless memory sharing between the processes has been set up.

[00125] In addition, computing system 90 may contain peripherals controller 83 responsible for communicating instructions from CPU 91 to peripherals, such as printer 94, keyboard 84, mouse 95, and disk drive 85.

[00126] Display 86, which is controlled by display controller 96, is used to display visual output generated by computing system 90. Such visual output may include text, graphics, animated graphics, and video. Display 86 may be implemented with a CRT-based video display, an LCD-based flat-panel display, gas plasma-based flat-panel display, or a touch-panel. Display controller 96 includes electronic components required to generate a video signal that is sent to display 86.

[00127] Further, computing system 90 may contain network adaptor 97 that may be used to connect computing system 90 to an external communications network, such as network 12 of FIG. 27A and FIG. 27B.

[00128] It is understood that any or all of the systems, methods, and processes described herein may be embodied in the form of computer executable instructions (i.e., program code) stored on a computer-readable storage medium which instructions, when executed by a machine, such as a computer, server, M2M terminal device, M2M gateway device, or the like, perform and/or implement the systems, methods, and processes described herein. Specifically, any of the steps, operations, or functions described above may be implemented in the form of such computer executable instructions. Computer readable storage media include both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, but such computer readable storage media do not include signals per se. As evident from the herein description, storage media should be construed to be statutory subject matter. Computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical medium which can be used to store the desired information and which can be accessed by a computer.

[00129] In describing preferred methods, systems, or apparatuses of the subject matter of the present disclosure - integrating data entity and semantic entity - as illustrated in the Figures, specific terminology is employed for the sake of clarity. The claimed subject matter, however, is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner to accomplish a similar purpose.

[00130] The various techniques described herein may be implemented in connection with hardware, firmware, software, or, where appropriate, combinations thereof. Such hardware, firmware, and software may reside in apparatuses located at various nodes of a communication network. The apparatuses may operate singly or in combination with each other to effectuate the methods described herein. As used herein, the terms "apparatus," "network apparatus," "node," "device," "network node," or the like may be used interchangeably. In addition, the use of the word "or" is generally used inclusively unless otherwise provided herein.

[00131] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art (e.g., skipping steps, combining steps, or adding steps between exemplary methods disclosed herein). Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.

[00132] Methods, systems, and apparatuses, among other things, as described herein may provide means for integrating a data entity and a semantic entity. A method, system, computer readable storage medium, or apparatus has means for receiving, by a semantic layer from an external entity, a resource access request, wherein the semantic layer is an interface before the data layer; means for, responsive to receiving the resource access request, determining, by the semantic layer, that the resource access operation is allowed based on an access control policy of a triplestore; and means for sending, by the semantic layer, the resource access request to a data layer for a resource of the resource access request to be obtained. The method, system, computer readable storage medium, or apparatus has means for receiving, by the semantic layer, a result of the resource access request from the data layer; and means for sending, by the semantic layer to the external entity, a response that comprises the result of the resource access request from the data layer. The external entity may be a common services entity of another apparatus. The method, system, computer readable storage medium, or apparatus has means for linking triples of the triplestore of the semantic layer to a resource of the data layer. The linking may be based on index mapping using a URI. The semantic layer and the data layer may be located on the same apparatus. In another example, the semantic layer may be located on a first apparatus and the data layer may be located on a second (e.g., different) apparatus. The processing flow of a request message or other message in this example is through the semantic layer first (when arriving from an external entity) and then, subsequently, the data layer. All combinations in this paragraph (including the removal or addition of steps) are contemplated in a manner that is consistent with the other portions of the detailed description.
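The processing flow recited above - a semantic layer that receives a resource access request, checks an access control policy held in its triplestore, and only then forwards the request to the data layer and relays the result - can be sketched as follows. This is an illustrative sketch only; the class names, policy triples, and resource URIs are assumptions, not part of the claims.

```python
# Hypothetical sketch of the claimed flow: semantic layer first,
# then data layer. Policy triples and URIs are illustrative assumptions.

class DataLayer:
    """Hierarchical resource data layer holding resource representations."""

    def __init__(self, resources):
        self._resources = resources          # URI -> stored representation

    def retrieve(self, uri):
        return self._resources[uri]


class SemanticLayer:
    """Interface in front of the data layer; enforces access control
    using policy triples stored in its triplestore."""

    def __init__(self, triplestore, data_layer):
        self._triples = set(triplestore)     # (subject, predicate, object)
        self._data_layer = data_layer

    def handle_request(self, originator, uri):
        # Step 1: check the access control policy in the triplestore.
        if (originator, "mayRetrieve", uri) not in self._triples:
            return {"status": "denied"}
        # Step 2: allowed, so forward to the data layer for the resource.
        content = self._data_layer.retrieve(uri)
        # Step 3: relay the data layer's result back to the external entity.
        return {"status": "ok", "content": content}


data_layer = DataLayer({"/base/sensor1": 21.5})
semantic = SemanticLayer({("cse2", "mayRetrieve", "/base/sensor1")}, data_layer)
print(semantic.handle_request("cse2", "/base/sensor1"))
# {'status': 'ok', 'content': 21.5}
print(semantic.handle_request("cse3", "/base/sensor1"))
# {'status': 'denied'}
```

The same two objects could reside on one apparatus or on two different apparatuses, as the paragraph notes; the essential property is that every externally arriving request traverses the semantic layer before the data layer is touched.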