

Title:
PUBLISHING DIGITAL CONTENT BASED ON WORKFLOW BASED ASSET MANAGEMENT
Document Type and Number:
WIPO Patent Application WO/2020/240269
Kind Code:
A1
Abstract:
Aspects of the present invention provide computer-based systems, computer-implemented methods, and a non-transitory, computer-readable storage medium suitable for use in publishing digital content items over a network-based environment. The present invention includes a server computing device and an asset end-user computing device in communication with one another over a communication network such as the Internet. The server computing device includes a content publisher, a content manager, a workflow manager, and a communication channel establisher. The content publisher is configured to publish through a user interface on the asset end-user computing device the content items based on transaction information associated with the content items and the asset end-user computing device. The content manager is configured to categorize the content items and configure the same to be searchable. The workflow manager is configured to define a flow combining constants and variables into workflow rules, and to schedule at least one work based on the defined flow. The scheduled work preferably includes workloads, priorities, and assignments.

Inventors:
ESTRELLA RYAN MARK (PH)
SOLIMAN PAUL MARK (PH)
ALBERT ANDREW ANTHONY (PH)
VILLACORTA EDMUNDO (PH)
CAUTON PETER PAUL (PH)
Application Number:
PCT/IB2019/055723
Publication Date:
December 03, 2020
Filing Date:
July 04, 2019
Assignee:
ESTRELLA RYAN MARK (PH)
International Classes:
G06Q10/06; G06Q10/10; G06Q50/16
Domestic Patent References:
WO2015119596A1, 2015-08-13
Foreign References:
US20190019132A1, 2019-01-17
US20190130436A1, 2019-05-02
US20170236524A1, 2017-08-17
US20140358943A1, 2014-12-04
Other References:
JOHN HOPCROFT; RAVI KANNAN, COMPUTER SCIENCE THEORY FOR THE INFORMATION AGE, 18 January 2012 (2012-01-18)
Attorney, Agent or Firm:
TUNDAYAG, Edmar (PH)
Claims:
Claims

1. A system for publishing digital content over a network-based environment based on workflow-based asset management, the system comprising:

a server computing device including a database system into and from which digital content items can be stored and retrieved, respectively; and

an asset end-user computing device in communication with the server computing device via a communication network and operative to receive and publish the content items from the server computing device,

the content items including at least a first content item associated with an asset end-user computing device, a second content item associated with an asset manager computing device, a third content item associated with a service provider computing device, a fourth content item associated with a material source computing device, and a fifth content item associated with a merchant computing device,

the server computing device being configured to determine and store in the database system first, second, third, fourth, and fifth sets of transaction information associated with the first, second, third, fourth, and fifth content items, respectively,

the server computing device including a content publisher configured to publish through a user interface on the asset end-user computing device the second, third, fourth, and fifth content items in an order dependent on a plurality of workflow rules defined by the first content item as input data,

wherein the server computing device further includes

a content manager configured

to categorize the content items according to at least one pre-determined parameter value arranged in the server computing device,

to configure the content items to be searchable according to at least one user-specified parameter value transmitted from the asset end-user computing device, and

to present the content items on the user interface on the asset end-user computing device,

a workflow manager configured

to store the plurality of workflow rules for processing the first, second, third, fourth, and fifth sets of transaction information,

to utilize a plurality of variables within one or more workflow rules of the plurality of workflow rules, each variable of the plurality of variables corresponds to a characteristic of one of the first, second, third, fourth, and fifth sets of transaction information,

to define at least one flow combining constants and the plurality of variables into the one or more workflow rules, and

to schedule at least one work based on the defined at least one flow, the scheduled at least one work including workloads, priorities, and assignments,

a correlation module included in the server computing device and configured to join data generated through the content manager, and

a communication channel establisher configured to establish at least one communication channel through which one or more communication sessions can be initiated from the user interface for enabling data communications to and from the asset end-user computing device based upon one or more of a plurality of communication protocols.

2. The system according to claim 1, wherein the workflow manager comprises a work management component for executing work input and work output units defining the scheduled at least one work.

3. The system according to claim 1, wherein the workflow manager comprises a flow management component for executing search and circulation units defining the scheduled at least one work.

4. The system according to claim 1, wherein the server computing device further includes a storage manager operably connected to the workflow manager.

5. The system according to claim 4, wherein the storage manager is configured to store at least a work table, a flow table, a pointer table, a rules table, a transaction information table, and a content item table, any one or more of which are used by the workflow manager to schedule the at least one work.

6. The system according to claim 1, wherein the server computing device further includes a pointer manager operably connected to each of the workflow manager and the storage manager.

7. The system according to claim 6, wherein the pointer manager is configured to identify the address of each one of the first, second, third, fourth, and fifth content items in the storage manager.

8. The system according to claim 1, wherein the server computing device further includes a conditional connection manager operably connected to the workflow, storage, and pointer managers.

9. The system according to claim 8, wherein the conditional connection manager is configured to compare the constants and the plurality of variables to define the at least one flow based upon pointers employed by the pointer manager, the pointers referencing the one or more workflow rules.

10. The system according to claim 1, wherein the workflow manager is further configured to establish a plurality of subroutines, each subroutine of the plurality of subroutines being represented as a separate logical section within the one or more workflow rules referenced from different locations within the storage unit.

11. The system according to claim 1, wherein the workflow manager navigates the one or more workflow rules in response to the input data received by the server computer device from the asset end-user computing device over the communication network.

12. The system according to claim 1, wherein the workflow manager causes the content manager to arrange presentation of the content items based on the defined at least one flow and scheduled at least one work, each being referenced in the one or more workflow rules of the plurality of workflow rules.

13. The system according to claim 1, wherein the server computing device is further configured to cause a payment transaction to be processed and completed based on the scheduled at least one work and in connection with the communication session between the asset end-user computing device and any one or more of the asset manager computing device, the service provider computing device, and the material source computing device.

14. In a system comprising a server computing device including a database system into and from which digital content items can be stored and retrieved, respectively, and a property owner computing device in communication with the server computing device via a communication network and operative to receive and publish the content items from the server computing device, the content items including at least a first content item associated with an asset end-user computing device, a second content item associated with an asset manager computing device, a third content item associated with a service provider computing device, a fourth content item associated with a material source computing device, and a fifth content item associated with a merchant computing device, a computer-implemented method of publishing digital content over a network-based environment based on workflow-based asset management, the method comprising the steps of:

determining and storing in the database system, by the server computing device, first, second, third, fourth, and fifth sets of transaction information associated with the first, second, third, fourth, and fifth content items, respectively;

publishing, by a content publisher included in the server computing device, through a user interface on the property owner computing device the second, third, fourth, and fifth content items in an order dependent on a plurality of workflow rules defined by the first content item as input data;

categorizing, by a content manager included in the server computing device, the content items according to at least one pre-determined parameter value arranged in the server computing device;

configuring, by the content manager, the content items to be searchable according to at least one user-specified parameter value transmitted from the property owner computing device;

presenting, by the content manager, the content items on the user interface on the property owner computing device;

storing, by a workflow manager included in the server computing device, the plurality of workflow rules for processing the first, second, third, fourth, and fifth sets of transaction information;

utilizing, by the workflow manager, a plurality of variables within one or more workflow rules of the plurality of workflow rules, each variable of the plurality of variables corresponds to a characteristic of one of the first, second, third, fourth, and fifth sets of transaction information;

defining, by the workflow manager, at least one flow combining constants and the plurality of variables into the one or more workflow rules;

scheduling, by the workflow manager, at least one work based on the defined at least one flow, the scheduled at least one work including workloads, priorities, and assignments; and

establishing, by a communication channel establisher included in the server computing device, at least one communication channel through which one or more communication sessions can be initiated from the user interface for enabling data communications to and from the property owner computing device based upon one or more of a plurality of communication protocols.

15. In a system comprising a server computing device including a database system into and from which digital content items can be stored and retrieved, respectively, and a property owner computing device in communication with the server computing device via a communication network and operative to receive and publish the content items from the server computing device, the content items including at least a first content item associated with an asset end-user computing device, a second content item associated with an asset manager computing device, a third content item associated with a service provider computing device, a fourth content item associated with a material source computing device, and a fifth content item associated with a merchant computing device, a computer-implemented method of publishing digital content over a network-based environment based on prediction model based asset management, the method comprising the steps of:

determining and storing in the database system, by the server computing device, first, second, third, fourth, and fifth sets of transaction information associated with the first, second, third, fourth, and fifth content items, respectively;

publishing, by a content publisher included in the server computing device, through a user interface on the property owner computing device the second, third, fourth, and fifth content items based on the first content item as input data;

categorizing, by a content manager included in the server computing device, the content items according to at least one pre-determined parameter value arranged in the server computing device;

configuring, by the content manager, the content items to be searchable according to at least one user-specified parameter value transmitted from the property owner computing device;

presenting, by the content manager, the content items on the user interface on the property owner computing device;

modelling, by a prediction model manager included in the server computing device, the content items using at least one prediction model;

receiving, by the prediction model manager, from the property owner computing device the input data relevant to the first transaction;

injecting, by the prediction model manager, the input data into the prediction model;

generating, by the prediction model manager, a plurality of user-selectable attributes based on the injected input data; and

establishing, by a communication channel establisher included in the server computing device, at least one communication channel through which one or more communication sessions can be initiated from the user interface for enabling data communications to and from the property owner computing device based upon one or more of a plurality of communication protocols thereby causing to be outputted on the property owner computing device the generated plurality of user-selectable attributes.

16. A non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement a system for publishing digital content over a network-based environment based on workflow-based asset management, the system comprising:

a server computing device including a database system into and from which digital content items can be stored and retrieved, respectively; and

a property owner computing device in communication with the server computing device via a communication network and operative to receive and publish the content items from the server computing device,

the content items including at least a first content item associated with an asset end-user computing device, a second content item associated with an asset manager computing device, a third content item associated with a service provider computing device, a fourth content item associated with a material source computing device, and a fifth content item associated with a merchant computing device,

the server computing device being configured to determine and store in the database system first, second, third, fourth, and fifth sets of transaction information associated with the first, second, third, fourth, and fifth content items, respectively,

the server computing device including a content publisher configured to publish through a user interface on the property owner computing device the second, third, fourth, and fifth content items in an order dependent on a plurality of workflow rules defined by the first content item as input data,

wherein the server computing device further includes

a content manager configured

to categorize the content items according to at least one pre-determined parameter value arranged in the server computing device,

to configure the content items to be searchable according to at least one user-specified parameter value transmitted from the property owner computing device, and

to present the content items on the user interface on the property owner computing device,

a workflow manager configured

to store the plurality of workflow rules for processing the first, second, third, fourth, and fifth sets of transaction information,

to utilize a plurality of variables within one or more workflow rules of the plurality of workflow rules, each variable of the plurality of variables corresponds to a characteristic of one of the first, second, third, fourth, and fifth sets of transaction information,

to define at least one flow combining constants and the plurality of variables into the one or more workflow rules, and

to schedule at least one work based on the defined at least one flow, the scheduled at least one work including workloads, priorities, and assignments, and

a communication channel establisher configured to establish at least one communication channel through which one or more communication sessions can be initiated from the user interface for enabling data communications to and from the property owner computing device based upon one or more of a plurality of communication protocols.

Description:
PUBLISHING DIGITAL CONTENT BASED ON WORKFLOW BASED ASSET

MANAGEMENT

Technical Field

The present invention generally relates to computer-based arrangements for publishing digital content over a network-based environment. More particularly, the present invention relates to such arrangements for publishing digital content based on a workflow based asset management such as a real property management.

Background Art

Uncontrollable as it may seem, the ever-dynamic nature of technology in the computing age has caused computers to become much smaller, lighter and more compact. Computers nowadays function as mobile phones which can be conveniently secured inside a pocket or a bag. A miniaturized computer is commonly provided with mobile-based applications known to provide a wide range of utilities, some of which are e-commerce shopping, mobile advertising, and directory listing.

In respect of various utilities in the field of asset management such as real property management, the portability of a miniaturized computer makes it advantageous to have digital content published on it while taking into consideration the pain points of most tenants or end-users of various assets and facilities such as condominium units, townhouses, clubs (e.g., country clubs, sports clubs, and dining clubs), malls, subdivision houses, rail transport systems, and the like.

When a tenant or an end-user encounters an issue inside her or his unit, such as for example a leaking pipe underneath a sink, the tenant would normally first talk to the building or property manager and await the availability of the property manager. It is worthy of note that most property managers are available only during standard office hours, i.e., from 8:00 in the morning until 5:00 in the afternoon. Once the leaking pipe issue has been reported to the property manager by the tenant, the property manager would then have someone start searching for and contacting a service provider that has the ability and necessary tools to fix the broken pipe.

Once the services of a plumber are engaged after all the exchanges of queries and negotiations (e.g., the tenant asking for the labor and material costs, the property manager arranging for any necessary clearances for the plumber to enter the premises of the property, the plumber asking for the specification of the broken pipes, and the tenant and plumber negotiating the price), the plumber would have to schedule his or her visit to the tenant's unit to finally attempt to fix the broken pipes. All of this would probably take a week on average to complete. The relevant and timely provision of digital content and of automated arrangements for resolving such issues, among others, in a property management realm would therefore be highly desirable.

On a related note, still in the asset management field, some of these smart-phones and other related devices function as personal assistants that are configured to execute search queries for service providers or merchants, to notify users about scheduled repair works and upcoming payments for outstanding billing statements, and to display other information that may be of interest to a tenant or asset end-user. For example, such a smart-phone may have access to an online calendar of the tenant, and may alert the tenant when a particular repair work in his or her unit is about to begin and what sort of materials and tools would have to be used in carrying out the repair work. This kind of work, among others, can form part of a transaction history of the tenant. Even with all the access to information, reminders, and transaction history that smart-phones can provide for the tenant, they may not at all times prevent the tenant from rendering decisions and performing actions that lead to mistakes. These mistakes include, for example, engaging incompetent service providers, procuring inadequate materials, and improperly specifying materials. It would therefore be further highly desirable to have automated arrangements that enable tenants to avoid mistakes and shortcomings in rendering their decisions and performing certain actions in relation to various issues in the field of asset management.

Furthermore, in such automated arrangements as described above, the limited memory and processing capabilities of miniaturized computers such as smart-phones, tablets, phablets, and the like, which are commonly used by tenants, may cause simultaneously running applications to freeze, lose connections with servers, display inapt information, or crash at worst. In some cases, the gesture sensitivity of such a miniaturized computer is likely to result in unintended switching from one application to another if separate applications have to be used by the tenant in utilizing such automated arrangements. This in turn is time-consuming and may cause the applications to behave undesirably, providing unreliable data and processes.

Thus, there remains an outstanding need to provide a system for publishing digital content over a network-based environment wherein completion of particular computing tasks in managing various issues associated with an asset such as a real property is not subject to delay or interruption that can possibly be introduced by operating multiple applications on a miniaturized computer having limitations associated with its physical structure and design arrangements.

Summary of the Invention

Aspects of the present invention are generally directed to publishing digital content over a network-based environment. The present invention includes a server computing device and an asset end-user computing device in communication with one another over a communication network. The server computing device includes a content publisher configured to publish through a user interface on the asset end-user computing device content items in an order dependent on a relationship defined by transaction information associated with the content items and the asset end-user computing device.

The system also includes a content manager configured to categorize the content items according to a pre-determined parameter value arranged in the server computing device. The content manager is likewise configured to make the content items searchable according to a user-specified parameter value transmitted from the asset end-user computing device.

The workflow manager is preferably configured (a) to store the plurality of workflow rules for processing the first, second, third, fourth, and fifth sets of transaction information, (b) to utilize a plurality of variables within one or more workflow rules of the plurality of workflow rules, each variable of the plurality of variables corresponds to a characteristic of one of the first, second, third, fourth, and fifth sets of transaction information, (c) to define at least one flow combining constants and the plurality of variables into the one or more workflow rules, and (d) to schedule at least one work based on the defined at least one flow. The scheduled at least one work may desirably include workloads, priorities, and assignments.
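
The following is a minimal, illustrative sketch in Python of how a workflow manager of the kind described above might combine constants and variables into workflow rules and schedule work. The class and function names (WorkflowRule, ScheduledWork, define_flow, schedule_work) and the rule contents are hypothetical assumptions, not the claimed implementation.

```python
# A minimal sketch (not the patented implementation) of combining constants and
# variables into workflow rules and scheduling work; all names are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class WorkflowRule:
    name: str
    constants: dict[str, Any]                    # fixed values baked into the rule
    condition: Callable[[dict[str, Any]], bool]  # evaluated against the merged values


@dataclass
class ScheduledWork:
    workload: str
    priority: int
    assignee: str


def define_flow(rules: list[WorkflowRule], variables: dict[str, Any]) -> list[WorkflowRule]:
    """Return the rules whose conditions hold for the given transaction variables."""
    return [r for r in rules if r.condition({**r.constants, **variables})]


def schedule_work(flow: list[WorkflowRule], assignee: str) -> list[ScheduledWork]:
    """Turn a defined flow into work items with workloads, priorities, and assignments."""
    return [ScheduledWork(workload=r.name, priority=i + 1, assignee=assignee)
            for i, r in enumerate(flow)]


if __name__ == "__main__":
    rules = [
        WorkflowRule("dispatch_plumber", {"category": "plumbing"},
                     lambda v: v["issue_type"] == v["category"]),
        WorkflowRule("order_materials", {"requires_materials": True},
                     lambda v: v["requires_materials"] and v["material"] is not None),
    ]
    variables = {"issue_type": "plumbing", "material": "PVC pipe"}
    for work in schedule_work(define_flow(rules, variables), assignee="service_provider_042"):
        print(work)
```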

The content publisher, content manager, workflow manager, and communication channel establisher provide application-specific utilities which can be operated to deliver more relevant and timely digital contents.

In one further aspect of the present invention, the herein disclosed server computing device may also include a prediction model manager that is configured (a) to model the content items using at least one prediction model, (b) to receive from the asset owner computing device the input data relevant to the first transaction, (c) to inject the input data into the prediction model, and (d) to generate a plurality of user-selectable attributes based on the injected input data.
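
As a rough illustration only, the sketch below assumes a simple similarity-scoring prediction model; the disclosure does not specify the model internals, so the PredictionModelManager class, its scoring logic, and the sample content items are hypothetical.

```python
# A rough sketch assuming a simple similarity-based prediction model; the actual
# model used by the prediction model manager is not specified in this text.
from dataclasses import dataclass


@dataclass
class ContentItem:
    item_id: str
    tags: frozenset[str]


class PredictionModelManager:
    def __init__(self, content_items: list[ContentItem]) -> None:
        # step (a): model/catalogue the content items
        self.content_items = content_items

    def generate_attributes(self, input_data: dict[str, str], top_k: int = 3) -> list[str]:
        """Steps (b) to (d): inject the owner's input data and rank user-selectable attributes."""
        query = set(input_data.values())
        ranked = sorted(self.content_items,
                        key=lambda item: len(item.tags & query),
                        reverse=True)
        return [item.item_id for item in ranked[:top_k]]


if __name__ == "__main__":
    manager = PredictionModelManager([
        ContentItem("plumber_A", frozenset({"plumbing", "leak", "condo"})),
        ContentItem("electrician_B", frozenset({"wiring", "condo"})),
        ContentItem("supplier_C", frozenset({"pipe", "plumbing"})),
    ])
    print(manager.generate_attributes({"issue": "plumbing", "location": "condo"}))
```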

The limiting factors associated with the physical structure of the asset end-user computing device, which may include its processing capabilities, the amount of its memory, and the sensitivity of its user gesture detection mechanisms, do not impose restrictions on operating said utilities since the aspects of the invention provide a single platform for accessing required information and functionalities by a user or tenant desiring to complete a particular task in the most efficient manner possible.

The provision of such single platform for searching for relevant content items, preferably through the prediction model of the prediction model manager, and also for communicating with sources of the content items ensures that the utilities and services associated with asset management are readily accessible to the end-user on a typically small screen size of the asset end-user computing device and can be operated desirably.

The provision of the same single platform also ensures that the rather limited memory of the asset end-user computing device can manage the computing processes associated with the abovementioned utilities and services, and that an unintended switching from one application to another in order for the end-user to locate and make use of these utilities and services, among other functionalities as the case may be, can be prevented.

For a better understanding of the invention and to show how the same may be performed, preferred embodiments and/or implementations thereof will now be described, by way of non-limiting examples only, with reference to the accompanying drawings.

Brief Description of the Several Views of the Drawings

Figure 1 is a block diagram of a communication architecture of a system for publishing digital content over a network-based environment according to one or more implementations of the invention.

Figure 1-A is a block diagram illustrating further exemplary components of a server computing device suitable for use in the system illustrated in Figure 1.

Figure 1-C is a flow diagram illustrating an exemplary process for achieving a workflow based asset management, according to some implementations.

Figure 1-E is a flow diagram illustrating an exemplary process for determining a work based on a flow illustrated in Figure 1-C, according to one implementation.

Figure 1-G is a block diagram illustrating an exemplary high-level enterprise architecture in accordance with one or more aspects of the invention.

Figure 1-I is a block diagram illustrating various modules suitable for use in one or more aspects of the workflow based asset management of the invention.

Figure 2 is a flow diagram of an exemplary content arrangement routine for use in the system of Figure 1.

Figure 3 is a flow diagram of an exemplary content publishing process for use in the system of Figure 1, and in some aspects of the invention.

Figure 4 is a sequence diagram of an exemplary operation for use in the system of Figure 1, and in some aspects of the invention.

Figure 5 is an exemplary hardware architecture suitable for use in the system of Figure 1, and in some aspects of the invention.

Figure 6 is a data flow diagram illustrating an exemplary operation associated with the prediction model executing on the environment of FIG. 1 in accordance with one or more implementations of the invention.

Figure 7 is a flow diagram illustrating an exemplary process for utilizing one or more prediction models and particularly some model parameters used in such prediction models in accordance with one or more implementations of the invention.

Figure 8 is a block diagram illustrating an exemplary prediction model based on matrix factorization in accordance with one or more implementations of the invention.

Figure 9 is a user interface displaying an exemplary login page from a content manager or application suitable for use in one or more implementations of the invention.

Figure 10 is a user interface displaying an exemplary form to be filled with account details suitable for use in one or more implementations of the invention.

Figures 11 and 12 are user interfaces collectively displaying an exemplary dashboard suitable for use in one or more implementations of the invention.

Figures 13, 14, 15, 16, and 17 are user interfaces collectively displaying an exemplary processing of a transaction such as a real estate service request suitable for use in one or more implementations of the invention.

Figure 18 is a user interface displaying exemplary status information based on a successful transaction suitable for use in one or more implementations of the invention and consistent with aspects thereof.

Figure 19 is a user interface displaying graphical representations of exemplary asset management data consistent with one or more aspects of the invention.

Figure 20 is a user interface displaying various human-readable information relating to asset management consistent with one or more aspects of the invention and their respective implementations.

Figure 21 is a set of user interfaces displaying an exemplary management of various activities doable through the system illustrated in Figure 1 and consistent with one or more aspects of the invention.

Figure 22 is a set of user interfaces displaying an exemplary management of various facilities associated with an asset through the system illustrated in Figure 1 and consistent with one or more aspects of the invention and their respective implementations.

Figure 23 is a user interface displaying an exemplary management of an issue associated with a facility through the system illustrated in Figure 1 and consistent with one or more aspects of the invention.

Figure 24 is a user interface displaying an exemplary management of maintenance data associated with an asset through the system illustrated in Figure 1 and consistent with one or more aspects of the invention.

Figure 25 is a user interface displaying an exemplary centralized management of account information through the system illustrated in Figure 1 and consistent with one or more aspects of the invention.

Figure 26 is a set of user interfaces displaying an exemplary management of issues, visitors, and incoming packages through the system illustrated in Figure 1 and consistent with one or more aspects of the invention.

Figure 27 is a user interface displaying an exemplary asset information page suitable for use in one or more implementations of the invention.

Figure 28 is a user interface displaying an exemplary calendar page suitable for use in one or more implementations of the invention.

Figure 29 is a user interface displaying an exemplary report submission page suitable for use in one or more implementations of the invention.

Figure 30 is a user interface displaying an exemplary visitor information page suitable for use in one or more implementations of the invention.

Figure 31 is a user interface displaying an exemplary incoming package page suitable for use in one or more implementations of the invention.

Detailed Description of the Preferred Implementations

All the ensuing disclosures and illustrations of the preferred implementations and/or embodiments of one or more aspects of the present invention, along with one or more components, features or elements thereof, are merely representative for the purpose of sufficiently describing the manner by which the present invention may be carried out into practice in various ways other than the ones outlined in the ensuing description.

It is to be understood and appreciated, however, that the exemplary implementations used to describe how to make and use the one or more aspects of the present invention may be embodied in many alternative forms and should not be construed as limiting the scope of the appended claims in any manner, absent express recitation of those features in the appended claims. All the exemplary drawings, diagrams and illustrations accompanying the ensuing description should also not be construed as limiting the scope of the appended claims, as accompanied by this specification, in any manner.

Unless the context clearly and explicitly indicates otherwise, it is also to be understood that like reference numerals refer to like elements throughout the ensuing description of the figures and/or drawings of the present disclosure, that the linking term "and/or" includes any and all combinations of one or more of the associated listed items, that the singular terms "a", "an" and "the" are intended to also include the plural forms, and that some varying terms or terminologies of the same meaning and objective may be interchangeably used throughout the ensuing disclosure of the present invention.

Referring to Figure 1, there is shown a block diagram illustrating a communication architecture of a computer-based system for publishing digital content over a network-based environment according to one or more implementations of the present invention, and consistent with one or more aspects of the same.

The computer-based system is generally designated by reference numeral 100 throughout the ensuing description of preferred implementations of the present invention. The system 100 primarily includes a server computing device 102 and an asset end-user computing device 104 which are capable of communicating with one another over a communication network 106.

The communication network 106, capable of transmitting, receiving, and processing various kinds of data, may include one or more networks interlinked together so as to provide internetworked communications between computing devices such as those characterized by the server computing device 102 and the asset end-user computing device 104. One or more public or private packet-switched networks may also characterize the communication network 106. Preferably, the communication network 106 is the well-known Internet.

Alternatively, the communication network 106 may include one or more Ethernet connections or similar private connections utilizing the Transmission Control Protocol/Internet Protocol (TCP/IP), among others. The communication network 106 preferably supports IPv6 (Internet Protocol version 6) addressing so that each computing device connected to it, such as the tenant, asset owner, or asset end-user computing device 104, may have a unique IP (Internet Protocol) address.

The asset end-user computing device 104 may include and may be in operative communication with a database system 108 into which digital content items can be stored and from which the same content items can be retrieved. The server computing device 102 may include a processor 110, a memory system or storage medium or storage unit 112, a set of computer-executable and computer-readable instructions 114 stored in the memory system 112, and a communication interface 116.

As may be used herein, the terms "connected to," "connecting," "communicating," "in communication with," "in operative communication with," "interconnected," or "interconnecting" may include direct connection/communication, indirect connection/communication and/or inferred connection/communication between devices/apparatuses/computers. The direct connection/communication may be provided through one or more hardware, software, firmware, electronic and/or electrical links between devices/apparatuses. The indirect connection/communication may be provided through an intervening member such as a component, an element, a circuit, a module, a device, a node device, and an apparatus between or among devices/apparatuses. The inferred connection/communication, as may be used herein, may be characterized by one device/apparatus being connected to or in operative communication with another device/apparatus by inference, and may include direct and indirect connections/communications.

The processor 110 may be operable to execute or perform the computer-executable instructions 114 from the memory system 112 to generate and configure the content items for publication or display on a display screen or user interface of the asset end-user computing device 104 over the communication network 106.

The processor 110 may be any commercially available general processor, a custom general processor, a special-purpose processor, or an embedded processor. The processor 110 may also be implemented as a multi-core processor, such as a commercially available coprocessor having a significant number of cores.

The processor 110 may be embodied by a single data processing device or combinations of multiple data processing devices connected to the communication network 106 which may be a local network, cloud computing network or a distributed processing network. The communication network 106 may be wired or wireless, according to some implementations.

The computer-executable instructions 114 are stored on the memory system 112 that is a computer-readable medium and is in operative communication with the processor 110. For brevity, the computer-executable instructions 114 may hereinafter be referred to as "software," and the memory system 112 as "memory."

The software 114 may be executed by the processor 110 as machine codes in a direct manner or as scripts in an indirect manner, and may be stored on the memory system 112 as source codes, object codes, or any other suitable computer-readable and executable format.

The software 114 may include a set of routines, a set of functions, a set of modules, a set of scripts, a set of processes, or the like, each of which may be composed using any programming language such as C, C++, C# (C-Sharp), Java, JavaScript, Perl, Ruby, Python, PHP, ABAP, Objective-C, Matlab, Tcl, BuildFire.js, and the like.

It is to be understood and appreciated by a skilled person or a person having ordinary skills in the art to which the present invention belongs that the herein described modules are merely presented in segregated format based on their intended functions for the sake of illustrating how they are relevant to the implementations and/or embodiments of the aspects of the present invention. The herein described modules are merely illustrative and can be fewer or greater in number, as it is well known in the art of computing that such program codes representing various functions of different modules can be combined or segregated in any suitable but efficient manner in terms of software execution.

Furthermore, the term "module" as used herein may refer to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including without limitation: an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable and re-configurable components that provide the described functionalities.

The memory system 112 may be one of or a combination of any of a random-access memory, a read-only memory, a flash memory, an external memory, a hard drive storage device, an optical disk drive storage device, a magnetic disk drive storage device, and a solid-state drive storage device. The memory system 112 may be a non-transitory, computer-readable storage medium.

It is to be understood and appreciated that, while the software 114 is embodied to be stored on the memory 112, the same software 114 may be desirably, or whenever necessary, loaded directly on the processor 110 of the server computing device 102.

It is also to be understood and appreciated that one or more components or the entire components of the software 114 may be localized on a single computing device, as characterized by the processor 110, or distributed across multiple computing devices in communication with one another through wired or wireless connections which are well-known in the art.

The software 114 may include a content publisher 118, a content manager 120, a workflow manager 122, and a communication channel establisher 124. These components of the software 114 may be provided as modules, engines, or combinations of modules and engines, and could be implemented with functional elements that are similar, substitute, or in addition thereto, depending on desired configurations.

It is to be understood and appreciated that the illustrated content publisher 118, content manager 120, workflow manager 122, and communication channel establisher 124 of the system 100 for publishing digital content over a network-based environment of the present invention may or may not correspond to discrete blocks of software codes, depending on how they are arranged.

It can readily be realized that the software functions to be described in the ensuing disclosure can be performed by executing various code portions which are stored on one or more non-transitory computer-readable media characterized by the memory system 112. These software functions are grouped into the content publisher 118, the content manager 120, the workflow manager 122, and the communication channel establisher 124 so as to illustrate the extent of the aspects of the present invention and their respective implementations.

The software 114, in part or as a whole, may be compiled into an application such as a mobile-based application or a web-based application to ensure its availability and accessibility on various computing devices such as mainframe computers, desktop computers, laptop computers, notebook computers, tablet computers, and smartphones preferably characterizing the asset end-user computing device 104 and having the capability to connect with other state-of-the-art, modern computing devices through the communication network 106.

The asset end-user computing device 104, which is in communication with the server computing device 102 via the communication network 106, is operative to receive and publish the content items from the server computing device 102. The content items may be in the form of still image data, dynamic image data, video data, audio data, audio/video data, HTML (HyperText Markup Language), XML (extensible Markup Language), APK (Android Package), ipa (iOS App Store Package), metadata, or any suitable combination thereof.
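
Purely by way of illustration, a content item of any of the listed forms might be wrapped in a typed envelope before transmission; the envelope structure and MIME mappings below are assumptions for this sketch, not part of the disclosed system.

```python
# A hypothetical envelope for content items of the listed forms; the MIME
# mappings and field names are illustrative assumptions only.
from dataclasses import dataclass

MEDIA_TYPES = {
    "image": "image/png",
    "video": "video/mp4",
    "audio": "audio/mpeg",
    "html": "text/html",
    "xml": "application/xml",
    "apk": "application/vnd.android.package-archive",
    "ipa": "application/octet-stream",
}


@dataclass
class ContentItemEnvelope:
    item_id: str
    kind: str                 # one of the MEDIA_TYPES keys
    payload: bytes
    metadata: dict[str, str]  # free-form metadata accompanying the item

    @property
    def mime_type(self) -> str:
        return MEDIA_TYPES.get(self.kind, "application/octet-stream")


if __name__ == "__main__":
    item = ContentItemEnvelope("unit-photo-1", "image", b"...", {"unit": "12F"})
    print(item.item_id, item.mime_type)
```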

For the purpose of illustrating the extent of each one of the aspects of the present invention, the content items are illustrated to include at least a first content item, a second content item, a third content item, a fourth content item, and a fifth content item, all of which can be stored into and retrieved from the database system 108.

It is also to be understood and appreciated that, in any portion or portions of the herein disclosure, the use of ordinal terms such as "first," "second," "third," and so forth is intended to distinguish elements, features, components, calculations or steps from one another and should not be construed as limiting the scope of the appended claims, and that these and such other ordinal terms that may appear in the ensuing description of the one or more aspects of the present invention are not indicative of any particular order of elements, features, calculations, components or steps to which they are attached. For example, a first element could be termed a second element or a third element. Similarly, a second element could be termed a first element or a third element. All these do not depart from the scope of the herein disclosure and its accompanying claims.

In preferred implementations, the first content item may be provided from an asset owner or user entity using the asset end-user computing device 104. The second content item may be provided from an asset manager entity using an asset manager computing device 126. The third content item may be provided from a service provider entity using a service provider computing device 130. The fourth content item may be provided from a material source entity using a material source computing device 128. The fifth content item may be provided from a merchant entity using a merchant computing device 132. The terms "property manager" and "asset manager" may interchangeably be used throughout the herein disclosure. Correspondingly, the terms "end-user" and "tenant" may interchangeably be used throughout the herein disclosure of the present invention.

Data communications between the asset end-user computing device 104 and any one or more of the asset manager computing device 126, the service provider computing device 130, the material source computing device 128, and the merchant computing device 132 can be performed over the communication network 106.

The server computing device 102 is preferably configured to determine and store in the database system 108 a first set of transaction information associated with the first content item and generatable using the asset end-user computing device 104, a second set of transaction information associated with the second content item and generatable using the asset manager computing device 126, a third set of transaction information associated with the third content item and generatable using the service provider computing device 130, a fourth set of transaction information associated with the fourth content item and generatable using the material source computing device 128, and a fifth set of transaction information associated with the fifth content item and generatable using the merchant computing device 132, all with respect to the data communication, processing, storage, parsing, and handling capabilities, among others, of the server computing device 102.

The database system 108 may be characterized by MySQL (an open-source relational database management system), MongoDB (an open-source NoSQL database), PostgreSQL (an object-relational database management system (ORDBMS)), or Redis (an in-memory database). Any standard XML (Extensible Markup Language) query language used by most XML databases may also constitute the database system 108 in some implementations and aspects of the present invention.
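
The brief sketch below shows one way the five sets of transaction information might be stored and retrieved in a relational database of the kind listed above; SQLite is used here only as a lightweight stand-in for MySQL or PostgreSQL, and the table schema and sample rows are illustrative assumptions.

```python
# A minimal sketch of storing and retrieving the five sets of transaction
# information; SQLite stands in for MySQL/PostgreSQL, and the schema and sample
# rows are assumptions for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE transaction_info (
        content_item_id TEXT NOT NULL,  -- first .. fifth content item
        source_device   TEXT NOT NULL,  -- end-user, asset manager, service provider, ...
        attribute       TEXT NOT NULL,
        value           TEXT NOT NULL
    )
    """
)
conn.executemany(
    "INSERT INTO transaction_info VALUES (?, ?, ?, ?)",
    [
        ("first",  "asset_end_user",   "request",  "install split-type aircon"),
        ("second", "asset_manager",    "status",   "confirmed"),
        ("third",  "service_provider", "offer",    "installation, 2 hours"),
        ("fourth", "material_source",  "material", "1.5 HP split-type aircon"),
        ("fifth",  "merchant",         "goods",    "mounting brackets"),
    ],
)

# Retrieve everything relevant to the end-user's request, in insertion order.
for row in conn.execute(
    "SELECT content_item_id, attribute, value FROM transaction_info ORDER BY rowid"
):
    print(row)
conn.close()
```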

The content publisher 118 in the server computing device 102, when executed by the processor 110 from the memory 112, is configured to publish through a user interface on or of the asset end-user computing device 104 the first, second, third, fourth, and fifth content items in an order dependent on a relationship defined by the first, second, third, fourth, and fifth sets of transaction information with which they are associated.

By way of examples and not by way of limitation, the relationships defined by the first and second sets of transaction information may be any of the following: (a) the asset end-user computing device 104 is used by the tenant to access the app 120 to request the installation of a split-type aircon in the tenant's condominium unit, whereby the tenant's request may constitute the first content item; (b) the property manager receives and is notified of the tenant's request through his or her asset manager computing device 126, and then the property manager, as may be part of his or her job, may confirm the request and start sourcing out the requested material, which is the aircon unit, and a service provider such as an installer who can do all the necessary installation procedures, whereby the property manager's confirmation, among others, may constitute the second content item; (c) each of the material source through the material source computing device 128 and the service provider through the service provider computing device 130 may receive the inquiry from the property manager through the asset manager computing device 126, and can make offers and finally confirmations, whereby information relating to the service provider and to the material source may constitute the third and fourth content items, respectively; (d) if there are required items which cannot be provided by any of the service provider and the material source, the property manager may communicate with any merchant operating the merchant computing device 132 in pursuit of searching for the unavailable items, whereby information relating to the merchant may constitute the fifth content item; and (e) once the property manager has received all the required information and confirmations from the service provider and the material source and, if the situation requires, the merchant in relation to a purchase request and/or purchase order, the property manager may then schedule the work to be performed for the benefit of the tenant who requested the same work. The tenant may then be notified of the scheduled work through the app 120 executing on the end-user computing device 104 and connected to the server computing device 102.
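
To make the above example flow concrete, the following hypothetical sketch derives a publication order for the second, third, fourth, and fifth content items from the tenant's request (the first content item), loosely mirroring steps (a) through (e); the rule table and field names are invented for illustration and are not taken from the disclosure.

```python
# An illustrative rule table (not the claimed workflow rules) deriving which
# parties' content items to publish, and in what order, from the tenant's
# request that constitutes the first content item.
PUBLICATION_RULES = [
    (lambda req: True,                             ["asset_manager"]),     # (b) notify the manager
    (lambda req: req["needs_service"],             ["service_provider"]),  # (c) source an installer
    (lambda req: req["needs_material"],            ["material_source"]),   # (c) source the aircon unit
    (lambda req: req.get("unavailable_items", []), ["merchant"]),          # (d) fall back to a merchant
]


def publication_order(first_content_item: dict) -> list[str]:
    """Return the parties whose content items should be published, in order."""
    order: list[str] = []
    for condition, parties in PUBLICATION_RULES:
        if condition(first_content_item):
            order.extend(p for p in parties if p not in order)
    return order


if __name__ == "__main__":
    tenant_request = {
        "description": "install split-type aircon",
        "needs_service": True,
        "needs_material": True,
        "unavailable_items": ["mounting brackets"],
    }
    print(publication_order(tenant_request))
    # ['asset_manager', 'service_provider', 'material_source', 'merchant']
```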

It is to be understood and appreciated that the app 120 may be configured as a single application (e.g., distributed as a single, stand-alone application), or may be provided as separate applications. As a single application, the app 120 may be provided with a single tracking URL or a single redirection URL which, in some implementations, eventually leads each of the end-user/tenant, the property manager, the service provider, the material source, and the merchant to an app's download screen, such as but not limited to a link to an App Store URL, a Google Play URL, or some other URL that redirects to the App Store URL or Google Play.

As separate applications, an arrangement which is preferred according to one or more aspects of the present invention, the app 120 may be configured into different forms. For example, a first app may be customized for the end-user/tenant, and may be provided with a tracking URL or a redirection URL that is unique for end-users/tenants (or a "tenant app"). A second app may be customized for the property manager, and may be provided with a tracking URL or a redirection URL that is unique for property managers (or a "property manager app"). A third app may be customized for the service provider, and may be provided with a tracking URL or a redirection URL that is unique for service providers (or a "service provider app"). A fourth app may be customized for the material source or supplier, and may be provided with a tracking URL or a redirection URL that is unique for material sources or suppliers (or a "material source app" or "supplier app"). A fifth app may be customized for the merchant, and may be provided with a tracking URL or a redirection URL that is unique for merchants or suppliers of various goods for example (or a "merchant app").
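
A hypothetical configuration sketch of the per-role tracking or redirection URLs described above follows; the URLs, role keys, and store parameters are placeholders, not real endpoints.

```python
# Hypothetical per-role redirection URLs; the endpoints and query parameters are
# placeholders, not real tracking or store URLs.
ROLE_REDIRECTS = {
    "tenant":           "https://example.com/get/tenant-app",
    "property_manager": "https://example.com/get/property-manager-app",
    "service_provider": "https://example.com/get/service-provider-app",
    "material_source":  "https://example.com/get/supplier-app",
    "merchant":         "https://example.com/get/merchant-app",
}


def resolve_download_url(role: str, platform: str) -> str:
    """Resolve a role-specific redirection URL to a platform store listing (illustrative)."""
    base = ROLE_REDIRECTS[role]
    store = "app-store" if platform == "ios" else "google-play"
    return f"{base}?store={store}"


if __name__ == "__main__":
    print(resolve_download_url("tenant", "android"))
```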

The functionalities of each of the first, second, third, fourth, and fifth apps may be invoked and utilized by the tenant, property manager, service provider, material source, and merchant, respectively, from any suitable server-side points, e.g., via any suitable APIs (application programming interfaces) on the side of the server computing device 102. It is to be understood and appreciated that any usage by any third-party entity (an entity which may or may not be distinct from herein disclosed entities and/or parties) of such APIs, which are anchored to system 100 of the present invention, may be covered by the accompanying claims thereof.

A correlation module (not illustrated) may be configured and included in the server computing device 102 to "join" various data generated generally through the content manager or app 120 and more specifically through the tenant app, property manager app, service provider app, material source app, and merchant app, and to consequently create unified information ready for publication, analysis, reporting, modelling, and visualization whenever possible and applicable. The herein disclosed aspects and implementations of the present invention may include a B2B (business-to-business) module operable to manage at least one community consisting of two or more business entities or B2B partners (of the same or different nature/type) which or who are interacting with one another through the present system 100 acting as a central hub for exchanges of communications and data through a plurality of B2B based software applications. For example, a parking management module (designated by reference identifier "PI" in Figure 1-I) may be created in a custom-specific or preconfigured manner for a particular asset such as a building that operates as a shopping mall. The manner in which this particular asset is managed may be in any form of arrangement such as, by way of examples, contracts and agreements.
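
The sketch below illustrates, under assumed field names, how such a correlation module might "join" records generated through the different apps into unified, per-transaction information; it is not the actual correlation module of the system 100.

```python
# A sketch of correlating per-app records that share a transaction identifier
# into one unified record; field names are assumptions for illustration.
from collections import defaultdict


def correlate(*app_feeds: list[dict]) -> dict[str, dict]:
    """Merge records from several app feeds keyed by a shared transaction_id."""
    unified: dict[str, dict] = defaultdict(dict)
    for feed in app_feeds:
        for record in feed:
            unified[record["transaction_id"]].update(
                {k: v for k, v in record.items() if k != "transaction_id"}
            )
    return dict(unified)


if __name__ == "__main__":
    tenant_feed = [{"transaction_id": "T-1001", "issue": "leaking pipe", "unit": "12F"}]
    manager_feed = [{"transaction_id": "T-1001", "status": "confirmed"}]
    provider_feed = [{"transaction_id": "T-1001", "plumber": "ACME Plumbing", "eta": "2 days"}]
    print(correlate(tenant_feed, manager_feed, provider_feed))
```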

In some preferred implementations, the system 100 includes a customer relationship module (not illustrated) which provides case management fields and features that can be adapted for use in managing any kind of asset, including real property. The herein disclosed implementations may enable the end-user to manage his or her profile, to see what the materials, services, schedules, and workflows look like from the comfort of his or her current location such as home, to look at user statistics and other data, and to manage notification settings. The property manager can utilize the modules of the system 100 of the present invention to manage the system's own customer relationship management (CRM) databases via email, and to manage various transactions including service requests using Internet-based tools.

In one preferred implementation, aside from searching and viewing of various information related to the fifth transaction between the merchant and any one or more of the tenant, property manager, service provider, and material source, the merchant app may mainly include: (a) a purchasing interface configured to provide one or more options for a user (which may be any one of the end-user or tenant, the property manager, the service provider, and the material source) of the merchant app to purchase goods and/or services from the merchant; (b) a listing of goods and/or services and their respective specifications and information; and (c) a digital wallet plugin configured to initiate a cross device digital or electronic payment for the fifth transaction.

In one preferred implementation, aside from searching and viewing of various information related to the fourth transaction between the material source and any one or more of the tenant, property manager, service provider, and merchant, the material source app may mainly include: (a) a purchasing interface configured to provide one or more options for a user (which may be any one of the tenant, the property manager, the service provider, and the merchant) of the material source app to purchase materials from the material source; (b) a listing of materials and their respective specifications; and (c) a digital wallet plugin configured to initiate a cross device digital or electronic payment for the fourth transaction.

In one preferred implementation, aside from searching and viewing of various information related to the third transaction between the service provider and any one or more of the tenant, property manager, material source, and merchant, the service provider app may mainly include: (a) a purchasing interface configured to provide one or more options for a user (which may be any one of the tenant, the property manager, the material source, and the merchant) of the service provider app to purchase services from the service provider; (b) a listing of services and their respective specifications; and (c) a digital wallet plugin configured to initiate a cross device digital or electronic payment for the third transaction.
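
As an illustration of the purchasing interface and digital wallet plugin that the merchant, material source, and service provider apps are described as including, the following sketch uses an invented wallet API and listing structure; none of these names correspond to a real payment SDK or to the disclosed plugin.

```python
# An invented purchasing interface and digital wallet plugin; this is not a real
# payment SDK, only an illustration of the components listed above.
from dataclasses import dataclass


@dataclass
class Listing:
    sku: str
    description: str
    price: float


class DigitalWalletPlugin:
    def initiate_payment(self, payer: str, payee: str, amount: float) -> str:
        # A real deployment would hand off to an actual payment provider here.
        return f"payment-session:{payer}->{payee}:{amount:.2f}"


class PurchasingInterface:
    def __init__(self, listings: list[Listing], wallet: DigitalWalletPlugin) -> None:
        self.listings = {item.sku: item for item in listings}
        self.wallet = wallet

    def purchase(self, sku: str, buyer: str, seller: str) -> str:
        """Look up a listing and initiate payment for it through the wallet plugin."""
        return self.wallet.initiate_payment(buyer, seller, self.listings[sku].price)


if __name__ == "__main__":
    shop = PurchasingInterface(
        [Listing("PIPE-15", "15 mm PVC pipe, 3 m", 4.50)],
        DigitalWalletPlugin(),
    )
    print(shop.purchase("PIPE-15", buyer="tenant_12F", seller="material_source_01"))
```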

In one preferred implementation, aside from searching and viewing of various information related to the second transaction between the property manager and any one or more of the tenant, service provider, material source, and merchant, the property manager app may mainly include: (a) an interface configured to receive property issues and service requests data associated with the issues and requests of the tenant, wherein the property issues and requests data include their respective statuses generated in response to the determination that the scheduled works associated with such issues and requests are resolved and/or concluded; (b) an interface configured to receive, confirm, process, and terminate transactions of various kinds; and (c) an interface configured to analyze and/or visualize the property issues and requests data. By way of example, Figure 19 shows a user interface displaying graphical representations of exemplary asset management data consistent with one or more aspects of the present invention through which an overview of issues (or end-users’ requests) in several assets or buildings is graphically provided for users.

In one preferred implementation, the property manager app may include various management modules such as those which are illustrated in Figure 1-1, which are suitable for use in the workflow-based asset management of the present invention, consistent with one or more aspects of the present invention as disclosed herein, and which are described in greater detail below.

Through the first module, which is the facility management module “F,” the following can be managed by the property manager: property locations, property identifiers, property descriptions, building permits, electrical permits, sanitary permits, architectural design plans, structural design plans, clearances from various government agencies, bill of materials and specifications, tax declarations, tax payment receipts, titles, transfer certificates of titles, property purchase information, property lease information, contracts of sale, contracts of lease, service contract information, contract expirations, maintenance schedules, tagging of incoming delivery, insurance policies, repairs, renovations, security rules, house rules, violations of rules, and the like. In Figure 28, there is shown a user interface displaying an exemplary calendar page suitable for use in one or more implementations of the present invention. The calendar page may be suitable for use in facility booking.

Through the second module, which is the tenant management module “T,” the following can be managed by the property manager: tenants’ full legal names, tenants’ copies of government-issued identification cards (e.g., social security IDs, driver’s licenses, and passports), property purchase contracts, property lease contracts, information on the properties occupied by the tenants, tenants’ contact information including mobile phone numbers and e-mail addresses, proofs of income, credit reports, credit scores, tenants’ modes of payment, and the like.

Through the third module, which is the billing management module “B,” the following can be managed by the property manager: meter information associated with electric energy consumption, meter information associated with water consumption, meter information associated with network bandwidth usage, invoices for tenants occupying properties under lease terms, invoices for tenants who have job orders, any financial transaction record associated with the properties, and the like.

Through the fourth module, which is the issue management module “I,” the following can be managed by the property manager: issue creation (including, for example, issue name, issue description, tenants affected, and how the tenant is affected), issue validation, issue updating, issue termination, and variables and constants associated with issues such as parameters identified to possibly resolve the issues and fulfil requests, parameters that actually resolved similar issues in the past, which structural parts are affected, timeline, and the like. Figure 29 shows a user interface displaying an exemplary report submission page suitable for use in one or more implementations of the present invention. The property manager may use the information provided by the tenant through this report submission page in dealing with issues that come with the report submitted by the tenant.

Through the fifth module, which is the parking management module “PI,” the following can be managed by the property manager: the registered owner of each pre-owned parking space, the registered lessee of each leased parking space, available parking space updating, maintenance of parking areas and/or spaces, RFID (radio frequency identification) tag identifiers, RFID tagging, vehicle speed monitoring if and when warranted by regulations formed by property associations, and the like.

Through the sixth module, which is the people management module “P2,” the following can be managed by the property manager: contracts of employees and staff, salary values associated with employees and staff, job descriptions of employees and staff, task monitoring, personal data of employees and staff, timekeeping system monitoring, violations of house and/or security rules by employees and staff, attendance monitoring, job delegation according to priorities, and the like.

Through the seventh module, which is the visitor/packages management module “V,” the following can be managed by the property manager: primary visitor information, such as name, purpose of visit, and name of the tenant to be visited, and package information, such as tracking number, courier service, and package type. The primary visitor information may be pre-registered by the tenant using the asset end-user computing device 104. In this way, the visitor or the package is already considered pre-cleared upon, respectively, entering or reaching the premises of the asset, a condominium for example, in which the tenant or the person to be visited resides or is located. In Figure 26, there is shown a set of user interfaces displaying an exemplary management of issues, visitors, and incoming packages through the system 100 illustrated in Figure 1 and consistent with one or more aspects of the present invention and one or more implementations thereof.

Figure 30 shows a user interface displaying an exemplary visitor information page that is suitable for use in one or more implementations of the present invention, while Figure 31 shows a user interface displaying an exemplary incoming package page that is suitable for use in one or more implementations of the present invention. In Figure 30, the tenant may use the tenant app to supply the visitor information which, as may be illustrated, may include the visitor’s full name, contact number, address, and purpose of visit, all of which have corresponding date and time stamps once they are submitted by the tenant. In Figure 31, the tenant may use the tenant app to supply the package information which, as may be illustrated, may contain the sender’s name, package name, package details, delivery flow (incoming or outgoing), and delivery type (door-to-door or to the lobby of an asset such as a condominium), all of which have corresponding date and time stamps once they are submitted by the tenant.

Auxiliary visitor information may be collected for analysis purposes, and this information may include, by way of examples, e-mail address, home phone number, mobile phone number, work phone number, profession, income level, age, family size, age of children, and the like, the collection and storage of which must be in compliance with any national laws which protect personal information through various data privacy safeguards (e.g., the Philippine Republic Act No. 10173, otherwise known as the Data Privacy Act of 2012, Article 8 of the European Convention on Human Rights, and the United States’ Children's Online Privacy Protection Act of 1998).

Through the eighth module, which is the payment management module “P3,” the following can be managed by the property manager: the initiation, implementation, and completion of a payment transaction process according to a total price of goods and/or services as requested by the tenant, as received and confirmed by the property manager, and as fulfilled by any one or more of the service provider, material source, and merchant, the receipt of a payment transaction authorization, and the responsive issuance of a digital signal of a payment confirmation to confirm that payment has been successfully completed. Revenues derived from payments may be processed in accordance with various revenue models that may be applicable to the herein disclosed system 100 (e.g., commissions from sale and lease of units, whether residential or commercial, revenue shares from the service providers, service fees from usage of the herein disclosed platform by other software vendors through APIs, and franchise fees).

In some implementations, the herein disclosed “apps” and “modules” may generally refer to software applications and may specifically refer to a native application, a desktop-based application, a cloud-based application, an augmented reality application, or a kiosk-based application, and may further refer to an executable computer software program or software application program that enables services and content items associated with one or more implementations of the herein disclosed present invention and suitable for use in collecting and managing various information.

The content manager or the “app” 120 in the server computing device 102, when executed by the processor 110 from the memory 112, is configured to categorize the content items (i.e., including any one or more of the first, second, third, fourth, and fifth content items) according to a pre-determined parameter value arranged in the server computing device 102, and to make the items searchable according to a user-specified parameter value transmitted from the asset end-user computing device 104. The user-specified parameter value may correspond to a "keyword" that can be used to search for a specific content item in the database system 108.

The pre-determined parameter value arranged in the server computing device 102 and in respect of which the content items are categorized may be any of service provider, material source, or merchant name information (e.g., ABC Plumbing Services, XYZ Electrical Supplies and General Merchandise, JRS Animal Clinic), business transaction information (e.g., country, region, city, street, zip code), service provider, material source, or merchant type information (e.g., manufacturing, logistics, information and communication technology, house builders, publishing), service provider, material source, or merchant category information (e.g., fast-food restaurants, animal clinics, law offices, real estate), and service provider, material source, or merchant sub-category information (e.g., Japanese cuisine, Chinese cuisine, Mediterranean cuisine, Thai cuisine).
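
By way of illustration only, and not as the claimed implementation, the categorization and keyword search behaviour of the content manager 120 described above may be sketched in a few lines of Python; the class and field names below are assumptions introduced solely for this example.

from dataclasses import dataclass

@dataclass
class ContentItem:
    name: str            # e.g., "ABC Plumbing Services"
    category: str        # pre-determined parameter value, e.g., "plumbing"
    description: str = ""

class ContentManagerSketch:
    def __init__(self):
        self._by_category = {}                     # category -> list of items

    def categorize(self, item: ContentItem) -> None:
        self._by_category.setdefault(item.category, []).append(item)

    def search(self, keyword: str) -> list:
        """Return items whose name or description contains the user-specified keyword."""
        kw = keyword.lower()
        return [item for items in self._by_category.values() for item in items
                if kw in item.name.lower() or kw in item.description.lower()]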

It is to be understood and appreciated that one or more portions of aspects and implementations of the present invention may be adapted and suitable for use in various assets and facilities such as condominium units, townhouses, clubs (e.g., country clubs, sports clubs, dining clubs), shopping malls, subdivision located houses, rail transport systems, road transport systems, air transport systems, office buildings or corporate office buildings, warehouse, schools, hotels, factories, manufacturing facilities, hospitals, church buildings, leisure parks, parking buildings, government buildings, government facilities such as laboratories and research centers, restaurants, chains of restaurants, industrial facilities, district sites, training facilities, libraries, storage facilities, museums, stores, chains of stores, and the like.

The workflow manager 122 in the server computing device 102, when executed by the processor 110 from the memory system 112, is preferably configured to firstly store the plurality of workflow rules 176 (as described in greater detail in Figure 1-C of the drawings) for processing the first set of transaction information, the second set of transaction information, the third set of transaction information, the fourth set of transaction information, and the fifth set of transaction information. In one or more implementations, the workflow manager 122 may also be configured to utilize a plurality of variables within one or more workflow rules 176 of the plurality of workflow rules 176. Each variable of the plurality of variables may correspond to a characteristic of and/or attribute associated with any one, or any one or more, of the first set of transaction information, the second set of transaction information, the third set of transaction information, the fourth set of transaction information, and the fifth set of transaction information.

The herein disclosed plurality of variables may include, by way of examples and not by way of limitation, and in some implementations, material cost “A” (e.g., 150 U.S. Dollars for a countertop sink), material cost “B” (e.g., 10 U.S. Dollars for a pipe), material cost “C” (e.g., 30 U.S. Dollars for a faucet), ..., material cost “N,” labor cost “A” (e.g., 20 U.S. Dollars for the installation of the countertop sink), labor cost “B” (e.g., 10 U.S. Dollars for the installation of the pipe), labor cost “C” (e.g., 30 U.S. Dollars for the faucet replacement), ..., labor cost “N,” input data “A” (e.g., service request type), input data “B” (e.g., unit number), input data “C” (e.g., service request description), ..., input data “N,” payment instrument “A” (e.g., credit card), payment instrument “B” (e.g., debit card), payment instrument “C” (e.g., prepaid card), ..., payment instrument “N,” and tenant feedback “A” (e.g., satisfied), tenant feedback “B” (e.g., unsatisfied), tenant feedback “C” (e.g., custom message for the feedback), ..., tenant feedback “N.” The feedback may be in the form of a customizable message, a qualitative rating, or a quantitative rating. Any one of the end-user, the property manager, the service provider, the material source, and the merchant may be provided with a rating.

In one or more implementations, the workflow manager 122 may also be configured to define at least one flow combining constants and the plurality of variables into the one or more workflow rules 176. The herein disclosed constants may include, by way of examples and not by way of limitation, transaction date, transaction time, currency formats, the flow rate reading for water flow from a digital meter, and the standard specifications of the countertop sink and pipe, among others.
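
As a non-limiting sketch of how constants and variables might be combined into a workflow rule, the Python fragment below prices a service request from variable material and labor costs under a constant currency format; the function and field names are illustrative assumptions only and do not form part of the claimed rules 176.

CURRENCY = "USD"                                   # constant: currency format

def quote_service_request(material_costs: dict, labor_costs: dict) -> str:
    """Combine variable costs into a quoted total expressed in the constant currency."""
    total = sum(material_costs.values()) + sum(labor_costs.values())
    return f"{total:.2f} {CURRENCY}"

# Example: countertop sink replacement priced from the exemplary variables above.
print(quote_service_request(
    material_costs={"countertop_sink": 150.0, "pipe": 10.0, "faucet": 30.0},
    labor_costs={"sink_installation": 20.0, "pipe_installation": 10.0}))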

In one or more implementations, the workflow manager 122 may also be configured to schedule at least one work based on the defined at least one flow. Preferably, the scheduled at least one work includes workloads, priorities, and assignments. To avail of all of these features, the tenant must be registered with the herein disclosed system and associated subsystems, including the content manager or app 120, and use his or her registration credentials to log in to the system. For example, there is shown in Figure 9 a user interface which displays an exemplary login page from the content manager or app 120, and in Figure 10 a user interface which displays an exemplary form to be filled with account details, both of which exemplary interfaces are suitable for use in one or more implementations of the present invention.

As used herein, the term “workflow” may refer to a combination of forms and/or fields that represent one or more units of a work and/or a set of activities to be performed to generally initiate and fulfill a transaction such as the replacement of a sink structure in a kitchen. As used herein, the term “flow” may refer to a process indicative of the entry of and/or change in the information contained in the forms and/or fields, and/or to a movement of one or more activities associated with the set of activities within the process. The forms, fields, and activities are preferably associated with one another for one particular workflow into which the variables and constants are combined. In some implementations, the flow may be a tree of pre-determined processes that a property manager manipulates to cause a transaction, such as a sink installation request, to be initiated, processed, and eventually fulfilled. In Figure 21, there is shown a set of user interfaces displaying an exemplary management of such activities doable through the system 100 illustrated in Figure 1 and consistent with one or more aspects of the present invention.

Exemplary information associated with the herein disclosed work and flow are depicted in Figures 11 and 12 which are user interfaces collectively displaying an exemplary dashboard suitable for use in one or more implementations of the present invention. The interfaces in Figures 11 and 12 show, specifically and respectively, the services and facilities that can be booked by the tenant and a drop-down menu that can be navigated by the tenant to avail various services. The menu, for example, may include active unit, order history, issue logs, my visitors, my packages, track, and news.

As used herein, the term “workloads” may refer to the volumes of transactions that the property manager has to manage and that occur over some intervals of time, e.g., from one scheduling interval to another. The higher the occurrence of transactions during such intervals of time, the heavier the workloads are considered by the system. This is when priorities become important. The property manager may determine priorities, which may refer to the relative order of transactions that the property manager receives within a certain period of time.

The above-described priority-based scheduling of the work to be performed for the tenant and to be managed by the property manager may apply various queueing disciplines such as “first-come, first-served,” “shortest-job-first,” and “earliest-deadline-first,” to name a few. On a first-come, first-served basis, the list of scheduled works may be sorted in the order of arrival of transaction requests. In shortest-job-first scheduling, the list of scheduled works may be sorted by the service times of waiting jobs. In earliest-deadline-first scheduling, the list of scheduled works may be sorted based on their deadlines. These scheduling means may be applied by the property manager for each set of transactions.
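
The three queueing disciplines named above can be expressed compactly; the following Python sketch, with assumed field names, simply sorts a list of scheduled works under each discipline and is offered only as an illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ScheduledWork:
    request_id: str
    arrived_at: datetime            # order of arrival of the transaction request
    service_time: timedelta         # estimated service time of the waiting job
    deadline: datetime              # deadline of the scheduled work

def first_come_first_served(works):
    return sorted(works, key=lambda w: w.arrived_at)

def shortest_job_first(works):
    return sorted(works, key=lambda w: w.service_time)

def earliest_deadline_first(works):
    return sorted(works, key=lambda w: w.deadline)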

As used herein, the term “assignments” may generally refer to the process of assigning one or more competent personnel and/or service provider to the works to be performed and may specifically refer to the relationships between the personnel availabilities and the schedules of the works to be performed for tenants.

In some implementations of the present invention, the tenant’s experience may be fulfilled when the building managers (from in-house) attend to the in-scope services as may be required by the tenants or property owners, or when the service providers (from a pool of service providers) attend to the out-of-scope services as may be required by the tenants or property owners.

An exemplary workflow is graphically depicted in Figures 13, 14, 15, 16, and 17, which show user interfaces collectively displaying an exemplary processing of a transaction such as a real estate service request suitable for use in one or more implementations of the present invention. In Figure 13, which is substantially the same as Figure 11, the services and facilities are displayed as main categories of objects and items associated with one or more aspects of the present invention. The tenant may browse through these services and facilities and their respective embedded descriptions or details. One of the illustrated selectable options is “electrical service.” In Figure 20, there is shown an exemplary user interface displaying various human-readable information relating to asset management, like property descriptions, consistent with one or more aspects of the present invention. In Figure 14, details associated with the electrical services, as selected by the tenant in Figure 13, are displayed. These details may include, by way of examples, switch and plug repair, lighting repairs, troubleshooting, and tripped breakers. The interface has the provision of enabling the tenant to provide more information that is not included in the aforementioned details, which act as subcategories. Request date and time are likewise depicted in Figure 14, along with a determination as to whether the issue to be reported or transaction to be requested is an emergency situation or requires immediate attention from the property management. It is to be understood and appreciated, however, that apart from emergency related needs, personal needs may also be covered by the app 120 of the present invention. These personal needs may include, but are not limited to, haircuts, massages, manicures, and pedicures. These personal needs may be provided through the herein disclosed workflow.

In Figure 15, the cost associated with the switch and plug repair work or subcategory under the electrical service category is displayed in response to the selection of the same subcategory in Figure 14 by the tenant. Specifically, the app 120 displays a confirmation page for the tenant to confirm the service he or she selected. In Figure 16, the app 120 displays a dialogue box containing the phrase “requesting services” in response to the affirmative confirmation of the tenant through the interface that is shown in Figure 15. In Figure 16, a successful transaction is also depicted.

Advancing to Figure 18, there is shown a user interface which displays exemplary status information based on the successful transaction illustrated in Figure 16, and this interface is suitable for use in one or more implementations of the present invention. For each of the main categories (such as electrical and plumbing), the status information is displayed and grouped into three, namely: “scheduling,” “servicing,” and “done.” This status information, among others, may be updated as the scheduled work with which it is associated develops towards fulfilment or completion.
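
A minimal sketch of the three-stage status progression described above is given below; the transition map is an assumption introduced for illustration and not a prescribed state machine of the disclosed system.

STATUS_FLOW = {"scheduling": "servicing", "servicing": "done"}

def advance_status(current: str) -> str:
    """Move a scheduled work to its next status; "done" is terminal."""
    return STATUS_FLOW.get(current, "done")

assert advance_status("scheduling") == "servicing"
assert advance_status("servicing") == "done"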

The communication channel establisher 124 in the server computing device 102, when executed by the processor 110 from the memory system 112, may be configured to establish a communication channel through which communication sessions can be initiated from the user interface on the asset end-user computing device 104 for enabling data communications to and from the asset end-user computing device 104 based upon a communication protocol that can be selected from a plurality of communication protocols.

The communication sessions may include, by way of examples and not by way of limitation, SMS (short messaging service), chat or instant messaging, VoIP (Voice Over Internet Protocol), e-mail, and the like, effectively allowing a number of users to communicate with one another on a one-on-one, one-on-many, or many-on-many basis and in real-time or near real-time.

In one or more implementations and consistent with one or more aspects of the present invention, the herein disclosed plurality of communication protocols may include a 2G/3G/4G/5G (second/third/fourth/fifth generation of developments in wireless communication technology) network communication provided by mobile network operators, Wi-Fi™, Bluetooth™, and the like.

In one or more implementations, the communication channel establisher 124 may also be arranged to operate, in a unicast or multicast manner, under any of the following communication protocols: (a) TCP/IP or Transmission Control Protocol/Internet Protocol; (b) RTMP or Real Time Messaging Protocol; (c) HTTP or Hypertext Transfer Protocol; (d) RTSP or Real Time Streaming Protocol; and (e) UDP or User Datagram Protocol.

Through the communication channel establisher 124, the user or property owner may also be enabled to have "one-touch" processing of transactions with which the content items are generally associated directly from the user interface of the asset end-user computing device 104, to "tap to select" any of the herein disclosed entities, to "tap to visit" any of the herein disclosed entities' websites, or to "tap to obtain directions" to the physical location of any of the herein disclosed entities, as illustrated.

The system 100 may also include a service provider computing device 130 that can be used by a third party application service provider. The service provider computing device 130 is in communication with the server computing device 102 over the communication network 106, and may be characterized by computing devices operable by professional and/or skilled workers such as, by way of examples and not by way of limitation, plumbers, painters, cleaners, electricians, interior designers, carpenters, gardeners, mechanics, locksmiths, automobile drivers, healthcare professionals, therapists, personal assistants, baby sitters, and pest terminators. It is to be understood and appreciated that the herein disclosed service provider computing device 130 may, in addition or alternatively, be characterized by a social network server computing device, an e-commerce server computing device, a publisher server computing device, a payment gateway computer, and the like.

In some implementations, the server computing device 102 may be provided with access to data or software services maintained in the service provider computing device 130 through any suitable API. Software services or programs may be either installed, loaded, or otherwise operated on the server computing device 102 in such a manner that third party application data are shared by and between the server computing device 102 and the service provider computing device 130.

Consistent with one or more implementations and first system aspect of the present invention, the system 100 for publishing digital content over a network-based environment based on workflow-based asset management essentially comprises (a) the server computing device 102 including the database system 108 into and from which digital content items can be stored and retrieved, respectively; and (b) the asset end-user computing device 104 in communication with the server computing device 102 via the communication network 106 and operative to receive and publish the content items from the server computing device 102.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the content items include at least the first content item associated with the asset end-user computing device 104, the second content item associated with the asset manager computing device 126, the third content item associated with the service provider computing device 130, the fourth content item associated with the material source computing device 128, and the fifth content item associated with the merchant computing device 132.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 is configured to determine and store in the database system 108 the first, second, third, fourth, and fifth sets of transaction information associated with the first, second, third, fourth, and fifth content items, respectively. In the first system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 also includes the content publisher 118 which is configured to publish through the user interface on the asset end-user computing device 104 the second, third, fourth, and fifth content items in an order dependent on a plurality of workflow rules defined by the first content item as input data, the details of which shall be illustrated below. The input data may alternatively be provided by any one or more of the property manager, material source, service provider, and merchant, such that the role of the end-user may be substituted by the same. For example, a material request may originate from the property manager, and in this case the input data associated with the material request can serve as a trigger for the workflow manager 122 to define at least one flow and schedule a work based on this defined flow. In another example, the merchant may automatically offer items to any one of the property manager and the end-user through the app 120, and the following of the links to these items by the property manager or the end-user may trigger a workflow to be initiated.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 may further include at least the content manager 120 and the workflow manager 122 in operative communication with one another. The content manager 120 may be in the form of a downloadable application, or “app” for brevity. The “app” 120 may refer to software characterized by any suitable combination of one or more computing modules, programs, processes, workloads, threads, and/or a set of computing instructions that are executable by a computing system and that can interact with hardware components via procedures and interfaces supported by computer operating systems. The terms “content manager” and “app” may be used interchangeably throughout the present disclosure.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the content manager 120 is configured (a) to categorize the content items according to at least one pre-determined parameter value arranged in the server computing device 102, (b) to configure the content items to be searchable according to at least one user-specified parameter value transmitted from the asset end-user computing device 104, and (c) to present the content items on the user interface on the asset end-user computing device 104.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 is configured (a) to store the plurality of workflow rules for processing the first, second, third, fourth, and fifth sets of transaction information, (b) to utilize a plurality of variables within one or more workflow rules of the plurality of workflow rules, each variable of the plurality of variables corresponds to a characteristic of one of the first, second, third, fourth, and fifth sets of transaction information, (c) to define at least one flow combining constants and the plurality of variables into the one or more workflow rules, and (d) to schedule at least one work based on the defined at least one flow. The scheduled at least one work may include workloads, priorities, and assignments.

In the first system aspect of the present invention, the server computing device 102 may also comprise the communication channel establisher 124 configured to establish at least one communication channel through which one or more communication sessions can be initiated from the user interface for enabling data communications to and from the asset end-user computing device 104 based upon one or more of a plurality of communication protocols. In the first system aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 may comprise a work management component 134 for executing work input and work output units 136, 138 defining the scheduled at least one work. References to these components are shown in Figure 1-A which is a block diagram illustrating further exemplary components of the server computing device 102 suitable for use in various aspects of the present invention as may be described herein.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 may comprise a flow management component 140 for executing search and circulation units 142, 144 defining, characterizing, and creating the scheduled at least one work.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 preferably further includes a storage manager 146 operably connected to the workflow manager 122. The storage manager 146 and the memory system 112 may be the same or different components of the server computing device 102. The storage manager 146 cooperates with the memory system 112 if they are arranged as different components, according to some implementations.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the storage manager 146 may be configured to store at least a work table 148, a flow table 150, a pointer table 152, a rules table 154, a transaction information table 156, and a content item table 158, any one or more of which are used by the workflow manager 122 to schedule the at least one work based on the input data.
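
Purely as an illustrative sketch, and without limiting how the tables are actually laid out, the Python fragment below shows how the workflow manager 122 might join a work table, a flow table, and a rules table to recover the rule behind a scheduled work; the column names and contents are assumptions introduced only for this example.

work_table = [{"work_id": 1, "flow_id": 10, "priority": "high"}]
flow_table = [{"flow_id": 10, "rule_id": 100}]
rules_table = [{"rule_id": 100, "description": "replace sink once leak is confirmed"}]

def rules_for_work(work_id: int) -> list:
    """Follow work -> flow -> rule references across the three tables."""
    flows = {f["flow_id"]: f for f in flow_table}
    rules = {r["rule_id"]: r for r in rules_table}
    return [rules[flows[w["flow_id"]]["rule_id"]]
            for w in work_table if w["work_id"] == work_id]

print(rules_for_work(1))    # rule 100 is returned for work 1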

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 may further include a pointer manager 160 operably connected to each of the workflow manager 122 and the storage manager 146, according to some implementations.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the pointer manager 160 is configured to identify the address of each one of the first, second, third, fourth, and fifth content items in the storage manager 146 and/or in the memory system 112 as the case may be.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 may further include a conditional connection manager 162 operably connected to the workflow manager 122, to the storage manager 146, and to the pointer manager 160.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the conditional connection manager 162 is preferably configured to compare the constants and the plurality of variables to define the at least one flow based upon pointers employed by the pointer manager 160. The pointers reference the one or more workflow rules 176, as illustrated in greater detail in Figure 1-C.
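
A hedged sketch of this comparison is shown below: a pointer resolves to a workflow rule whose constant threshold is compared against a variable reading before a flow is defined. The rule contents, variable name, and threshold value are illustrative assumptions only.

workflow_rules = {176: {"variable": "water_flow_rate", "constant_threshold": 8.0}}

def flow_should_be_defined(rule_pointer: int, readings: dict) -> bool:
    """Compare the pointed-to rule's variable reading against its constant threshold."""
    rule = workflow_rules[rule_pointer]
    return readings.get(rule["variable"], 0.0) > rule["constant_threshold"]

print(flow_should_be_defined(176, {"water_flow_rate": 9.5}))    # True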

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 may further be configured to establish a plurality of subroutines. In some implementations, each subroutine of the plurality of subroutines may be represented as a separate logical section within the one or more workflow rules 176 referenced from different locations within the storage unit or memory system 112 of the server computing device 102 of the system 100 of the present invention.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 preferably navigates the one or more workflow rules 176 in response to the input data received by the server computing device 102 from the asset end-user computing device 104 over the data communication network 106 and via any associated, pre-determined communication protocols.

In the first system aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 may cause the content manager 120 to arrange presentation of the content items based on the defined at least one flow and scheduled at least one work. Each of the defined flow and the scheduled work is preferably referenced in the one or more workflow rules 176 of the plurality of workflow rules 176, according to some implementations.

It is noted that the layout and architecture of the tables illustrated in Figure 1-A, which may be utilized in the implementations and aspects of the present invention, may vary depending on design considerations and on the technology platforms in which the herein disclosed system is deployed, and the present invention is not limited in scope in that respect. For example, in Figure 1-G, which shows a block diagram illustrating an exemplary high-level enterprise architecture in accordance with one or more aspects of the present invention, the architecture may include the content manager or app 120 in operative communication with the database system 108 for handling repetitive tasks 164 via the app hosting backend services 166 having a code repository 166-A and an API management component 168. Through the same API management component 168, files and images storage components 112-A from the memory system 112 may be managed by the records management components 170. In some implementations, the files and images may be subjected to and processed by a machine learning and artificial intelligence component 172 to generate any one or both of prediction data and/or recommendation data. The prediction and recommendation data may be stored in the database system 108, and shall be disclosed in greater detail in the ensuing descriptions for Figures 6, 7, and 8.

Figure 1-C is a flow diagram illustrating an exemplary process for achieving workflow-based asset management. The workflow manager 122 accesses the rules table 154 (as shown in Figure 1-A) to cause the workflow rules editor 174 to generate the workflow rules 176 based on the rules which are updated through the workflow rules editor 174. In some implementations, the workflow rules editor 174 may be configured to receive preferred workflow rules 176 from the property manager. For example, in a case where the property manager receives a transaction request from the tenant to replace an air-conditioning unit that is no longer functioning, the property manager may configure a rule that if the damage causing the air-conditioning unit to stop functioning is due to natural wear-and-tear and/or negligence or mishandling by the tenant, upon assessment by a competent professional or expert, then the cost of replacement is on the tenant. On the other hand, if the damage is caused by a previous repair performed by personnel approved by the property manager, then the cost of replacement is on the property management.
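
The air-conditioning replacement rule described above might be entered through the workflow rules editor 174 in a form equivalent to the following sketch; the cause labels and function name are assumptions used only to illustrate the conditional logic, not the claimed rule format.

def replacement_cost_bearer(damage_cause: str) -> str:
    """Decide who bears the replacement cost based on the assessed cause of damage."""
    if damage_cause in ("natural_wear_and_tear", "tenant_negligence", "tenant_mishandling"):
        return "tenant"
    if damage_cause == "previous_repair_by_approved_personnel":
        return "property_management"
    return "pending_expert_assessment"

print(replacement_cost_bearer("tenant_negligence"))    # tenant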

In some implementations, the workflow rules editor 174 may also be configured to receive continuous input data based on a pre-designed workflow. The workflow rules 176 may be editable or configurable using the workflow rules editor 174 in such a manner that the resulting rules dataset is updated accordingly and correspondingly. A workflow interface 178 may be coupled to the workflow rules 176 so as to enable two-way communication of the rules dataset which corresponds to the workflow rules 176 for processing, normalization, analysis, and integration.

In one implementation, the workflow interface 178 may be coupled to the work management component 134 for the manipulation of the scheduled work attributes such as the workloads, priorities, and assignments. The work management component 134, in turn, may be configured to interact with the flow management component 140, thereby effectively providing a complete control interface for the workflow to be managed by the herein disclosed implementations and aspects of the present invention.

Alternatively, or additionally, the workflow interface 178 may be coupled to any third party system either directly or through a third party system interface for processing of metadata and integrating the metadata into the workflow rules 176 and the rules table 154. The third party system may be an external system that automatically communicates with the herein disclosed server computing device 102. For example, the third party system, which may also be a built-in system in some preferred implementations, may be a system associated with the service provider, a system associated with the material source, a system associated with the merchant, a system associated with a chatbot, a system associated with an artificial intelligence enabled e-commerce provider, a system associated with QR (Quick Response) code integration, a system associated with RFID (Radio Frequency Identification) integration, systems associated with merchant integrations, a system associated with e-commerce management, a system associated with data security or protection against data breaches or threats, and a system associated with IoT (Internet of Things). In Figure 27, there is shown a user interface displaying an exemplary asset information page, in the form of announcements or infographics, suitable for use in one or more implementations of the present invention.

In some implementations, the workflow interface 178 may be coupled to the conditional connection manager 162, which in turn may be configured to process the workflow rules into which various constants and variables are combined and applied. The conditional connection manager 162 may be arranged such that information stored in the memory system 112 is periodically updated based on pre-determined time intervals and based on the flow determined as a result of the interaction of the flow management component 140 and the conditional connection manager 162, which may be arranged to receive the rules dataset associated with the workflow rules 176 from the rules table 154.

It is to be understood and appreciated that the workflow interface 178 may further be coupled to various computing devices via one or more external technology interfaces (not illustrated). These devices may include, for example, printers, smart-phones, data centers, and the like.

Figure 1-E is a flow diagram which illustrates an exemplary process for determining a work based on a flow illustrated in Figure 1-C. At step 180, the server computing device 102 receives user input data or input data from the tenant. At subsequent steps 182 and 184, respectively, the server computing device 102 searches a flow based on the input data and displays data based on the searched flow. At steps 186 and 188, the server computing device 102 receives further user input data, and consequently searches a further flow based on the further input data. The server computing device 102 may then connect the two flows which were searched based on the input data and the further input data, as shown in step 190. The server computing device 102 responsively determines a work to be performed based on the connected flows, as shown in step 192. If there is another input data item that is distinct from the input data and the further input data, as determined in decision step 194, the server computing device 102 searches another flow based on that other input data, as shown in step 196. A loop may then be formed from step 190 to step 196 in one implementation. If there is no other input data, the process may be terminated, in one implementation.
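
The loop of Figure 1-E can be paraphrased in code as follows; the two helper functions are placeholders standing in for the server-side flow search and work determination steps, not actual components of the disclosed system.

def search_flow(input_data):
    return {"flow_for": input_data}                # placeholder for steps 182, 188, and 196

def determine_work(connected_flows):
    return {"work_from": connected_flows}          # placeholder for steps 190 and 192

def process_inputs(inputs):
    connected = []
    for data in inputs:                            # one searched flow per input data item
        connected.append(search_flow(data))
    return determine_work(connected)               # connect the flows and determine the work

print(process_inputs(["service_request_type", "unit_number"]))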

The limiting factors associated with the physical structure of the asset end-user computing device 104, which may include its processing capabilities, the amount of its memory, and the sensitivity of its user gesture detection mechanisms, do not impose restrictions on operating said utilities since the aspects of the present invention effectively provide a single platform for accessing required information and functionalities by a user or tenant desiring to complete a particular task or work in the most efficient manner possible.

Referring to Figure 2, there is shown a flow diagram illustrating an exemplary routine for use in the system 100 of Figure 1 according to one or more implementations of the present invention. The routine starts at step 200-a when the server computing device 102 receives, processes, and stores in the database system 108 a first content item from a first content source using a first content computer 202 over any suitable communication network, wherein the first content item is accompanied by first transaction information. The first content source, in this example, may be the service provider as described in Figure 1 in accordance with some implementations.

The first transaction information may correspond to the physical location or address information of the first content source which, by way of example, may be a plumbing service provider. The location information parametrically defined by the first transaction information may indicate where the first content source is conducting its business and/or offering its services.

At the same step 200-a, the server computing device 102 receives, processes, and stores in the database system 108 a second content item from a second content source using a second content computer 204 over any suitable communication network, wherein the second content item is accompanied by second transaction information. The second content source, in this example, may be any one of the material sources as described in Figure 1 in accordance with some implementations.

The second transaction information may correspond to the physical address information of the second content source which, by way of example, may be a hardware store selling plumbing supplies and fixtures. The location information defined by the second transaction information may indicate where the second content source is conducting its business and/or offering its goods. The second content source may or may not be related to, or affiliated with, the first content source.

In one preferred implementation, the routine continues at step 200-c when the server computing device 102 receives a content item display request from the asset end-user computing device 104 over any suitable communication network. Receipt of the content item display request by the server computing device 102 from the asset end-user computing device 104 may trigger the routine to move to step 200-e in order to transmit a request for third transaction information associated with the asset end-user computing device 104.

The third transaction information may be based on a set of satellite position signals or an IP (Internet Protocol) address associated with asset end-user computing device 104. At step 200-g of the routine, the server computing device 102 receives, processes and stores in the database system 108 the third transaction information associated with the asset end-user computing device 104 operated by the tenant or end-user.

At step 200-i of the routine, the server computing device 102 arranges, in the database system 108, the first and second content items in an order that is dependent on a relationship defined by the first, second and third sets of transaction information associated with the first content item, the second content item, and the asset end-user computing device 104, respectively.

At step 200-k, the server computing device 102 publishes the arranged first and second content items on the asset end-user computing device 104 over the same communication network through which it previously received the content request from the asset end-user computing device 104, as initiated by the tenant.

The server computing device 102 may also be arranged to transmit data reports to, display a dashboard on, and provide analytics to any one of the content source computers 202, 204, as indicated by step 200-m. In the exemplary, non-limiting process described in Figure 2, the tenant may have direct access to any one of the service providers, the material sources, and the merchants which are either officially accredited with the property management office associated with the property of the tenant or pre-registered or pre-approved by the property manager operating the property manager app.

In this way, the herein disclosed implementations and aspects of the present invention effectively provide pre-qualified, reliable service providers, material sources, and merchants. Once the tenant selects the service provider, material source, and/or merchant that he or she wants to participate in resolving the issues he or she reported, the tenant may forward his or her selection(s) to the property manager through the content manager 120. This may also reduce the consumption of computing resources, as the property manager may save time in searching for such service providers, material sources, and/or merchants, and all the property manager has to do is to approve or deny, schedule a work based on the selection(s) of the tenant, and then monitor the progress of the work from the planning phase up to the fulfilment of the work, according to some implementations.

When a user registers to be able to gain access to and to effect the routine, certain anonymous demographic information is collected from the end-user by the server computing device 102 during the registration process. This demographic information may be aggregated by the server computing device 102 to establish user base characteristics and/or attributes, both of which may constitute the data reports, dashboard, and analytics.
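
One simple way to aggregate such anonymous demographic fields into user base characteristics, offered only as a sketch with assumed field names and values, is a frequency count per field:

from collections import Counter

def user_base_profile(registrations: list, field: str) -> Counter:
    """Count how often each value of one demographic field occurs across registrations."""
    return Counter(r.get(field, "unknown") for r in registrations)

print(user_base_profile(
    [{"age_group": "25-34"}, {"age_group": "25-34"}, {"age_group": "35-44"}],
    "age_group"))    # Counter({'25-34': 2, '35-44': 1})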

Referring to Figure 3, there is shown a flow diagram illustrating an exemplary process for use in the system of Figure 1 according to one or more implementations of the present invention. The process may be carried out by the server computing device 102 as described in Figure 1, and may commence at block 300 wherein the server computing device 102 receives request to publish contents from the asset end-user computing device 104.

At subsequent decision block 302, and consequently, the server computing device 102 determines whether it receives GPS (Global Positioning System) information from the asset end-user computing device 104. It is to be understood that other positioning systems may be utilized in various aspects of the present invention and in their respective implementations.

If the server computing device 102 receives the GPS information from the asset end-user computing device 104, the server computing device 102 immediately proceeds to processing the same GPS information as shown in block 304 and then to mapping a geographical area defined by the GPS information as shown in block 306, according to some implementations.

Otherwise, if the server computing device 102 does not receive the GPS information from the asset end-user computing device, the server computing device 102 may proceed with obtaining the IP (Internet Protocol) address of the asset end-user computing device 104 as shown in block 308 and then to mapping a geographical area defined by the IP address as shown in block 310.

Following any of the previous blocks 306 and 310 is block 312 wherein the server computing device 102 associates the mapped client geographical area to the geographical areas of the content sources. These geographical areas define the locations of the content sources which may include the service provider, the material source, and the merchant, among others.

At subsequent block 314, the server computing device 102 arranges the contents according to distance between the mapped end-user’s or tenant’s geographical area and each of the geographical areas of the content sources in accordance with one or more implementations and one or more aspects of the present invention.

At concluding block 316, the server computing device 102 publishes the arranged contents from the content sources on the user interface or display screen of the asset end-user computing device 104.
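
The Figure 3 flow, from resolving the tenant's location to publishing distance-ordered contents, may be sketched as follows; the haversine distance, the field names, and the sample coordinates are assumptions introduced for this example only.

from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (latitude, longitude) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def arrange_by_distance(tenant_location, content_sources):
    """Order content sources by distance from the mapped tenant location (block 314)."""
    return sorted(content_sources,
                  key=lambda s: haversine_km(tenant_location, s["location"]))

sources = [{"name": "XYZ Electrical Supplies", "location": (14.60, 121.00)},
           {"name": "ABC Plumbing Services", "location": (14.55, 121.02)}]
print([s["name"] for s in arrange_by_distance((14.554, 121.024), sources)])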

Referring now to Figure 4, there is shown a sequence diagram illustrating an exemplary operation for use in the system of Figure 1 according to one or more implementations and various aspects of the present invention. The operation may commence when the asset end-user computing device 104 transmits, preferably over the communication network 106, a content item display request to the content manager 120 in the server computing device 102, as viewed in the direction of the arrow 400. The content items contain transaction information.

In response to the receipt of the display request by the server computing device 102 from the asset end-user computing device 104, the content publisher 118 included in the server computing device 102 performs the arrangement of content items, as viewed in the direction of the arrow 402. The arrangement of the content items may be based on one or more processes described herein.

Upon the arrangement of the content items, the content manager 120 of the server computing device 102 renders the same content items based on workflow processing, as viewed in the direction of the arrow 404. The workflow processing through which the content items are viewable can be rendered on the user interface of the end-user computing device 104 as input control elements, navigational elements, informational elements, and containers, each of which may be implemented differently depending on preferred configurations.

The formatted and arranged content items based on the workflow processing by the workflow manager 122 may be published on the asset end-user computing device 104 by the content publisher 118 of the server computing device 102, as viewed in the direction of the arrow 406.

The publication of the arranged content items in different formats on the user interface of the asset end-user computing device 104 enables the end-user or tenant operating the asset end-user computing device 104 to view and select content items with accuracy.

While the tenant navigates through the user interface of the end-user computing device 104, the tenant may request the display of contact information associated with the source of the content item of his or her interest, as viewed in the direction of the arrow 408. The contact information may be used by the end-user to directly communicate with the source of the content items, which may be a service provider, a material source, or a merchant, or indirectly through any intermediate communication platform that is built into the app 120 (as illustrated in Figure 1). The intermediate communication platform may include a chat program, an e-mail program, an SMS program, a web log program, a social media messaging program, a news group messaging program, a voice chat program, a video conference program, a phone calling program, or a VoIP (Voice over Internet Protocol) telephony program.

Once a communication session has been started (i.e., initialized) and then ended (i.e., terminated) by the end-user or tenant through the communication channel establisher 124, as viewed in the direction of the arrow 410, the content item associated with the communication session may be displayed again on the asset end-user computing device 104 of the system 100.

If the end-user desires to continue searching for other sources (perhaps because he or she is not satisfied with the first source he or she previously communicated with), the content manager 120 may be used again by the end-user for searching and communication purposes, as viewed in the direction of the arrow 412.

It is now apparent that aspects of the present invention provide a single platform for enabling completion of a particular sub-work (such as finding a service provider like a plumber who can fix broken pipes or like an electrician who can check on faulty wirings and electrical connections) involving operation of multiple applications without delay or interruption that can be caused by physical limitations of a miniaturized computer (such as the commonly small screen and limited memory capacity of the asset end-user computing device).

Referring to Figure 5, there is shown an exemplary hardware architecture suitable for use in the system of Figure 1 according to one or more implementations of the present invention. The hardware architecture may include a system bus 500 that enables communication of a central processing unit 502, a main memory 504 containing an operating system, routines, and computer-executable instructions, and a storage interface 506 for storing an operating system, routines, and instructions. In one or more preferred implementations, the system bus 500 may also enable communication of an external disk drive 508, an input/output controller 510 connected with a keyboard 512, a pointing device 514, an audio device 516, and a microphone 518, a display controller 520 connected with a display screen 522, and a network interface 524 for enabling data communication with other devices over the communication network 106.

In some implementations, one or more aspects directed to publishing of digital content of the present invention may be particularly useful in the field of advertising. When a business entity advertises on the platform associated with the system 100 and methods of the present invention, that business entity has options on how it may advertise its goods and/or services.

On one hand, the business entity may advertise using one or more aspects of the present invention for free. This "free advertising mechanism," however, may come with limited advertising features. Such features may include, by way of examples: (a) display of the advertising images and business names with limited window sizes; (b) the "advertising images" are not embedded with links to the websites of the advertisers thereby restricting user interactions and interactional tracking; and (c) the exact locations or addresses of the business entities acting as advertisers are undisclosed to the end-users.

On the other end of the scale, the business entity— which may be any of the service provider, the material source, and the merchant— may advertise using the one or more aspects of the present invention for a fee. As opposed to the optional "free advertising mechanism," the "paid advertising mechanism" may come with added-value features, functions, abilities and metrics for the advertising entities. Such features may include (i) selecting advertising image size displayable on the user interface of the asset end-user computing device 104; (ii) providing content-based advertising based on various metrics and parameters; and (iii) tracking advertisement impressions, among many others known in the field.

To access one or more features of the one or more aspects of the present invention, its advertising program must first load on the mobile or web application desktop screen, ensuring that advertising impressions are constantly generated and logged on the back-end for the benefit of advertisers before end-users are able to use the features of the content manager 120.

Aside from the core content item search function, aspects of the present invention may also be arranged to provide end-users with additional features designed to maintain or increase engagement levels, including games, puzzles, and other interactive multimedia tools.

A second system aspect of the present invention is directed to a system for publishing digital content over a network-based environment based on prediction model based asset management. The second system aspect of the present invention mainly includes (a) a server computing device 102 including a database system 108 into and from which digital content items can be stored and retrieved, respectively, and (b) an asset end-user computing device 104 in communication with the server computing device 102 via a communication network 106 and operative to receive and publish the content items from the server computing device 102.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the content items include at least a first content item associated with an asset end-user computing device 104, a second content item associated with an asset manager computing device 126, a third content item associated with a service provider computing device 130, a fourth content item associated with a material source computing device 128, and a fifth content item associated with a merchant computing device 132.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 is configured to determine and store in the database system 108 first, second, third, fourth, and fifth sets of transaction information associated with the first, second, third, fourth, and fifth content items, respectively.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 includes a content publisher 118 that is configured to publish through a user interface or display screen on the asset end-user computing device 104 the second, third, fourth, and fifth content items based on the first content item as input data from the end-user or the tenant.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 further includes a content manager 120, a prediction model manager 600, and a communication channel establisher 124. The prediction model manager 600 shall be described in greater detail in the ensuing description for Figure 6.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the content manager 120 is configured to categorize the content items according to at least one pre-determined parameter value arranged in the server computing device 102. These pre-determined parameter values may be the same as those illustrated for the first system aspect of the present invention, consistent with one or more preferred implementations thereof.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the content manager 120 is configured to configure the content items to be searchable according to at least one user-specified parameter value transmitted from the asset end-user computing device 104.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the content manager 120 is configured to present the content items, of various data types, on the user interface on the asset end-user computing device 104.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the prediction model manager 600 is configured to model the content items using at least one prediction model 602, 608. A plurality of prediction models 602, 608 may be desirable depending on requirements.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the prediction model manager 600 is configured to receive from the asset end-user computing device 104 the input data relevant to the first transaction.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the prediction model manager 600 is configured to inject the input data into the prediction model 602. In the second system aspect of the present invention, and consistent with one or more implementations thereof, the prediction model manager 600 is configured to generate a plurality of user-selectable attributes based on the injected input data (e.g., the end-user's input data which are injected into the prediction model 602).

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 further includes a communication channel establisher 124 configured to establish at least one communication channel through which one or more communication sessions can be initiated from the user interface for enabling data communications to and from the asset end-user computing device 104 based upon one or more of a plurality of communication protocols, thereby causing to be outputted on the asset end-user computing device 104, or on the user interface thereof, the generated plurality of user-selectable attributes, among others.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the prediction model 602 is preferably updated instantaneously at a plurality of times based on the input data from the asset end-user computing device 104.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the prediction model manager 600 preferably includes a recommendation generator component (not illustrated) configured to determine at least one set of recommendation data based at least in part on one or more selected attributes from the plurality of user-selectable attributes. In some implementations, the one or more selectable attributes are real estate property management attributes which may include objects and properties associated with contracts, documents (technical documents, documentations, guides), maintenance (planning, records, resources), and others. Data generated through the use of the herein disclosed system 100 may be retrieved for planning purposes. These data are preferably backed up in case of power interruption and other unforeseen events which could affect the flow of the herein disclosed transaction(s). Incomplete transactions, which may occur when the Internet connection suddenly becomes unavailable or fails, for example, may be arranged to be stored and retrieved later for completion when the Internet connection is restored.
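By way of a non-limiting illustration of the incomplete-transaction handling described above, the following minimal Python sketch stores pending transactions locally and replays them once connectivity returns. The file name pending_transactions.json and the submit_to_server callable are hypothetical assumptions introduced for clarity and are not part of the disclosed implementation.

    import json
    import os

    PENDING_PATH = "pending_transactions.json"  # hypothetical local backup file

    def save_pending(transaction: dict) -> None:
        """Persist an incomplete transaction locally so it can be completed later."""
        queue = []
        if os.path.exists(PENDING_PATH):
            with open(PENDING_PATH) as f:
                queue = json.load(f)
        queue.append(transaction)
        with open(PENDING_PATH, "w") as f:
            json.dump(queue, f)

    def retry_pending(submit_to_server) -> None:
        """Replay stored transactions once the Internet connection is restored."""
        if not os.path.exists(PENDING_PATH):
            return
        with open(PENDING_PATH) as f:
            queue = json.load(f)
        still_pending = []
        for tx in queue:
            try:
                submit_to_server(tx)          # hypothetical upload callable
            except ConnectionError:
                still_pending.append(tx)      # keep for the next retry cycle
        with open(PENDING_PATH, "w") as f:
            json.dump(still_pending, f)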

In Figure 22, there is shown a set of user interfaces generally displaying an exemplary management of various facilities associated with an asset through the system illustrated in Figure 1 and consistent with one or more aspects of the present invention, and particularly displaying records of data which are related to facilities and facility management procedures. In Figure 23, there is shown a user interface displaying an exemplary management of an issue associated with a facility through the system illustrated in Figure 1 and consistent with one or more aspects of the present invention. For example, one way to manage issues is to tag whether each one of these recorded issues is either active or resolved.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 is implemented as part of a network-based enterprise for processing the real estate property management attributes. It is to be understood and appreciated that any one or more components of the present invention may be upgraded.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, a plurality of asset owners may correspond to respective asset owner accounts of the network-based enterprise, and wherein the input data are received from the asset end-user computing device 104 for an asset owner account corresponding to an asset owner operating the asset end-user computing device 104.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the at least one set of recommendation data is associated with the asset owner account operating the asset end-user computing device 104. In Figure 25, there is shown a user interface displaying an exemplary centralized management of account information associated with the asset owner account through the system illustrated in Figure 1 and consistent with one or more aspects of the present invention.

In the second system aspect of the present invention, and consistent with one or more implementations thereof, the input data which are transmitted to the server computing device 102 may include metadata from each of metadata files associated with at least one of the asset owner account and the asset end-user computing device 104. The metadata may contain third-party information.

Referring now to Figure 6 and taken in conjunction with the components illustrated in Figure 1, there is shown a data flow diagram which illustrates an exemplary operation associated with the prediction model 602, 608 executing on the environment of Figure 1 in accordance with one or more implementations of the present invention, and as the input data are provided by the tenant.

The flow commences with the processing of the first instance of input data (block 604) such that the same first instance of input data is injected into the first prediction model 602 based on the end-user’s preferences.

In one or more implementations, the aspects of the present invention may further comprise changing, by the server computing device 102, one or more input data of the plurality of input data over time as the end-user computing device 104 supplies more user-selected attributes which are indicative of the end-user’s preferences. These preferences may result in the generation of the second instance of input data (block 606) which is distinct from the first instance of input data.

In some implementations, the prediction model 602, 608 may be updated instantaneously at a plurality of times based on the updated input data from the end-user. For example, one or more intelligent components of the prediction model 602 may be arranged such that when a camera, which may be installed inside a condominium unit and configured to be working with the system 100 of the present invention, captures an image of a cockroach a number of times over a predetermined cycle, the app 120 may be arranged to recommend to the end-user various roach killers such as those in the form of sprays, traps, and even powders. In another implementation, if the end-user reports recurring issues or problems which are substantially the same for a certain period, the app 120 may be arranged to recommend to the end-user items that are relevant in resolving the same issues or problems.
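As a non-limiting Python sketch of the repeated-detection rule described above, the following counts detections within a rolling cycle and surfaces recommended items once a threshold is reached. The threshold of three sightings, the seven-day cycle, and the item names are illustrative assumptions only.

    from collections import deque
    from datetime import datetime, timedelta

    class DetectionRule:
        """Recommend items when the same issue is detected repeatedly within a cycle."""

        def __init__(self, threshold: int, cycle: timedelta, recommendations: list):
            self.threshold = threshold          # e.g., 3 sightings
            self.cycle = cycle                  # e.g., a 7-day predetermined cycle
            self.recommendations = recommendations
            self.events = deque()

        def record(self, when: datetime) -> list:
            self.events.append(when)
            # drop detections that fall outside the current cycle window
            while self.events and when - self.events[0] > self.cycle:
                self.events.popleft()
            if len(self.events) >= self.threshold:
                return self.recommendations     # surface these items to the end-user
            return []

    # usage: a camera-driven roach rule with purely illustrative settings
    roach_rule = DetectionRule(3, timedelta(days=7), ["roach spray", "roach trap", "roach powder"])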

The second instance of input data may then be injected into the first prediction model 602, resulting in the generation of the second prediction model 608 and consequently in the generation of the updated predicted data (block 610). This updated predicted data, or optimized output data, may be outputted on an output unit or display screen of the end-user computing device 104 by the server computing device 102 via the data communications network 106.

The predicted data may include various parameter values such as those defined by the transaction request data. The prediction model 602, 608 may treat these transaction request data as input data that can be injected into it using various means and from any appropriate data sources or from any third-party platforms.

One or more implementations of the present invention may be directed to the manner of detecting a unique transaction request according to an example environment as illustrated in Figure 1. Initially, transaction data based on the transaction request is received from the end-user computing device 104 (e.g., via the prediction model manager 600 of the server computing device 102). For example, the server computing device 102 may be configured to receive data about the transaction (e.g., data load type, a selection from a pre-determined list of service providers, and service fees of each of the service providers included in the list) from the end-user computing device 104, which is arranged to submit to the server computing device 102 the same transaction for processing and analysis.

In an example environment, the server computing device 102 receives the transaction data in response to the pre-defined data ingestion or injection events, such as transmission of the data load type, a selection from a pre-determined list of service providers, and service fees in each of the service providers included in the list, as well as payment transaction data such as credit card transaction data or debit card transaction data to name a few (e.g., date of material purchase, time of the material purchase, location of the material purchase, material source identifier or ID, account number to be used to enable fulfilment of the material purchase, and amount corresponding to the material purchase).

In an example environment, the data types and the prediction model 602, 608 may include one or more rules 176 that cause a transaction request originating from the end-user computing device 104 via the data communications network 106 to be initiated based on data ingestion and/or injection events and the types of data received. The model may then pass at least some of the transaction data to the prediction model 602, 608, which is pre-loaded with various information associated with the entire ecosystem, such as the location of each of the service providers, the prices of various types of materials from each of the material sources, merchants, and/or service providers, and any discount information and/or promotional offer information.

The response time requirements for the corresponding service level agreement (SLA) may be obtained via, by way of examples and not by way of limitations, the prediction models 602, 608, the server computing device 102, and any one or more of the service provider computing device 130, the material source computing device 128, and the merchant computing device 132, all in accordance with one or more implementations of the present invention.

In an example environment, the transaction input data may be passed from data types and the prediction models 602, 608 to a service level agreement model that uses the data to determine response time requirements defined in the service level agreement corresponding to the transaction (e.g., the service level agreement with the end-user operating the end-user computing device 104 and requesting the transaction to be confirmed by the property manager). The response time requirements may be returned to any one or more of the first and second prediction models 602, 608 for processing.

In an example environment, the response time requirements may be subject to contextual qualifications (e.g., type of transaction, pre-registered or new end-user computing device 104, pre-registered or new transaction account, international versus domestic transaction, or the like) and/or time/date qualifications (e.g. day of the week or month, time of day, holiday, or the like). In an example environment, risk models specified in the service level agreement may be obtained (e.g., via a service level agreement model) along with response time requirements. For example, a pre-defined list of risk models that may be used for the transaction may be obtained based on the transaction data from the instances of data 604, 606 and/or from associated metadata.

The current system load and resource availability may be obtained via a system load monitoring module (not illustrated) included in the server computing device 102. For example, the system load monitoring module may provide information on the current utilization of system resources, such as memory and processor utilization and utilization of other system resources such as network, disk storage, disk I/O, disk mirroring device, database, or the like. Thus, the system load monitoring module may also provide a real-time capacity assessment, such as an indication of available memory capacity and/or available processor capacity, in view of the expected volume of unprocessed data from the system 100.
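As a non-limiting illustration of such a load monitoring module, the following Python sketch samples host utilization using the third-party psutil library; psutil is one possible choice, and the 80% capacity thresholds are assumptions rather than values taken from the disclosure.

    import psutil  # third-party library; one possible way to sample host utilization

    def system_load_snapshot() -> dict:
        """Return a coarse real-time view of CPU, memory, and disk utilization."""
        mem = psutil.virtual_memory()
        return {
            "cpu_percent": psutil.cpu_percent(interval=0.1),
            "memory_percent": mem.percent,
            "memory_available_bytes": mem.available,
            "disk_percent": psutil.disk_usage("/").percent,
        }

    def has_capacity(snapshot: dict, cpu_limit=80.0, mem_limit=80.0) -> bool:
        """Crude capacity check before admitting more analytics work."""
        return snapshot["cpu_percent"] < cpu_limit and snapshot["memory_percent"] < mem_limit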

In an example environment, the current utilization and/or capacity of the system resource may be obtained by the system load monitoring module using an agent on the runtime environment. In an example environment, the information may be passed by the system load monitoring module to at least one of the prediction models 602, 608 executing on the server computing device 102.

In some implementations, the analytics execution projection information for a plurality of risk models is obtained (e.g., via the prediction models 602, 608 and the server computing device 102). For example, projected or estimated execution times may be obtained for each of the data analytics defined in the SLA. In an example environment, the projected or estimated execution times may be determined based on analytics statistics history (e.g., via an analytics execution history model). The amount of system resources consumed by each of the analytics may also be obtained in this regard.

A data and/or transaction analytic may be selected from a plurality of available analytics based on SLA response time requirements, current system load and resource availability, and analytics execution, reconstruction, and projection information via at least one of the prediction models 602, 608 and the server computing device 102 of the present invention.

In an example environment, the prediction models 602, 608 may be configured to select an optimal data analytic, e.g., the most accurate and complete, the least risky transaction type, and/or one that is compatible with the available transaction data, any or all of which may have to be projected, reconstructed, or estimated based on execution time and in compliance with SLA response time requirements, and which further may have to use no more than the currently available system resources allocated for the server computing device 102, which may be a configurable application server computing device, an "application server," or any similar computing device. For example, if there is an adequate amount of data and computing resources available to utilize a deterministic analytic, a scoring analytic, or a regression analytic, and the projected execution time for each analytic complies with SLA response time requirements, the environments of the present invention may select the regression analytic to provide a more accurate and complete data analysis, processing, and presentation on the display unit, display screen, or user interface of the end-user computing device 104.
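As a non-limiting Python sketch of this selection logic, the following picks the most accurate candidate analytic whose projected execution time and resource needs fit the SLA and the currently free capacity. The candidate names, projected times, and resource figures are assumptions introduced purely for illustration.

    # Illustrative candidate analytics with assumed projected execution times (seconds)
    # and resource needs; the names and numbers are not taken from the disclosure.
    CANDIDATES = [
        {"name": "deterministic", "accuracy": 1, "projected_s": 0.2, "cpu": 5,  "mem": 5},
        {"name": "scoring",       "accuracy": 2, "projected_s": 0.8, "cpu": 20, "mem": 15},
        {"name": "regression",    "accuracy": 3, "projected_s": 2.5, "cpu": 40, "mem": 30},
    ]

    def select_analytic(sla_response_s: float, cpu_free: float, mem_free: float):
        """Pick the most accurate analytic that fits the SLA and available resources."""
        feasible = [
            c for c in CANDIDATES
            if c["projected_s"] <= sla_response_s
            and c["cpu"] <= cpu_free
            and c["mem"] <= mem_free
        ]
        # prefer the most accurate feasible analytic (e.g., regression when resources allow)
        return max(feasible, key=lambda c: c["accuracy"]) if feasible else None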

The transaction may be analyzed, processed, and consequently presented in a human-readable format using at least one of the prediction models 602, 608 and the server computing device 102. In an example environment, using at least one of the prediction models 602, 608 to cause the analysis, processing, and presentation of the transaction data may result in a risk score. The risk score may be a quantitative value (e.g., a numerical score) and/or a qualitative value (e.g., low risk, medium risk, or high risk) depending on the result of the analysis and/or on any desired configuration.
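A minimal sketch of mapping a quantitative risk score onto the qualitative bands mentioned above follows; the numeric cut-offs are assumptions and would be set by the desired configuration.

    def qualify_risk(score: float) -> str:
        """Map a numerical risk score onto a qualitative band (thresholds are assumptions)."""
        if score < 0.33:
            return "low risk"
        if score < 0.66:
            return "medium risk"
        return "high risk"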

In some implementations, the plurality of input data including the transaction data initiated by the tenant may be analyzed by the server computing device 102 to generate the risk score. The results of the transaction data analysis may be injected into at least one of the prediction models 602, 608 and the server computing device 102. For example, the risk score (which may be quantitative or qualitative or both quantitative and qualitative) may be posted to the server computing device 102.

In another example environment, the risk score may be reported to the end-user computing device 104 that originally caused the submission of the transaction data. In yet another example environment, an electronic process may be initiated to cancel an electronic transaction on the basis of the results of the analyzed data, according to some implementations.

A flow diagram illustrating an exemplary process for optimizing prediction models and particularly some model parameters used in such models in accordance with one or more implementations of the present invention is illustrated in Figure 7. Simply put, this flow diagram of Figure 7 presents a flowchart of a process or routine for refining model parameters in the prediction model 602, 608. In some preferred implementations, Figure 7 may also illustrate computer-based operations representing procedures for updating at least one of the prediction models 602, 608 using input and model datasets.

As depicted, the routine begins by collecting and receiving, by the server computing device 102, the input data generated from the tenant-supplied series of input data; these data are treated by the prediction model manager 600 as parameter sets. At a later phase, the routine processes these input data, from which the output data are produced using at least one of the prediction models 602, 608 and its parametric values. In this way, a set of model parameters used by the prediction models 602, 608 can be refined to improve the prediction model's ability to provide the end-user with accurate and reliable information for future transactions.

In the depicted routine, the process begins with the operation 700 where sets of process parameters or current input data are selected for use in both the computational phases. The input data may include, by way of examples and not by way of limitations, a request to search for any of the service provider, material source, and merchant as shown in an example screenshot of the end-user computing device 104 in various accompanying Figures, and the threshold distance (e.g., 500 meters as inputted by the end-user) in terms of finding any one or more of the service provider, the material source, and the merchant within a certain radius relative to the physical location of the end-user computing device 104.

These exemplary input data, which may be treated as parameters, may define a range of conditions over which the optimization is conducted. Each set of process parameters represents a collection of settings for operating the prediction model manager 600, which may be executing directly on the end-user computing device 104 or indirectly from the server computing device 102.

As mentioned, examples of such parameters include the search request and other parameters that can be selected and/or measured within the pre-registered and pre-validated list of service providers, material sources, and merchants. Alternatively, or in addition, each set of process parameters may represent a promotional offer or discount information from any one or more of the participating service providers, material sources, merchants, e-commerce service providers, and the like.
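As a non-limiting illustration of the threshold-distance search described above (e.g., finding a service provider, material source, or merchant within 500 meters of the end-user computing device 104), the following Python sketch filters providers by great-circle distance and returns the nearest first. The provider dictionary fields (lat, lon) are assumptions introduced for clarity.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two WGS84 coordinates."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def providers_within(providers, user_lat, user_lon, threshold_m=500):
        """Keep only providers inside the end-user's threshold radius, nearest first."""
        hits = []
        for p in providers:
            d = haversine_m(user_lat, user_lon, p["lat"], p["lon"])
            if d <= threshold_m:
                hits.append({**p, "distance_m": round(d)})
        return sorted(hits, key=lambda p: p["distance_m"])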

After selecting the input data as sets of parameters, the data processing begins. This is depicted by a loop over multiple parameter sets and includes operations 702, 704, and 706. Operation 702 simply represents incrementing to the current dataset based on the selected or received current input data in operation 700. Once the prediction model 602, 608 is updated using the current input data (operation 704), the routine runs a decision algorithm (operation 706) on the parameters of the current input data to determine whether there is an additional dataset. If there is an additional dataset, the routine moves back to operation 702 to increment the current dataset. Otherwise, the routine may advance to operation 708, wherein model input data are initialized based on the current dataset and, if applicable, the incremented dataset as determined in decision operation 706.

Put differently, each time a new dataset is used in data processing by the server computing device 102, the routine may determine whether there are any more datasets to consider, as once again illustrated in the decision operation 706. If there are additional datasets, the next datasets are initiated. Ultimately, after all the initially defined datasets are considered, the decision operation 706 may be arranged to determine that there are no more datasets to consider. At this point, the routine progresses to the modelling of the received dataset(s) based on the model for prediction. This may result in an intelligent treatment of input data from the end-user.

In some implementations, initially in the model for prediction part of the flow, a set of model datasets is once again initiated as illustrated in operation 708. As explained, these model datasets are parameters that the model uses to optimize and/or predict a substantial portion, if not all, of the data relevant to the herein disclosed one or more transaction information. In the context of this process flow, these model datasets are modified to improve the accuracy of the prediction model 602, 608. In some embodiments, the model datasets are similar datasets from the server computing device 102 representing one or more previous transactions that took place using the end-user computing device 104 and/or different applications as the case may be.

Also, as may be explained herein, the prediction model 602, 608 may employ other parameters that remain fixed during the prediction routine. Examples of such parameters include physical parameters such as, by way of example and not by way of limitation, hazard prone areas based on the surface or structural characteristics of the asset such as a condominium with or without the margin of error and/or order of approximation.

After the model datasets are initialized at operation 708, the routine may enter a further loop. Initially in this further loop, the routine increments to a next one of the model datasets (operation 710) that were initially set in operation 708. With this selected model dataset, the routine runs at least one of the prediction models 602, 608 using a combination of the current dataset and the model dataset. Consequently, the routine may be configured to generate an updated prediction model using the model dataset as shown in operation 712.

Ultimately, all the datasets may be arranged to be considered in the further loop. Before that point, however, a further decision block 714 may determine whether there are additional model datasets. If so, the model dataset is incremented to the next model dataset as shown in operation 710. The process of running the prediction model and generating output data may be arranged to be repeated for each of the current and model datasets, which are combined together as shown in operation 710. These output data may be presented on the display unit of the end-user computing device 104, where the current and/or model datasets are displayed in a listing format. This list may be sorted according to distance, price, relevance, recommendation scores from third-party end-users, and/or promotional offer information from various sources, which are normally third-party sources.
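A short sketch of such multi-key sorting of the listing follows; the field names distance_m, price, and score are assumptions about the content item records and can be replaced with whichever keys the listing actually carries.

    def sort_listing(items: list, by=("distance_m", "price")) -> list:
        """Sort content items by the end-user's chosen keys, e.g. distance then price.

        Recommendation scores are assumed to be 'higher is better', so they are negated.
        """
        def key(item):
            return tuple(-item[k] if k == "score" else item[k] for k in by)
        return sorted(items, key=key)

    # usage: nearest first, cheapest as tie-breaker
    # sort_listing(hits, by=("distance_m", "price"))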

When there are no remaining model datasets to be considered for the model parameters currently under consideration, the routine may exit the further loop and calculate an error, if any, on the updated prediction model using the current and model datasets as shown in operation 716. In some preferred implementations, the error may be determined across any one or all of the current datasets as well as the model datasets.

If there is no error determined in the decision operation 718, the updated prediction model 602, 608 may be executed as shown in operation 720. Otherwise, a new model dataset may be generated as shown in operation 722 to minimize the impact of the determined error at the very least if the same cannot be eliminated. In which case, the new model dataset may be taken into consideration for processing in operation 710.
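The following Python sketch loosely mirrors the Figure 7 routine described above (operations 700 through 722). It is an illustration only: the model object and its update(), run(), error(), and propose_dataset() methods are hypothetical stand-ins, not the disclosed implementation, and the tolerance and round limit are assumptions.

    def refine_model(model, current_datasets, model_datasets, tolerance=1e-3, max_rounds=10):
        """Update the model with each current dataset, combine it with each model
        dataset, and regenerate model datasets until the error is acceptable."""
        # operations 700-706: fold every current input dataset into the model
        for current in current_datasets:
            model.update(current)

        # operations 708-714: combine the updated model with each model dataset
        pending = list(model_datasets)
        for _ in range(max_rounds):
            for model_ds in pending:
                model.run(model_ds)                   # generates updated predictions

            # operations 716-722: check the error and, if needed, propose a new dataset
            error = model.error(current_datasets, pending)
            if error <= tolerance:
                break
            pending = [model.propose_dataset(error)]  # minimize the residual error
        return model                                   # operation 720: execute updated model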

In some implementations, the output data generated based on the current dataset and/or model datasets which are processed by the server computing device 102 using the prediction model 602, 608 may include a geographical map on which the location of the end-user selected service provider, material source, or merchant can be found, as well as an electronic direction or navigation assistance for display on the end-user computing device 104.

In some implementations, the input data may include the quantity of the item to be purchased. The input data may further include the dealer or merchant identifier of the end-user selected merchant, and the dealer or merchant identifier may be in the form of a QR code that may be scanned for enabling a payment transaction to be initiated, processed, and subsequently completed.

In some implementations, most notably in the method aspects of the present invention, the payment transaction may be encoded with the selected service provider, material source, or merchant identifier indicating the selected service provider, material source, or merchant, the account information indicating the financial account of the selected service provider, material source, or merchant with the financial institution, and the routing information indicating the payment network performing the electronic transfer of funds (from the financial account of the end-user operating the end-user computing device 104 to the financial account of the selected service provider, material source, or merchant based on the successful purchase transaction initiated by the end-user).

In some implementations, end-users linked to one particular end-user may be identified. As noted above, various different associations, interactions, or commonalities with others who may be linked to user accounts can be evaluated to identify linked users. In at least some embodiments, social media associations (contacts or "friends" lists) may be used to identify linked users. User parameter vectors and item parameter vectors for the linked users may be obtained (for those linked users that have subsequently selected items since the item recommendation model was generated). Then, respective item recommendations may be generated for individual ones of the linked users based on user-specific updates calculated for the linked users from the respective user parameter vectors and item parameter vectors obtained.

It is to be understood and appreciated that the item recommendations may be generated for linked end-users that have not subsequently selected items after the generation of the item recommendation model. Item recommendations for these linked users may be generated using the respectively maintained user parameter vectors of these linked users. Once the different sets of item recommendations per user are generated, the item recommendations may be compared in order to select commonly recommended items to provide as recommendations for the end-users. If, for instance, 3 out of 5 linked users had an item recommended for them, then the particular item may be provided as an item recommendation (even if it was not generated based on the end-user's model information alone). Various different schemes for weighting or indicating commonality between recommendations may be implemented, and thus the previous example is not intended to be limiting.
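A minimal Python sketch of the "3 out of 5 linked users" commonality check described above follows; the user identifiers and item names in the usage note are hypothetical.

    from collections import Counter

    def common_recommendations(per_user_recs: dict, min_users: int = 3) -> list:
        """Items recommended for at least `min_users` linked users (e.g., 3 of 5)."""
        counts = Counter(item for recs in per_user_recs.values() for item in set(recs))
        return [item for item, n in counts.items() if n >= min_users]

    # usage with five linked users (illustrative data)
    # common_recommendations({"u1": ["lamp"], "u2": ["lamp", "rug"], "u3": ["lamp"],
    #                         "u4": ["rug"], "u5": ["fan"]}, min_users=3)  # -> ["lamp"]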

Referring to Figure 8, there is shown a block diagram which generally illustrates an exemplary prediction model based on matrix factorization in accordance with one or more implementations of the present invention. The block diagram in Figure 8 particularly illustrates a content item recommendation model based on matrix factorization, according to some implementations.

In some implementations, content item recommendations for a particular end-user may be determined based on an arrangement of the first, second, third, fourth, and fifth content items by the content item recommendation model (not illustrated) generated through a singular value decomposition (SVD) 800 of at least one single matrix, consistent with the prediction model manager 600 of the present invention.

The single matrix may represent content item selections between end-users and the content items. The content items may be a tangible product from merchants or an intangible service from service providers.

A user-specific update for a particular end-user may be generated in real-time to be used for making content item recommendations. Figure 8 may also be regarded as a high-level flowchart illustrating methods and techniques for determining one or more content item recommendations based on a content item recommendation model generated from matrix factorization, according to some implementations.

Content item selection data may maintain, in various implementations, a single matrix describing end-user selections with regard to content items as input data, matrix "A." This content item selection matrix may be represented as A ∈ {1, 2, 3}^(m×n), where m is the number of end-users and n is the number of content items, resulting in the prediction data from which recommendation data may be derived.

The prediction model manager 600 may implement a component that can be configured to perform the SVD 800 on content item selection data, i.e., matrix "A," in order to generate the content item recommendation that may be dependent upon any one or more of the relationships of the first, second, third, fourth, and fifth content items, according to some implementations.

It is to be understood and appreciated that the SVD 800 of a matrix "A" is the factorization of "A" into the product of three matrices, A = UDV^T, where the columns of U and V are orthonormal and the matrix "D" (shown by the dots) is diagonal with positive real entries. Computational models associated with the SVD 800 may be derived from Chapter 4 of the book entitled "Computer Science Theory for the Information Age," authored by John Hopcroft and Ravi Kannan, and published on 18 January 2012. An online reference to this book can be found through the following URL: https://www.cs.cmu.edu/~venkatg/teaching/CStheory-infoage/hopcroft-kannan-feb2012.pdf. Chapter IV of the same book can be found through the following URL: https://www.cs.cmu.edu/~venkatg/teaching/CStheory-infoage/book-chapter-4.pdf. The full disclosure in Chapter IV of this book by Hopcroft and Kannan (2012) is hereby incorporated by reference in its entirety.
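As a non-limiting Python sketch of the matrix-factorization recommendation described above, the following factorizes a small selection matrix with NumPy's SVD, keeps a low-rank reconstruction as predicted scores, and recommends unselected items with the highest predictions. The example matrix, the use of 0 to mark unselected items, and the rank k = 2 are illustrative assumptions.

    import numpy as np

    # Hypothetical 4-end-user x 5-content-item selection matrix with entries in {1, 2, 3}
    # (0 marks an item the end-user has not yet selected).
    A = np.array([
        [3, 1, 0, 2, 0],
        [2, 0, 1, 3, 0],
        [0, 3, 2, 0, 1],
        [1, 0, 3, 0, 2],
    ], dtype=float)

    # Factorize A = U D V^T and keep the top-k singular values as a low-rank model.
    U, d, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    A_hat = U[:, :k] @ np.diag(d[:k]) @ Vt[:k, :]   # predicted selection scores

    def recommend(user_index: int, top_n: int = 2) -> list:
        """Recommend the unselected content items with the highest predicted scores."""
        unseen = np.where(A[user_index] == 0)[0]
        ranked = unseen[np.argsort(A_hat[user_index, unseen])[::-1]]
        return ranked[:top_n].tolist()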

The provision of herein disclosed single platform for searching for relevant content items, preferably through the prediction model 602, 608 of the prediction model manager 600, and also for communicating with sources (e.g., asset manager computing device 126, service provider computing device 130, material source computing device 128, and merchant computing device 132) of the content items ensures that the utilities and services associated with asset management, such as in managing various kinds of utilities and issues in a condominium building, are readily accessible to the end-user on a typically small screen size of the asset end-user computing device 104 and can be operated desirably.

The provision of the same single platform also ensures that the rather limited memory of the asset end-user computing device 104 can manage the computing processes associated with the abovementioned utilities and services, and that an unintended switching from one application to another in order for the user to locate and make use of these utilities and services can be prevented.

In a further aspect of the present invention, there is disclosed a computer-implemented method operating in a system comprising a server computing device 102 including a database system 108 into and from which digital content items can be stored and retrieved, respectively, and an asset end-user computing device 104 in communication with the server computing device 102 via a communication network 106 and operative to receive and publish the content items from the server computing device 102, the content items including at least a first content item associated with an asset end-user computing device 104, a second content item associated with an asset manager computing device 126, a third content item associated with a service provider computing device 130, a fourth content item associated with a material source computing device 128, and a fifth content item associated with a merchant computing device 132. This computer-implemented method of publishing digital content over a network-based environment based on workflow-based asset management comprises various steps.

The first step of the first method aspect of the present invention is characterized by determining and storing in the database system 108, by the server computing device 102, first, second, third, fourth, and fifth sets of transaction information associated with the first, second, third, fourth, and fifth content items, respectively.

The second step of the first method aspect of the present invention is characterized by publishing, by a content publisher 118 included in the server computing device 102, through a user interface on the asset end-user computing device 104 the second, third, fourth, and fifth content items in an order dependent on a plurality of workflow rules, as disclosed and described herein, defined by the first content item as input data.

The third step of the first method aspect of the present invention is characterized by categorizing, by the content manager 120 that is included and operable in the server computing device 102, the content items according to at least one pre-determined parameter value arranged in the server computing device 102.

The fourth step of the first method aspect of the present invention is characterized by configuring, by the content manager 120 included in the server computing device 102, the content items to be searchable according to at least one user-specified parameter value transmitted from the asset end-user computing device 104.

The fifth step of the first method aspect of the present invention is characterized by presenting, by the content manager 120, the content items on the user interface on the asset end-user computing device 104.

The sixth step of the first method aspect of the present invention is characterized by storing, by a workflow manager 122 included in the server computing device 102, the plurality of workflow rules 176 for processing the first, second, third, fourth, and fifth sets of transaction information, consistent with various implementations.

The seventh step of the first method aspect of the present invention is characterized by utilizing, by the workflow manager 122, a plurality of variables within one or more workflow rules 176 of the plurality of workflow rules 176, each variable of the plurality of variables corresponding to a characteristic of one of the first, second, third, fourth, and fifth sets of transaction information.

The eighth step of the first method aspect of the present invention is characterized by defining, by the workflow manager 122, at least one flow combining constants and the plurality of variables into the one or more workflow rules 176 of the plurality of workflow rules.

The ninth step of the first method aspect of the present invention is characterized by scheduling, by the workflow manager 122, at least one work based on the defined at least one flow, the scheduled at least one work including workloads, priorities, and assignments. The tenth step of the first method aspect of the present invention is characterized by establishing, by a communication channel establisher 124 included in the server computing device 102, at least one communication channel through which one or more communication sessions can be initiated from the user interface for enabling data communications to and from the asset end-user computing device 104 based upon one or more of a plurality of communication protocols.
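As a non-limiting Python sketch of the sixth through ninth steps above (storing rules that combine constants with variables drawn from the transaction information, defining a flow, and scheduling work with workloads, priorities, and assignments), the following uses hypothetical class and field names that are not part of the disclosed implementation.

    from dataclasses import dataclass

    @dataclass
    class WorkflowRule:
        """A rule combining constants with variables drawn from transaction information."""
        name: str
        constants: dict                     # fixed settings, e.g. {"priority": 1, "assignee": "plumber-07"}
        variables: tuple                    # transaction characteristics, e.g. ("issue_type", "unit_id")

    @dataclass
    class Work:
        workload: str
        priority: int                       # lower number = more urgent
        assignment: str                     # e.g., a service provider identifier

    def schedule(flow: list, transaction: dict) -> list:
        """Turn a defined flow (an ordered list of rules) into scheduled work items."""
        scheduled = []
        for step, rule in enumerate(flow):
            # resolve each variable of the rule against the transaction information
            resolved = {v: transaction.get(v) for v in rule.variables}
            scheduled.append(Work(
                workload=f"{rule.name}: {resolved}",
                priority=rule.constants.get("priority", step),
                assignment=rule.constants.get("assignee", "unassigned"),
            ))
        return sorted(scheduled, key=lambda w: w.priority)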

Consistent with one or more implementations of the present invention, the first method aspect of the present invention further comprises the step of executing, by a work management component 134 of the workflow manager 122, work input and work output units defining and characterizing the scheduled at least one work.

Consistent with one or more implementations of the present invention, the first method aspect of the present invention further comprises the step of executing, by a flow management component 140 of the workflow manager 122, search and circulation units 142, 144 defining the scheduled at least one work.

In the first method aspect of the present invention, the server computing device 102 further includes a storage manager 146 operably connected to the workflow manager 122. Consequently, and consistent with one or more implementations of the present invention, the first method aspect of the present invention further comprises the step of storing, by the storage manager 146, at least a work table 148, a flow table 150, a pointer table 152, a rules table 154, a transaction information table 156, and a content item table 158, any one or more of which are used by the workflow manager 122 to characterize and schedule the at least one work.

In the first method aspect of the present invention, the server computing device 102 further includes a pointer manager 160 operably connected to each of the workflow manager 122 and the storage manager 146. Consequently, and consistent with one or more implementations of the present invention, the first method aspect of the present invention further comprises the step of identifying, by the pointer manager 160, the address of each one of the first, second, third, fourth, and fifth content items in the storage manager 146 and in turn in the memory system 112.

In the first method aspect of the present invention, the server computing device 102 further includes a conditional connection manager 162 operably connected to the workflow, storage, and pointer managers 122, 146, 160. Consequently, and consistent with one or more implementations of the present invention, the first method aspect of the present invention further comprises the step of comparing, by the conditional connection manager 162, the constants and the plurality of variables to define the at least one flow based upon pointers employed by the pointer manager 160, the pointers referencing the one or more workflow rules 176.

Consistent with one or more implementations of the present invention, the first method aspect of the present invention further comprises the step of establishing, by the workflow manager 122, a plurality of subroutines, each subroutine of the plurality of subroutines being represented as a separate logical section within the one or more workflow rules 176 referenced from different locations within the storage unit or the memory system 112. Consistent with one or more implementations of the present invention, the first method aspect of the present invention further comprises the step of navigating, by the workflow manager 122, the one or more workflow rules 176 in response to the input data received by the server computer device 102 from the asset end-user computing device 104 over the communication network 106.

Consistent with one or more implementations of the present invention, the first method aspect of the present invention further comprises the step of causing, by the workflow manager 122, the content manager 120 to arrange presentation of the content items based on the defined at least one flow and scheduled at least one work, each being referenced in the one or more workflow rules 176 of the plurality of workflow rules 176.

A second method aspect of the present invention is directed to a computer-implemented method of publishing digital content over a network-based environment based on prediction model based asset management. The second method aspect of the present invention preferably operates in a system comprising a server computing device 102 including a database system 108 into and from which digital content items can be stored and retrieved, respectively, and an asset end-user computing device 104 in communication with the server computing device 102 via a communication network 106 and operative to receive and publish the content items from the server computing device 102, the content items including at least a first content item associated with the asset end-user computing device 104, a second content item associated with an asset manager computing device 126, a third content item associated with a service provider computing device 130, a fourth content item associated with a material source computing device 128, as well as a fifth content item associated with a merchant computing device 132.

The first step of the second method aspect of the present invention is characterized by determining and storing in the database system 108, by the server computing device 102, first, second, third, fourth, and fifth sets of transaction information associated with the first, second, third, fourth, and fifth content items, respectively.

The second step of the second method aspect of the present invention is characterized by publishing, by a content publisher 118 included in the server computing device 102, through a user interface on the asset end-user computing device 104 the second, third, fourth, and fifth content items based on the first content item as input data.

The third step of the second method aspect of the present invention is characterized by configuring, by the content manager 120, the content items to be searchable according to at least one user-specified parameter value transmitted from the asset end-user computing device 104 in accordance with the herein disclosed implementations or embodiments.

The fourth step of the second method aspect of the present invention is characterized by presenting, by the content manager 120, the content items on the user interface on the asset end-user computing device 104.

The fifth step of the second method aspect of the present invention is characterized by modelling, by a prediction model manager 600 included in the server computing device 102, the content items using at least one prediction model 602, 608. Two or more prediction models 602, 608 are also preferable to increase the accuracy of the resulting prediction data or output data. The sixth step of the second method aspect of the present invention is characterized by receiving, by the prediction model manager 600, which uses any one or more of the prediction models 602, 608, from the asset end-user computing device 104 the input data relevant to the first transaction, among others.

The seventh step of the second method aspect of the present invention is characterized by injecting, by the prediction model manager 600, the input data into the prediction model 602, 608, consistent with one or more implementations of the present invention.

The eighth step of the second method aspect of the present invention is characterized by generating, by the prediction model manager 600, a plurality of user-selectable attributes based on the injected input data.

The ninth step of the second method aspect of the present invention is characterized by establishing, by a communication channel establisher 124 included in the server computing device 102, at least one communication channel through which one or more communication sessions can be initiated from the user interface for enabling data communications to and from the asset end-user computing device 104 based upon one or more of a plurality of communication protocols thereby causing to be outputted on the asset end-user computing device 104 the generated plurality of user-selectable attributes.

Consistent with one or more implementations of the present invention, the second method aspect of the present invention further comprises the step of updating, by the prediction model manager 600, the prediction model 602, 608 instantaneously at a plurality of times based on the input data originating or sent from the asset end-user computing device 104.

Consistent with one or more implementations of the present invention, the second method aspect of the present invention further comprises the step of determining, by a recommendation generator component included in the prediction model manager 600, at least one set of recommendation data based at least in part on one or more selected attributes from the plurality of user-selectable attributes. The one or more selectable attributes are real estate property management attributes.

In the second method aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 is implemented as part of a network- based enterprise for processing the real estate property management attributes.

In the second method aspect of the present invention, and consistent with one or more implementations thereof, a plurality of asset owners correspond to respective asset owner accounts of the network-based enterprise, and wherein the input data are received from the asset end-user computing device 104 for an asset owner account corresponding to an asset owner operating the asset end-user computing device 104.

In the second method aspect of the present invention, and consistent with one or more implementations thereof, the at least one set of recommendation data is associated with the asset owner account operating the asset end-user computing device 104.

In the second method aspect of the present invention, and consistent with one or more implementations thereof, the input data transmitted to the server computing device 102 include metadata from each of metadata files associated with at least one of the asset owner account and the asset end-user computing device 104.

Yet another aspect of the present invention is directed to a non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement a system for publishing digital content over a network-based environment based on workflow based asset management.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the system comprises (a) a server computing device 102 including a database system 108 into and from which digital content items can be stored and retrieved, respectively, and (b) an asset end-user computing device 104 in communication with the server computing device 102 via a communication network 106 and operative to receive and publish the content items from the server computing device 102.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the content items preferably and operably include at least a first content item associated with an asset end-user computing device 104, a second content item associated with an asset manager computing device 126, a third content item associated with a service provider computing device 130, a fourth content item associated with a material source computing device 128, and a fifth content item associated with a merchant computing device 132.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 is configured to determine and store in the database system 108 first, second, third, fourth, and fifth sets of transaction information associated with the first, second, third, fourth, and fifth content items, respectively, consistent with herein implementations.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 includes a content publisher 118 configured to publish through a user interface on the asset end-user computing device 104 the second, third, fourth, and fifth content items in an order dependent on a plurality of workflow rules 176 defined and likewise characterized by the first content item as input data.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, a content manager 120 included in the server computing device 102 is preferably configured (a) to categorize the content items according to at least one pre-determined parameter value arranged in the server computing device 102, (b) to configure the content items to be searchable according to at least one user-specified parameter value transmitted from the asset end-user computing device 104, and (c) to present the content items on the user interface on the asset end-user computing device 104.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, a workflow manager 122 included in the server computing device 102 is preferably configured (a) to store the plurality of workflow rules 176 for processing the first, second, third, fourth, and fifth sets of transaction information, (b) to utilize a plurality of variables within one or more workflow rules 176 of the plurality of workflow rules 176, each variable of the plurality of variables corresponding to a characteristic or attribute of one of the first, second, third, fourth, and fifth sets of transaction information, (c) to define at least one flow combining constants and the plurality of variables into the one or more workflow rules 176, and (d) to schedule at least one work based on the defined at least one flow, wherein the scheduled at least one work includes workloads, priorities, and assignments.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, a communication channel establisher 124 included in the server computing device 102 is preferably configured to establish at least one communication channel through which one or more communication sessions can be initiated from the user interface for enabling data communications to and from the asset end-user computing device 104 based upon one or more of a plurality of communication protocols. Consistent with aspects of the present invention, the communication channel is preferably associated with the communication network 106.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 comprises a work management component 134 for executing work input and work output units 136, 138 defining the scheduled at least one work.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 comprises a flow management component 140 for executing search and circulation units 142, 144 defining or characterizing the scheduled at least one work.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 further includes a storage manager 146 operably connected to the workflow manager 122.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the storage manager 146 is configured to store at least a work table 148, a flow table 150, a pointer table 152, a rules table 154, a transaction information table 156, and a content item table 158, any one or more of which are used by the workflow manager 122 to schedule the at least one work.
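
Purely for illustration, the six tables could be modeled as relational tables that the workflow manager consults when scheduling work; the column names below are assumptions, not the patent's schema.

    import sqlite3

    SCHEMA = """
    CREATE TABLE IF NOT EXISTS work_table    (id INTEGER PRIMARY KEY, workload TEXT, priority INTEGER, assignment TEXT);
    CREATE TABLE IF NOT EXISTS flow_table    (id INTEGER PRIMARY KEY, rule_id INTEGER, work_id INTEGER);
    CREATE TABLE IF NOT EXISTS pointer_table (id INTEGER PRIMARY KEY, content_item_id INTEGER, address TEXT);
    CREATE TABLE IF NOT EXISTS rules_table   (id INTEGER PRIMARY KEY, variable TEXT, constant TEXT);
    CREATE TABLE IF NOT EXISTS transaction_information_table (id INTEGER PRIMARY KEY, content_item_id INTEGER, data TEXT);
    CREATE TABLE IF NOT EXISTS content_item_table (id INTEGER PRIMARY KEY, source TEXT, payload TEXT);
    """

    def open_storage(path=":memory:"):
        connection = sqlite3.connect(path)
        connection.executescript(SCHEMA)
        return connection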

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 further includes a pointer manager 160 operably connected to each of the workflow manager 122 and the storage manager 146.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the pointer manager 160 is configured to identify the address of each one of the first, second, third, fourth, and fifth content items in the storage manager 146. The storage manager 146 may be contained in and associated with the memory system 112, or may operate independently of, yet remain associated with, the memory system 112.
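
Continuing the storage sketch above, and again with hypothetical names (resolve_address is illustrative only), a pointer manager might resolve a content item's identifier to its address in the storage manager as follows.

    def resolve_address(connection, content_item_id):
        # Look up the stored address of the given content item in the pointer table.
        row = connection.execute(
            "SELECT address FROM pointer_table WHERE content_item_id = ?",
            (content_item_id,),
        ).fetchone()
        return row[0] if row else None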

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the server computing device 102 further includes a conditional connection manager 162 operably connected to the workflow, storage, and pointer managers 122, 146, 160.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the conditional connection manager 162 is configured to compare the constants and the plurality of variables to define the at least one flow based upon pointers employed by the pointer manager 160, the pointers referencing the one or more workflow rules 176 of the plurality of workflow rules 176.
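
A small illustrative sketch, assuming pointers are rule identifiers and rules are (variable, constant) pairs, of how a conditional connection manager could compare constants against transaction variables to decide which rules enter the defined flow.

    def define_flow(pointers, rules, transaction):
        # pointers: iterable of rule identifiers employed by the pointer manager
        # rules: {rule_id: (variable, constant)}
        flow = []
        for rule_id in pointers:
            variable, constant = rules[rule_id]
            if transaction.get(variable) == constant:
                flow.append(rule_id)   # rule satisfied; it becomes part of the flow
        return flow

    if __name__ == "__main__":
        rules = {1: ("status", "approved"), 2: ("region", "PH")}
        print(define_flow([1, 2], rules, {"status": "approved", "region": "US"}))   # -> [1]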

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 is further configured to establish a plurality of subroutines, each subroutine of the plurality of subroutines being represented as a separate logical section within the one or more workflow rules 176 referenced from different locations within the storage unit or the memory system 112, as may be disclosed herein.
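
As one hypothetical way to picture this, subroutines can be registered as named logical sections so that the same section is referenced from different places in the workflow rules without being duplicated; the registry and names below are illustrative only.

    SUBROUTINES = {}

    def subroutine(name):
        # Register a function as a named logical section of the workflow rules.
        def register(fn):
            SUBROUTINES[name] = fn
            return fn
        return register

    @subroutine("notify_assignee")
    def notify_assignee(work):
        return f"notified {work['assignment']}"

    def run_section(name, work):
        # Different workflow rules can reference the same section by name.
        return SUBROUTINES[name](work)

    if __name__ == "__main__":
        print(run_section("notify_assignee", {"assignment": "service_provider"}))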

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 navigates the one or more workflow rules 176 in response to the input data received by the server computing device 102 from the asset end-user computing device 104 over the communication network 106, which is preferably the Internet.

In the non-transitory, computer-readable storage medium aspect of the present invention, and consistent with one or more implementations thereof, the workflow manager 122 causes the content manager 120 to arrange presentation of the content items based on the defined at least one flow and scheduled at least one work, each being referenced in the one or more workflow rules 176 of the plurality of workflow rules 176.

While the present invention has been described with respect to a limited number of implementations and/or embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other implementations and/or embodiments can be devised which do not depart from the scope of the invention as disclosed herein.