Title:
ARTIFICIAL INTELLIGENCE ASSISTED SPEECH AND IMAGE ANALYSIS IN SUPPORT OPERATIONS
Document Type and Number:
WIPO Patent Application WO/2022/144844
Kind Code:
A1
Abstract:
Artificial-intelligence-based technical support operations and remote artificial intelligence-assisted electronic warranty verification operations are disclosed. The technical support operations may include receiving audio signals and image signals associated with a technical support session from a mobile device, analyzing the signals using artificial intelligence, accessing a data structure to identify an image capture instruction and presenting the same to the mobile device, receiving and analyzing second image signals using artificial intelligence, and determining a support resolution status. The electronic warranty verification operations may include performing product image analysis to identify a product-distinguishing characteristic, performing receipt image analysis to identify product purchase information, using the product-distinguishing characteristic and product purchase information to identify in a universal data structure the specific product, accessing a link to a supplier's warranty data structure to lookup the specific product, receiving a warranty coverage indication, and transmitting an indication of warranty coverage.

Inventors:
YOFFE AMIR (IL)
COHEN EITAN (IL)
Application Number:
PCT/IB2021/062505
Publication Date:
July 07, 2022
Filing Date:
December 30, 2021
Assignee:
TECHSEE AUGMENTED VISION LTD (IL)
International Classes:
H04M3/51; G06T1/20
Foreign References:
US20200404100A12020-12-24
US10410428B12019-09-10
US20200126445A12020-04-23
US9864949B12018-01-09
US20090287534A12009-11-19
Claims:
CLAIMS

1. A non-transitory computer readable medium including instructions that, when executed by at least one processor, cause the at least one processor to perform artificial-intelligence-based technical support operations, the operations comprising:
receiving over at least one network first audio signals from a mobile communications device, the first audio signals including speech data associated with a technical support session;
receiving first image signals from the mobile communications device via the at least one network, the first image signals including image data associated with a product for which support is sought;
analyzing the first audio signals using artificial intelligence;
analyzing the first image signals using artificial intelligence;
aggregating the analysis of the first audio signals and the first image signals;
based on the aggregated analysis of the first image signals and the first audio signals, accessing at least one data structure to identify an image capture instruction;
presenting the image capture instruction to the mobile communications device via the at least one network, the image capture instruction including a direction to alter a physical structure identified in the first image signals and to capture second image signals of an altered physical structure;
receiving from the mobile communications device second image signals via the at least one network, the second image signals corresponding to the altered physical structure;
analyzing the captured second image signals using artificial intelligence; and
based on the analysis of the second image signals, determining a technical support resolution status.

2. The non-transitory computer readable medium of claim 1, wherein the operations further comprise:
based on the aggregated analysis of the first audio signals and the first image signals, accessing the at least one data structure containing a plurality of semantic prompts;
using the aggregated analysis of the first audio signals and the first image signals to select a first semantic prompt from the at least one data structure;
presenting the first semantic prompt to the mobile communications device via the at least one network;
receiving from the mobile communications device a first response to the first semantic prompt via the at least one network;
analyzing the first response to the first semantic prompt; and
based on the analysis of the first response, accessing the at least one data structure to identify the image capture instruction.

3. The non-transitory computer readable medium of claim 2, wherein the operations further comprise:
based on the analysis of the second image signals, accessing the at least one data structure containing the plurality of semantic prompts;
using the second image signals to select a second semantic prompt from the at least one data structure;
presenting the second semantic prompt to the mobile communications device via the at least one network;
receiving from the mobile communications device a second response to the second semantic prompt via the at least one network;
analyzing the second response to the second semantic prompt; and
based on the analysis of the second response, determining the technical support resolution status.

4. The non-transitory computer readable medium of claim 2, wherein the first semantic prompt includes a question presented in text form.

5. The non-transitory computer readable medium of claim 2, wherein the first semantic prompt includes a question presented as synthesized speech.

6. The non-transitory computer readable medium of claim 1, wherein analyzing the first audio signals, analyzing the first image signals, and aggregating occur in a singular process.

7. The non-transitory computer readable medium of claim 1, wherein the operations further comprise using the analysis of the first audio signals to categorize subject matter of the first image signals.

8. The non-transitory computer readable medium of claim 2, wherein the operations further comprise using the analysis of the first audio signals to categorize subject matter of the first image signals.

9. The non-transitory computer readable medium of claim 1, wherein the operations further comprise using the analysis of the first image signals to interpret the first audio signals.

10. The non-transitory computer readable medium of claim 2, wherein the operations further comprise using the analysis of the first image signals to interpret the first audio signals.

11. The non-transitory computer readable medium of claim 1, wherein the operations are performed in an automated fashion without human intervention.

12. The non-transitory computer readable medium of claim 1, wherein the non-transitory computer readable medium is further configured to simultaneously engage in a plurality of support sessions, the plurality of support sessions including the support session with the mobile communications device and support sessions with a plurality of additional mobile communications devices.

13. The non-transitory computer readable medium of claim 2, wherein: the first audio signals identify a technical issue; the first image signals contain images of the product associated with the technical issue; and the first semantic prompt seeks information about the technical issue.

14. The non-transitory computer readable medium of claim 3, wherein: the first audio signals identify a technical issue; the first image signals contain images of the product associated with the technical issue; the first semantic prompt seeks information about the technical issue; and the second semantic prompt seeks information about a change occurring after the alteration of the physical structure.

15. The non-transitory computer readable medium of claim 1, wherein: the technical support resolution status includes an indication that a technical support issue is resolved; and the operations further include terminating the technical support session.

16. The non-transitory computer readable medium of claim 1, wherein: the technical support resolution status includes an indication that a technical support issue is not resolved; and the operations further include linking the mobile communications device to a human agent for further assistance.

17. The non-transitory computer readable medium of claim 1, wherein: the technical support resolution status includes an indication that a technical support issue is not resolved; and the operations further include: sending a prompt to the mobile communications device seeking additional audio signals and additional image signals, analyzing following receipt and in an aggregated fashion, the additional audio signals and the additional image signals, performing an additional lookup in the at least one data structure to determine an associated remedial measure, and presenting the remedial measure to the mobile communications device.

18. The non-transitory computer readable medium of claim 1, wherein the first image signal analysis and the second image signal analysis include pixel analytics.

19. The non-transitory computer readable medium of claim 1, wherein the first audio signal analysis includes natural language processing techniques.

20. The non-transitory computer readable medium of claim 1, wherein the operations further include:
after presenting the image capture instruction, receiving from the mobile communications device second audio signals via the at least one network, the second audio signals corresponding to a status of the altered physical structure;
analyzing the second audio signals using artificial intelligence;
based on the analysis of the second audio signals, accessing the at least one data structure containing the plurality of semantic prompts;
using the second audio signals to select a second semantic prompt from the at least one data structure;
presenting the second semantic prompt to the mobile communications device via the at least one network;
receiving from the mobile communications device a second response to the second semantic prompt via the at least one network;
analyzing the second response to the second semantic prompt; and
based on the analysis of the second response, determining the technical support resolution status.

21. The non-transitory computer readable medium of claim 3, wherein the operations further include:
after presenting the image capture instruction, receiving from the mobile communications device second audio signals via the at least one network, the second audio signals corresponding to a status of the altered physical structure;
analyzing the second audio signals using artificial intelligence;
based on the analysis of the second audio signals, accessing the at least one data structure containing the plurality of semantic prompts; and
using the second audio signals to select the second semantic prompt from the at least one data structure.

22. A method of performing artificial-intelligence-based technical support operations, the method comprising:
receiving over at least one network first audio signals from a mobile communications device, the first audio signals including speech data associated with a technical support session;
receiving first image signals from the mobile communications device via the at least one network, the first image signals including image data associated with a product for which support is sought;
analyzing the first audio signals using artificial intelligence;
analyzing the first image signals using artificial intelligence;
aggregating the analysis of the first audio signals and the first image signals;
initially accessing at least one data structure containing a plurality of semantic prompts;
using the aggregated analysis of the first audio signals and the first image signals to select a first semantic prompt from the at least one data structure;
presenting the first semantic prompt to the mobile communications device via the at least one network;
receiving from the mobile communications device a first response to the first semantic prompt via the at least one network;
analyzing the first response to the first semantic prompt;
based on the analysis of the first response, accessing the at least one data structure to identify an image capture instruction;
presenting the image capture instruction to the mobile communications device via the at least one network, the image capture instruction including a direction to alter a physical structure identified in the first image signals and to capture second image signals of an altered physical structure;
receiving from the mobile communications device second image signals via the at least one network, the second image signals corresponding to the altered physical structure;
analyzing the captured second image signals using artificial intelligence;
subsequently accessing the at least one data structure to retrieve from the at least one data structure a second semantic prompt;
presenting the second semantic prompt to the mobile communications device via the at least one network;
receiving from the mobile communications device a second response to the second semantic prompt via the at least one network;
analyzing the second response to the second semantic prompt; and
based on the analysis of the second response, determining a technical support resolution status.

23. The method of claim 22, further comprising using the analysis of the first audio signals to categorize subject matter of the first image signals.

24. The method of claim 22, further comprising using the analysis of the first image signals to interpret the first audio signals.

25. The method of claim 22, wherein the method is performed in an automated fashion without human intervention.

26. The method of claim 22, wherein the first image signal analysis and the second image signal analysis include pixel analytics.

27. The method of claim 22, wherein the first audio signal analysis includes natural language processing techniques.

28. A system for performing remote artificial intelligence-assisted electronic warranty verification, the system comprising:
at least one processor configured to:
transmit an instruction to an entity to capture at least one product image of a specific product;
receive the at least one product image;
perform product image analysis on the at least one product image to identify at least one product-distinguishing characteristic;
transmit an instruction to the entity to capture an image of a purchase receipt for the specific product;
receive the purchase receipt image;
perform receipt image analysis on the received purchase receipt image to identify product purchase information including a purchased product identity and a purchase date;
access a universal data structure containing data on products offered by a plurality of suppliers;
use the at least one product-distinguishing characteristic obtained from the image analysis on the product image and the product purchase information obtained from the image analysis on the purchase receipt to identify in the universal data structure the specific product;
identify in the universal data structure a supplier of the specific product;
identify in the universal data structure a link to a warranty data structure of the supplier;
access the link to perform a remote lookup of the specific product in the warranty data structure of the supplier;
receive a warranty coverage indication from the warranty data structure of the supplier; and
transmit to the entity an indication of warranty coverage.

29. The system of claim 28, wherein the product image analysis includes using artificial intelligence to distinguish the product from other products having similar appearances.

30. The system of claim 28, wherein the product image analysis includes performing optical character recognition on the product image.

31. The system of claim 28, wherein the instruction includes a direction to capture an image of a manufacturer’s product sticker and wherein the product image analysis includes employing artificial intelligence to interpret the manufacturer’s product sticker.

32. The system of claim 28, wherein the receipt image analysis includes employing artificial intelligence to identify an identity of the purchased product, the purchase date, and an identity of an establishment from which the product was purchased.

33. The system of claim 28, wherein the image of the purchase receipt identifies a plurality of purchased products and wherein the at least one processor is configured to apply artificial intelligence to information from the universal data structure in order to match one of the plurality of purchased products on the receipt with the product-distinguishing characteristic determined from the product image in order to determine the corresponding specific product.

34. The system of claim 28, wherein the at least one processor is further configured to determine that the image of the purchase receipt identifies a plurality of purchased products and to transmit a request to the entity to identify a specific one of the plurality of purchased products.

35. The system of claim 34, wherein the request to identify the specific one of the plurality of purchased products includes a request to capture an image of the receipt with an indication in the image identifying the specific one of the plurality of purchased products.

36. The system of claim 28, wherein the indication of warranty coverage includes an instruction on how to achieve a warranty-related remedy.

37. The system of claim 28, wherein the universal data structure includes an authorization code for accessing the warranty data structure of the supplier, and wherein accessing the link includes transmitting the authorization code to the supplier.

38. The system of claim 28, wherein the warranty coverage indication includes an authorization to collect warranty compensation from the supplier.

39. The system of claim 28, wherein the warranty coverage indication includes a conclusion of non-coverage.

40. The system of claim 28, wherein the supplier is at least one of a manufacturer or a manufacturer’s agent.

41. A non-transitory computer readable medium containing instructions for performing remote artificial intelligence-assisted electronic warranty verification operations, the operations comprising:
transmitting an instruction to an entity to capture at least one product image of a specific product;
receiving the at least one product image;
performing product image analysis on the at least one product image to identify at least one product-distinguishing characteristic;
transmitting an instruction to the entity to capture an image of a purchase receipt for the specific product;
receiving the purchase receipt image;
performing receipt image analysis on the received purchase receipt image to identify product purchase information including a purchased product identity and a purchase date;
accessing a universal data structure containing data on products offered by a plurality of suppliers;
using the at least one product-distinguishing characteristic obtained from the image analysis on the product image and the product purchase information obtained from the image analysis on the purchase receipt to identify in the universal data structure the specific product;
identifying in the universal data structure a supplier of the specific product;
identifying in the universal data structure a link to a warranty data structure of the supplier;
accessing the link to perform a remote lookup of the specific product in the warranty data structure of the supplier;
receiving a warranty coverage indication from the warranty data structure of the supplier; and
transmitting to the entity an indication of warranty coverage.

42. The non-transitory computer readable medium of claim 41, wherein the operations further include: determining that the purchase receipt identifies a plurality of purchased products and applying artificial intelligence to information from the universal data structure in order to match one of the plurality of purchased products on the receipt with the product-distinguishing characteristic determined from the product image in order to determine the corresponding specific product.

43. The non-transitory computer readable medium of claim 41, wherein the universal data structure includes an authorization code for accessing the warranty data structure of the supplier, and wherein accessing the link includes transmitting the authorization code to the supplier.

44. The non-transitory computer readable medium of claim 41, wherein the operations further include: determining that the image of the purchase receipt identifies a plurality of purchased products and transmitting a request to the entity to identify a specific one of the plurality of purchased products.

45. A method for performing remote artificial intelligence-assisted electronic warranty verification operations, the method comprising:
transmitting an instruction to an entity to capture at least one product image of a specific product;
receiving the at least one product image;
performing product image analysis on the at least one product image to identify at least one product-distinguishing characteristic;
transmitting an instruction to the entity to capture an image of a purchase receipt for the specific product;
receiving the purchase receipt image;
performing receipt image analysis on the received purchase receipt image to identify product purchase information including a purchased product identity and a purchase date;
accessing a universal data structure containing data on products offered by a plurality of suppliers;
using the at least one product-distinguishing characteristic obtained from the image analysis on the product image and the product purchase information obtained from the image analysis on the purchase receipt to identify in the universal data structure the specific product;
identifying in the universal data structure a supplier of the specific product;
identifying in the universal data structure a link to a warranty data structure of the supplier;
accessing the link to perform a remote lookup of the specific product in the warranty data structure of the supplier;
receiving a warranty coverage indication from the warranty data structure of the supplier; and
transmitting to the entity an indication of warranty coverage.

46. The method of claim 45, wherein the method further comprises: determining that the purchase receipt identifies a plurality of purchased products and applying artificial intelligence to information from the universal data structure in order to match one of the plurality of purchased products on the receipt with the product-distinguishing characteristic determined from the product image in order to determine the corresponding specific product.

47. The method of claim 45, wherein the method further comprises: determining that the image of the purchase receipt identifies a plurality of purchased products and transmitting a request to the entity to identify a specific one of the plurality of purchased products.

Description:
ARTIFICIAL INTELLIGENCE ASSISTED SPEECH AND IMAGE ANALYSIS IN SUPPORT OPERATIONS

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims the benefit of U.S. Provisional Patent Application No. 63/131,875, filed December 30, 2020, which is incorporated by reference herein in its entirety.

TECHNOLOGICAL FIELD

[0002] Embodiments of the present disclosure relate generally to the field of artificial-intelligence-based support operations. More particularly, embodiments of the present disclosure relate to systems, methods, and non-transitory computer readable medium capable of performing artificial-intelligence-based technical support operations to assist a user with technical support, and to systems, methods, and non-transitory computer readable medium capable of performing remote artificial intelligence-assisted electronic warranty verification to assist an entity with warranty verification of a product.

BACKGROUND

[0003] Technical support systems in use today make it difficult for digital service providers (DSPs), and especially for service/technical support centers, to provide technical support services that are efficient in terms of time and customer satisfaction. Despite a recent push toward self-service schemes, customers have been slow to adopt self-service technologies. Today’s customer support models and related technologies face numerous challenges, including increasingly complex customer needs, communication gaps, diagnosis challenges, limited problem-solving rates, and customer dissatisfaction and frustration.

[0004] Some of the techniques disclosed herein aim to provide efficient remote consumer support services and to reduce the incidence of technician dispatch. These techniques are useful for shortening consumer wait time, improving installation and repair outcomes, and improving customer satisfaction and independence.

[0005] Additionally, warranty verification systems in use today make it difficult for product suppliers, such as manufacturers or sellers, to provide warranty verification that is efficient in terms of time, accessibility, and accuracy to entities, such as customers or end-users, of a given product. Despite a recent push toward self-service schemes, customers have been slow to adopt self-service technologies. Today’s warranty verification models and related technologies face numerous challenges, including warranty coverage identification challenges, limited warranty eligibility verification accuracy, warranty processing fees, labor costs, service call charges, communication gaps, increasingly complex extended supply chains, and customer dissatisfaction and frustration.

[0006] Some of the techniques disclosed herein aim to provide efficient remote warranty verification services, reduce the frequency of incorrect warranty coverage indications, and limit the incidence of communication with customer support assistants. These techniques are useful for improving warranty identification outcomes and eligibility verification accuracy, shortening consumer wait time, and improving customer satisfaction and independence.

SUMMARY

[0007] Embodiments consistent with the present disclosure may provide systems and methods capable of performing artificial-intelligence-based technical support operations, such as speech and/or image analysis during a service session, to assist a user with technical support. The disclosed systems and methods may be implemented using a combination of conventional hardware and software as well as specialized hardware and software, such as a machine constructed and/or programmed specifically for performing functions associated with the disclosed embodiments. Consistent with other disclosed embodiments, non-transitory computer readable media may store program instructions, which are executed by at least one processing device and may perform any of the steps and/or methods described herein.

[0008] According to one aspect of the present disclosure, the disclosed embodiments may relate to systems, methods, and non-transitory computer readable medium for performing artificial-intelligence-based technical support operations. By way of example only, the operations may include receiving over at least one network first audio signals from a mobile communications device, the first audio signals including speech data associated with a technical support session; receiving first image signals from the mobile communications device via the at least one network, the first image signals including image data associated with a product for which support is sought; analyzing the first audio signals using artificial intelligence; analyzing the first image signals using artificial intelligence; aggregating the analysis of the first audio signals and the first image signals; based on the aggregated analysis of the first image signals and the first audio signals, accessing at least one data structure to identify an image capture instruction; presenting the image capture instruction to the mobile communications device via the at least one network, the image capture instruction including a direction to alter a physical structure identified in the first image signals and to capture second image signals of an altered physical structure; receiving from the mobile communications device second image signals via the at least one network, the second image signals corresponding to the altered physical structure; analyzing the captured second image signals using artificial intelligence; and based on the analysis of the second image signals, determining a technical support resolution status.
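
By way of a non-limiting illustration only, the flow of operations recited above might be sketched in Python roughly as follows. The analysis functions, the (issue, product) keying of the instruction table, and all names are hypothetical stand-ins for AI components and data structures that the disclosure leaves open.

from dataclasses import dataclass

@dataclass
class Aggregated:
    issue: str      # issue inferred from the speech data
    product: str    # product recognized in the image data

def analyze_audio(audio: bytes) -> str:
    # Stub standing in for AI speech analysis (speech recognition + NLP).
    return "no_internet"

def analyze_image(image: bytes) -> str:
    # Stub standing in for AI image analysis (detection/classification).
    return "router_x200"

# Illustrative data structure mapping an aggregated analysis to an
# image capture instruction and to the visual state indicating success.
INSTRUCTIONS = {
    ("no_internet", "router_x200"): {
        "instruction": "Unplug the power cable, wait 10 seconds, plug it "
                       "back in, then photograph the front LEDs.",
        "resolved_state": "leds_green",
    },
}

def support_session(audio: bytes, first_image: bytes) -> str:
    # Aggregate the audio and image analyses, then look up an instruction.
    agg = Aggregated(analyze_audio(audio), analyze_image(first_image))
    entry = INSTRUCTIONS[(agg.issue, agg.product)]
    print("To device:", entry["instruction"])   # presented over the network
    # Stub for analyzing the second image signals of the altered structure.
    second_state = "leds_green"
    return "resolved" if second_state == entry["resolved_state"] else "unresolved"

print(support_session(b"speech", b"photo"))     # -> resolved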

[0009] Embodiments consistent with the present disclosure may provide systems, methods, and non-transitory computer readable medium for performing remote artificial intelligence-assisted electronic warranty verification. The systems, methods, and non-transitory computer readable medium disclosed herein may be capable of performing remote artificial intelligence-assisted electronic warranty verification operations which may enable entities, such as a customer or end-user of a product, to automatically and remotely validate warranty coverage with respect to a specific product of interest. For example, the automatic self-service warranty verification operations may include an automatic application and/or process that may be used by remote entities to validate their warranty when a problem is encountered in a product that was recently purchased. The disclosed systems and methods may be implemented using a combination of conventional hardware and/or software as well as specialized hardware and/or software, such as a machine constructed and/or programmed specifically for performing functions associated with the disclosed embodiments. Consistent with other disclosed embodiments, non-transitory computer readable media may store program instructions, which may be executed by at least one processing device and may perform any of the steps and/or methods described herein.

[00011] According to another aspect of the present disclosure, the disclosed embodiments may relate to systems, methods, and non-transitory computer readable medium for performing remote artificial intelligence-assisted electronic warranty verification operations. By way of example only, the operations may include transmitting an instruction to an entity to capture at least one product image of a specific product; receiving the at least one product image; performing product image analysis on the at least one product image to identify at least one product-distinguishing characteristic; transmitting an instruction to the entity to capture an image of a purchase receipt for the specific product; receiving the purchase receipt image; performing receipt image analysis on the received purchase receipt image to identify product purchase information including a purchased product identity and a purchase date; accessing a universal data structure containing data on products offered by a plurality of suppliers; using the at least one product-distinguishing characteristic obtained from the image analysis on the product image and the product purchase information obtained from the image analysis on the purchase receipt to identify in the universal data structure the specific product; identifying in the universal data structure a supplier of the specific product; identifying in the universal data structure a link to a warranty data structure of the supplier; accessing the link to perform a remote lookup of the specific product in the warranty data structure of the supplier; receiving a warranty coverage indication from the warranty data structure of the supplier; and transmitting to the entity an indication of warranty coverage.
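
Again purely as a non-limiting sketch, the warranty verification flow might look as follows in Python. The OCR/vision calls, product identifiers, and example URL are assumptions standing in for implementation details the disclosure does not specify, and the "universal data structure" is modeled as an in-memory dict.

def product_characteristics(product_image: bytes) -> str:
    # Stub standing in for product image analysis (e.g., OCR of a serial sticker).
    return "SN-12345"

def receipt_info(receipt_image: bytes) -> tuple:
    # Stub standing in for receipt image analysis.
    return ("Acme Blender B-7", "2021-11-02")   # (product identity, purchase date)

# Illustrative universal data structure keyed by product-distinguishing
# characteristic plus purchased product identity.
UNIVERSAL_DB = {
    ("SN-12345", "Acme Blender B-7"): {
        "supplier": "Acme",
        "warranty_url": "https://warranty.example.com/acme",  # illustrative link
    },
}

def lookup_warranty(url: str, serial: str, purchase_date: str) -> str:
    # Stub standing in for the remote lookup in the supplier's warranty data structure.
    return "covered"

def verify_warranty(product_image: bytes, receipt_image: bytes) -> str:
    serial = product_characteristics(product_image)
    identity, date = receipt_info(receipt_image)
    entry = UNIVERSAL_DB[(serial, identity)]    # identify the specific product/supplier
    return lookup_warranty(entry["warranty_url"], serial, date)

print(verify_warranty(b"photo", b"receipt"))    # -> covered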

[00011] The foregoing general description provides only a few examples of the disclosed embodiments and is not intended to summarize all aspects of the disclosed embodiments. Moreover, the following detailed description is exemplary and explanatory only and is not restrictive of the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[00012] In order to better understand various aspects of the present disclosure and to see how they may be carried out in practice, certain embodiments will now be described, by way of non-limiting examples only, with reference to the accompanying drawings. Features shown in the drawings, which are incorporated in and constitute a part of this disclosure, are meant to be illustrative of only some embodiments of aspects of the invention, unless otherwise implicitly indicated. In the drawings, like reference numerals are used to indicate corresponding parts.

[00013] Fig. 1A is a simplified network diagram illustrating exemplary communications between a remote user and a technical support center via at least one network during an artificial-intelligence-based technical support session, consistent with at least one embodiment of the present disclosure.

[00014] Fig. 1B illustrates certain communication aspects from the perspective of the remote user illustrated in Fig. 1A, consistent with at least one embodiment of the present disclosure.

[00015] Fig. 2 is a sequence diagram illustrating possible stages in the communication establishment process of an artificial-intelligence-based technical support session, consistent with at least one embodiment of the present disclosure.

[00016] Fig. 3 is a flow chart illustrating an exemplary method for an artificial-intelligence-based technical support session, consistent with at least one embodiment of the present disclosure.

[00017] Fig. 4 is a functional flow chart schematically illustrating an artificial-intelligence-based technical support session, consistent with at least one embodiment of the present disclosure.

[00018] Figs. 5A-5D illustrate audio signals and analysis thereof during an artificial-intelligence-based technical support session, consistent with at least one embodiment of the present disclosure.

[00019] Fig. 6 illustrates image signals and analysis thereof during an artificial-intelligence-based technical support session, consistent with at least one embodiment of the present disclosure.

[00020] Fig. 7 is a block diagram illustrating components of a control unit and a data structure of the technical support center as illustrated in Fig. 1A, consistent with at least one embodiment of the present disclosure.

[00021] Fig. 8 is a functional block diagram illustrating a system of the technical support center, consistent with at least one embodiment of the present disclosure.

[00022] Figs. 9A-9F illustrate an application of the data processing unit and sequential semantic prompts relating to the solution displayed on the mobile communications device, consistent with at least one embodiment of the present disclosure.

[00023] Fig. 10 is a flow chart illustrating an exemplary method for an artificial-intelligence-based technical support session, consistent with another embodiment of the present disclosure.

[00024] Fig. 11 is a functional flow chart schematically illustrating an artificial-intelligence-based technical support session, consistent with another embodiment of the present disclosure.

[00025] Fig. 12 is a simplified network diagram illustrating exemplary communications between an entity seeking warranty verification, a warranty service center, and a product supplier via at least one network during a remote artificial intelligence-assisted electronic warranty verification session, consistent with at least one embodiment of the present disclosure.

[00026] Fig. 13 is a block diagram illustrating exemplary components of a control system of the warranty service center illustrated in Fig. 12, consistent with at least one embodiment of the present disclosure.

[00027] Fig. 14 is a sequence diagram illustrating exemplary network communications between an entity’s mobile communications device and the warranty service center via at least one network during a remote artificial intelligence-assisted electronic warranty verification session, consistent with at least one embodiment of the present disclosure.

[00028] Fig. 15 illustrates certain aspects of a remote artificial intelligence-assisted electronic warranty verification session from the perspective of the entity seeking warranty verification, consistent with at least one embodiment of the present disclosure.

[00029] Figs. 16A-16B illustrate exemplary interactive applications relating to the remote artificial intelligence-assisted electronic warranty verification session displayed on an entity’s mobile communications device, as illustrated in Fig. 15.

[00030] Fig. 17 is a flow chart illustrating exemplary image analysis operations of the remote artificial intelligence-assisted electronic warranty verification session related to at least one product image, as illustrated in Figs. 16A-16B.

[00031] Fig. 18 illustrates an exemplary interactive application relating to the remote artificial intelligence-assisted electronic warranty verification session displayed on an entity’s mobile communications device, as illustrated in Fig. 15.

[00032] Fig. 19 is a flow chart illustrating exemplary image analysis operations of the remote artificial intelligence-assisted electronic warranty verification session related to a purchase receipt image, as illustrated in Fig. 18.

[00033] Fig. 20 is a flow chart illustrating exemplary image analysis operations of the remote artificial intelligence-assisted electronic warranty verification session related to a purchase receipt image containing a plurality of products, as illustrated in Fig. 18.

[00034] Fig. 21 is a functional block diagram schematically illustrating certain aspects of a remote artificial intelligence-assisted electronic warranty verification session, consistent with at least one embodiment of the present disclosure.

[00035] Fig. 22 is a flow chart illustrating exemplary operations of the remote artificial intelligence-assisted electronic warranty verification session related to a universal data structure, consistent with at least one embodiment of the present disclosure.

[00036] Fig. 23 is a functional block diagram schematically illustrating certain aspects of a remote artificial intelligence-assisted electronic warranty verification session, consistent with at least one embodiment of the present disclosure.

[00037] Fig. 24 is a flow chart illustrating exemplary operations of the remote artificial intelligence-assisted electronic warranty verification session involving the warranty service center and the supplier, consistent with at least one embodiment of the present disclosure.

[00038] Fig. 25 is a sequence diagram illustrating exemplary network communications between the warranty service center and the supplier via at least one network during a remote artificial intelligence-assisted electronic warranty verification session, consistent with another embodiment of the present disclosure.

[00039] Fig. 26 is a flow chart illustrating exemplary operations of the remote artificial intelligence-assisted electronic warranty verification session involving the warranty service center and the supplier, consistent with another embodiment of the present disclosure.

[00040] Fig. 27 is a functional block diagram schematically illustrating certain aspects of a remote artificial intelligence-assisted electronic warranty verification session, consistent with at least one embodiment of the present disclosure.

[00041] Fig. 28 is a flow chart illustrating exemplary operations of the remote artificial intelligence-assisted electronic warranty verification session related to warranty coverage, consistent with at least one embodiment of the present disclosure.

[00042] Figs. 29A-29D illustrate exemplary interactive applications relating to the remote artificial intelligence-assisted electronic warranty verification session displayed on an entity’s mobile communications device, consistent with at least one embodiment of the present disclosure.

DETAILED DESCRIPTION

[00043] The following detailed description provides various non-limiting embodiments, or examples, for implementing different features of the provided subject matter and refers to the accompanying drawings, which are to be considered in all aspects as illustrative only and not restrictive in any manner. Wherever possible, the same reference numerals are used in the drawings and the following description to refer to the same or similar parts. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments discussed. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions, and/or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, and/or adding steps to the disclosed methods. Moreover, in certain instances, well-known or conventional details may not be described in order to provide a more concise discussion of the embodiments. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope is to be defined by the claims.

[00044] Some aspects of the present disclosure provide remote assistance techniques for identifying technical problems, defects, and improper equipment configurations, and determining a most likely solution to resolve them. More specifically, aspects of the present disclosure relate to methods, systems, and/or non-transitory computer readable medium capable of performing artificial-intelligence-based technical support operations. Aspects of the present disclosure may also be described with reference to a system, method, device, and/or computer readable medium capable of performing remote artificial intelligence-assisted electronic warranty verification operations.

[00045] Various embodiments may be described with reference to a system, method, device, and/or computer readable medium. It is intended that the disclosure of one is a disclosure of all. For example, it is to be understood that the disclosure of a computer readable medium, as described herein, may also constitute a disclosure of methods implemented by the computer readable medium, as well as systems and/or devices for implementing those methods, for example, via at least one processor. Moreover, features of the presently disclosed subject matter are, for brevity, described in the context of particular embodiments. However, it is to be understood that this form of disclosure is for ease of discussion only, and one or more aspects of one embodiment disclosed herein may also be combined with one or more aspects of other embodiments disclosed herein, within the intended scope of this disclosure. Likewise, features and/or steps described in the context of a specific combination may be considered as separate embodiments, either alone or in a context other than the specific combination disclosed.

[00046] Aspects of this disclosure relate to automated remote assistance. By way of an exemplary introduction with reference to Fig. 1A, a user 120 having technical difficulties at home or in the office may contact a remote technical support center 160 via the user’s mobile communications device 130 (e.g., cell phone). The user 120 may communicate orally through speech (input sound signals 135u) using the cell phone’s microphone (audio sensor 134) to explain the difficulties and may capture and transmit images of the defective equipment (object 110) using the cell phone’s camera (image sensor 132). The audio and image signals may be transmitted over a network, such as a cellular network 140, to a control unit 180 in the technical support center 160. The control unit 180 may be connected to a data structure 190 and a server 156. The control unit 180 accesses information within data structure 190 to interpret the image and audio signals received from the user and to communicate with the user in a way that assists the user in resolving the technical difficulties. The interpretation by the control unit and subsequent interactions with the user may occur via artificial intelligence as described herein.
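
For illustration only, the device-to-control-unit exchange described above might resemble the following Python sketch of the client side. The endpoint URL, field names, and hex encoding are assumptions, not part of the disclosure.

import json
import urllib.request

def send_signals(audio: bytes, image: bytes,
                 url: str = "https://tsc.example.com/session") -> dict:
    # Hex-encode the raw captures so they can travel in a JSON body.
    payload = json.dumps({"audio_hex": audio.hex(),
                          "image_hex": image.hex()}).encode()
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        # e.g., an image capture instruction to display to the user
        return json.load(response)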

[00047] Aspects of the disclosure may include a non-transitory computer readable medium including instructions that, when executed by at least one processor, cause the at least one processor to perform certain operations, for example, artificial-intelligence-based technical support operations. As used herein, the term “non-transitory computer readable medium” should be expansively construed to cover any medium capable of storing data in any memory in a way that may be read by any computing device having at least one processor to carry out operations, methods, or any other instructions stored in the memory. Such instructions, when executed by at least one processor, may cause the at least one processor to carry out a method for performing one or more features or methods of the disclosed embodiments.

[00048] The non-transitory computer readable medium may, for example, be implemented as hardware, firmware, software, or any combination thereof. The software may preferably be implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine including any suitable architecture. For example, the machine may be implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described in this disclosure may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. A non-transitory computer readable medium may be any computer readable medium except for a transitory propagating signal.

[00049] Moreover, the term “computer readable medium” may refer to multiple structures, such as a plurality of computer readable mediums and/or memory devices. A memory device may include a Random Access Memory (RAM), a Read-Only Memory (ROM), a hard disk, an optical disk, a magnetic medium, a flash memory, other permanent, fixed, volatile or non-volatile memory, or any other mechanism capable of storing instructions. The memory device may include one or more separate storage devices, collocated or dispersed, capable of storing data structures, instructions, or any other data. The memory device may further include a memory portion containing instructions for the processor to execute. The memory device may also be used as a working scratch pad for the processors or as temporary storage.

[00050] Some embodiments may relate to at least one processor. The term “processor” may refer to any physical device or group of devices having electric circuitry that performs a logic operation on input or inputs. For example, the at least one processor may include one or more integrated circuits (IC), including an application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a visual processing unit (VPU), an image signal processor (ISP), a server, a virtual server, or any other circuits suitable for executing instructions or performing logic operations. The instructions executed by at least one processor may, for example, be pre-loaded into a memory integrated with or embedded into a controller or may be stored in a separate memory. Moreover, the at least one processor may include more than one processor. Each processor may have a similar construction, or the processors may be of differing constructions that are electrically connected or disconnected from each other. For example, the processors may be separate circuits or integrated in a single circuit. When more than one processor is used, the processors may be configured to operate independently or collaboratively. The processors may be coupled electrically, magnetically, optically, acoustically, mechanically or by any other means that permit them to interact.

[00051] As used throughout this disclosure, the terms “processor,” “computer,” “controller,” “control unit,” “processing unit,” “computing unit,” and/or “processing module” should be expansively construed to cover any kind of electronic device, component, or unit with data processing capabilities, including, by way of a non-limiting example, a personal computer, a wearable computer, a tablet, a smartphone, a server, a computing system, a cloud computing platform, a communication device, a processor, possibly with embedded memory, a single-core processor, a multi-core processor, a core within a processor, any other electronic computing device, or any combination of the above. The operations, in accordance with the teachings disclosed herein, may be performed by a computer specially constructed or programmed to perform the described functions.

[00052] In the disclosed embodiments, the instructions included on a non-transitory computer readable medium, when executed by the at least one processor, may cause the at least one processor to perform artificial-intelligence-based technical support operations. Artificial-intelligence-based technical support operations, as used herein, may relate to a plurality of operations in which a processing system having at least one processor is configured to utilize artificial intelligence during a technical support session. While artificial intelligence may be utilized during certain technical support operations of the technical support session, it is to be understood that some technical support operations may be executed with or without the use of artificial intelligence.

[00053] As used herein, the term “technical support” may refer to any remote assistance techniques for identifying technical problems, non-technical problems, installation issues, defects, and/or improper equipment configurations; determining a most likely solution to resolve them; determining the extent to which the technical problem, non-technical problem, installation issue, defect, or improper equipment configuration has been remediated; and/or providing guidance on issues that were not resolved during a support session. Technical support may relate to a wide range of services which may be provided to remote users for any type of product. Technical support, as used herein, is not limited to support pertaining to troubleshooting issues with electronic products such as computers, mobile phones, printers, and electronic, mechanical or electromechanical systems and devices, as well as software which may be utilizable therewith, and may relate to any service session in which a remote user obtains support with respect to any object of interest, including industrial goods, consumer goods, or any other article or substance that is manufactured or refined for sale, which may require customer support. Additionally, technical support may be provided by a technical support center and may utilize a remote customer service agent, artificial intelligence, for example an automated customer service assistant, and/or a combination thereof. It is to be understood that a technical support center, as utilized herein, is not limited to a single support center, and may encompass multiple support centers in different geographic locations, or a plurality of dispersed individuals (e.g., working from home).

[00054] As used herein, the term “artificial intelligence” may refer, for example, to the simulation of human intelligence in machines or processors that exhibit traits associated with a human mind such as learning and problem-solving. Artificial intelligence, machine learning, deep learning, or neural network processing techniques may enable automatic learning through absorption of huge amounts of unstructured data such as text, audio, images, or videos and user preferences analyzed over a period of time such as through statistical computation and analysis. Some non-limiting examples of such machine learning algorithms may include classification algorithms, data regression algorithms, image segmentation algorithms, visual detection algorithms (such as object detectors, face detectors, person detectors, motion detectors, edge detectors, etc.), visual recognition algorithms (such as face recognition, person recognition, object recognition, etc.), speech recognition algorithms, mathematical embedding algorithms, natural language processing algorithms, support vector machines, random forests, nearest neighbors algorithms, deep learning algorithms, artificial neural network algorithms, convolutional neural network algorithms, recursive neural network algorithms, linear machine learning models, non-linear machine learning models, ensemble algorithms, and/or any other algorithm in which a machine or processor takes inputs and outputs simultaneously in order to “learn” the data and produce outputs when given new inputs. As used herein, artificial intelligence may relate to machine learning algorithms, also referred to as machine learning models, which may be trained using training examples, for example in the cases described below involving image recognition and processing and speech recognition and processing.

[00055] A trained machine learning algorithm may comprise an inference model, such as a predictive model, a classification model, a regression model, a clustering model, a segmentation model, an artificial neural network (such as a deep neural network, a convolutional neural network, a recursive neural network, etc.), a random forest, a support vector machine, and so forth. In some examples, the training examples may include example inputs together with the desired outputs corresponding to the example inputs. Further, in some examples training machine learning algorithms using the training examples may generate a trained machine learning algorithm, and the trained machine learning algorithm may be used to estimate outputs for inputs not included in the training examples. In some examples, engineers, scientists, processes, and machines that train machine learning algorithms may further use validation examples and/or test examples. For example, validation examples and/or test examples may include example inputs together with the desired outputs corresponding to the example inputs, a trained machine learning algorithm and/or an intermediately trained machine learning algorithm may be used to estimate outputs for the example inputs of the validation examples and/or test examples, the estimated outputs may be compared to the corresponding desired outputs, and the trained machine learning algorithm and/or the intermediately trained machine learning algorithm may be evaluated based on a result of the comparison. In some examples, a machine learning algorithm may have parameters and hyper parameters, where the hyper parameters are set manually by a person or automatically by a process external to the machine learning algorithm (such as a hyper parameter search algorithm), and the parameters of the machine learning algorithm are set by the machine learning algorithm according to the training examples. In some implementations, the hyper-parameters are set according to the training examples and the validation examples, and the parameters are set according to the training examples and the selected hyper-parameters.
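
As a concrete, non-authoritative illustration of the training, validation, and hyperparameter regime described above, the following Python sketch uses scikit-learn as one possible stand-in for "a machine learning algorithm"; the synthetic dataset and the hyperparameter grid are arbitrary choices, not drawn from the disclosure.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Example inputs together with the desired outputs corresponding to them.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Hyperparameters are set by a search process external to the algorithm
# (here, cross-validated grid search); the model's own parameters are then
# fit from the training examples.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={"n_estimators": [50, 100]}, cv=3)
search.fit(X_train, y_train)

# Held-out test examples evaluate the trained model on inputs it never saw.
print(search.best_params_, search.score(X_test, y_test))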

[00056] In the disclosed embodiments, the term “artificial-intelligence-based technical support” may relate to any remote support techniques and/or operations which may be provided by way of a technical support center utilizing artificial intelligence to resolve an issue or issues with any object of interest at, for example, a remote user's home, office, or any other remote site. As disclosed herein, artificial-intelligence-based technical support may relate to operations performed by a non-transitory computer readable medium. The operations performed by a non-transitory computer readable medium may relate to a plurality of simultaneous artificial-intelligence-based technical support sessions with a plurality of users. The techniques and/or operations disclosed herein may also be used in interactive applications to assist with the installation and/or troubleshooting of various objects of interest, including furniture, consumer electronics, appliances, and other products which may require customer support. It is to be understood that the algorithms and analytics disclosed herein may establish a self-service mechanism in which human support from the technical support center is not required during the artificial-intelligence-based technical support session.

[00057] Turning to the figures, Fig. 1A depicts a simplified network diagram illustrating exemplary communications between a remote user 120 utilizing a mobile communications device 130 and a technical support center (TSC) 160 via at least one network 140 during an artificial-intelligence-based technical support session 1100, consistent with at least one embodiment of the present disclosure. The remote user 120 may send and/or receive information during the artificial-intelligence-based technical support session 1100 to and/or from the TSC 160 with respect to an object of interest 110. For example, the user’s mobile communications device 130 may be configured to receive input sound signals 135u and input optical signals 133o corresponding to the object of interest 110 and/or elements thereof and transmit data corresponding to said input signals to a control unit 180 of the TSC 160 via at least one network 140.

[00058] The control unit 180 may be connected to a data structure 190 and at least one server and may be configured to access information within the data structure 190 to interpret the speech and audio signals received from the user and to communicate with the user 120 in a way that assists the user in resolving technical difficulties. The at least one server may be implemented as part of the TSC 160, such as remote server 156, and/or in a cloud computing infrastructure, such as remote server 154, accessible to both the remote user 120 and/or the TSC 160. In some embodiments, operations of the artificial-intelligence-based technical support session 1100 may relate to instructions included on a non-transitory computer readable medium which may be executed by at least one processor of the data structure 190.

[00059] Fig. 1B illustrates certain communication aspects of the artificial-intelligence-based technical support session 1100 from the perspective of the remote user 120 illustrated in Fig. 1A. During the artificial-intelligence-based technical support session 1100, the remote user 120 may receive information from the control unit 180 of the TSC 160 pertaining to an object of interest 110. For example, the user’s mobile communications device 130 may be configured to receive information pertaining to technical support from the control unit 180 of the TSC 160 via at least one network 140. The information received from the TSC 160 may be communicated to the user 120 auditorily (output sound signals 135m) via a speaker unit 136 of the mobile communications device 130 and/or visually (output image signals 133m) via a display unit 131 of the mobile communications device 130 in a way that assists the user in resolving technical difficulties. In some embodiments, the output image signals 133m corresponding to the object of interest 110 may appear on the display unit 131 as an annotated object 111 which may include annotations/markers 139 superimposed onto said annotated object 111. Details pertaining to the various components of the artificial-intelligence-based technical support session 1100, as depicted in Figs. 1A-1B, will be discussed in greater detail below.

[00060] In certain embodiments, the TSC may be configured to simultaneously engage in a plurality of support sessions including the support session with the mobile communications device and support sessions with a plurality of additional mobile communications devices. For example, a non-transitory computer readable medium may be configured to simultaneously engage in a plurality of artificial-intelligence-based technical support sessions including the support session 1100 with the mobile communications device 130 and multiple additional support sessions, akin to support session 1100, with a plurality of additional mobile communications devices akin to the mobile communications device 130.

[00061] In some embodiments, the artificial-intelligence-based technical support operations may be performed in an automated fashion without human intervention. Technical support conducted in this manner may enable a user seeking assistance to conduct a self-service support session in which technical support is provided through a fully automated process. For example, during the artificial-intelligence-based technical support session 1100, the control unit 180 of the TSC 160 may be configured to perform certain technical support operations in an automated fashion without intervention from a live customer service agent of the TSC 160.

[00062] Fig. 2 illustrates a sequence diagram depicting possible stages in a technical support session initiation process 1110 and an activation process 1120 of the artificial-intelligence-based technical support session 1100 between the mobile communications device 130 and the TSC 160 via at least one network 140 according to one non-limiting embodiment of the present disclosure. The technical support session initiation process 1110 of the artificial-intelligence-based technical support session 1100 may commence when a remote user calls or otherwise contacts the TSC 160 using a mobile communications device 130. At Step 1111 of the technical support session initiation process 1110, connection initiation may be performed over a cellular and/or landline network, or other communication channels such as satellite communication, voice over IP, or any type of physical or wireless computer networking arrangement used to exchange data.

[00063] When a connection is established between the user’s mobile communications device 130 and the TSC 160, at Step 1112, the TSC 160 may send an activation link, or other means for establishing communication with the TSC 160, to the mobile communications device 130. For example, an activation link may be embedded in a message, such as SMS, email, WhatsApp, or any other means for establishing communication and may contain a hyperlink, such as a URL, sent from the TSC 160 to initiate the artificial-intelligence-based technical support session 1100. Upon accessing the activation link, at Step 1113, the mobile communications device 130 may establish communication with the TSC 160 via at least one network 140. In some embodiments, establishing communication may be achieved by means of an application installed on the mobile communications device 130. Once connection between the user’s mobile communications device 130 and the TSC 160 is established via the at least one network 140, technical support session setup instructions/code may be sent to the mobile communications device 130 at Step 1114.
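
As a minimal, hypothetical sketch of Step 1112, the following shows how an activation link containing a single-use token might be generated and embedded in a message. The SESSIONS store, base URL, and send_sms() transport below are illustrative placeholders, not elements of this disclosure.

```python
# Hypothetical sketch of Step 1112: generate a single-use activation link
# and embed it in a text message. SESSIONS, the base URL, and send_sms()
# are illustrative stand-ins, not part of the disclosure.
import secrets

SESSIONS = {}  # token -> session state (in-memory stand-in for a real store)

def send_sms(phone: str, body: str) -> None:
    # Stand-in for a real SMS gateway; here it just prints the message.
    print(f"SMS to {phone}: {body}")

def create_activation_link(phone: str, base_url: str) -> str:
    token = secrets.token_urlsafe(16)  # hard-to-guess session token
    SESSIONS[token] = {"phone": phone, "state": "pending"}
    return f"{base_url}/support/activate?t={token}"

def send_activation_sms(phone: str) -> None:
    link = create_activation_link(phone, "https://tsc.example.com")
    send_sms(phone, f"Tap to start your support session: {link}")

send_activation_sms("+15550100")
```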

[00064] Once the technical support session setup code is entered and/or instructions are followed, the activation process 1120 of the artificial-intelligence-based technical support session 1100 may begin. At Step 1121 of the activation process 1120, the TSC 160 may request the remote user’s permission to access and/or activate an image sensor 132, such as a camera, and/or an audio sensor 134, such as a microphone, of the mobile communications device 130. Once the remote user approves the activation request, at Step 1122, the image sensor 132 and/or the audio sensor 134 may be activated at Step 1123, thereby enabling the TSC 160 to request that the remote user direct the image sensor 132 toward the object of interest 110 for which support is sought and/or describe information pertaining to the object of interest 110 into the audio sensor 134. Upon obtaining access to the image sensor 132 and/or audio sensor 134, and corresponding image data and/or audio data, the TSC 160 may simultaneously receive image data and/or audio data from the mobile communications device 130.

[00065] In some embodiments, the artificial-intelligence-based technical support operations may involve receiving, over at least one network, audio signals from a mobile communications device and/or receiving image signals from the mobile communications device via the at least one network. As used herein, the term at least one network may refer to a single network or multiple networks. The network may include a plurality of networks having the same or different protocol stacks which may coexist on the same physical infrastructure. The network may constitute any type of physical or wireless computer networking arrangement used to exchange data. For example, a network may be the Internet, a private data network, a virtual private network using a public network, a Wi-Fi network, a LAN or WAN network, and/or any other suitable connections that may enable information exchange among various components of the system.

[00066] In some embodiments, a network may include one or more physical links used to exchange data, such as Ethernet, coaxial cables, twisted pair cables, fiber optics, or any other suitable physical medium for exchanging data. A network may also include a public switched telephone network (“PSTN”) and/or a wireless cellular network. A network may be a secured network or unsecured network. In other embodiments, one or more components of the system may communicate directly through a dedicated communication network. Direct communications may use any suitable technologies, including, for example, BLUETOOTH™, BLUETOOTH LE™ (BLE), Wi-Fi, near field communications (NFC), or other suitable communication methods that provide a medium for exchanging data and/or information between the mobile communications device and the TSC. Signals are received over a network when they are obtained following transmission via the network.

[00067] Turning back to Fig. 1A, the disclosed embodiments may involve at least one network 140 capable of transmitting image data, audio data, and/or text data between the mobile communications device 130 and the TSC 160 during the artificial-intelligence-based technical support session 1100. For example, the mobile communications device 130 may access and send/receive data to/from the at least one server 154, 156 and/or the control unit 180 of the TSC 160 over the at least one network 140. The control unit 180 of the TSC 160 may also access and send/receive data to/from the mobile communications device 130 and/or the at least one server 154, 156 over the at least one network 140. The at least one server 154, 156 may be configured to collect and/or send information across the at least one network 140 and may be used to facilitate an artificial-intelligence-based technical support session 1100, such as a self-service video support session, between the remote user 120 and the TSC 160. The at least one server may be implemented in the TSC 160, such as remote server 156, and/or in the at least one network 140, such as a remote server 154 in a server farm or in a cloud computing environment. Optionally, the at least one server 154, 156 may be configured to carry out some of the tasks of the control unit 180 of the TSC 160, such as, but not limited to, AR functionality, tracker functionality, image recognition and/or processing, and speech recognition and/or processing.

[00068] A mobile communications device, as disclosed herein, is intended to refer to any device capable of exchanging data using any communications network. In some examples, the mobile communications device may include a smartphone, a tablet, a smart watch, a mobile station, user equipment (UE), a personal digital assistant (PDA), a laptop, a wearable sensor, an e-reader, a dedicated terminal, smart glasses, a virtual reality headset, an IoT device, and any other device, or combination of devices, that may enable user communication with a remote server, such as a server of the TSC. Such mobile communications devices may include a display such as an LED display, an augmented reality (AR) display, a virtual reality (VR) display, and any other display capable of depicting image data. A mobile communications device may also include an audio sensor and/or image sensor.

[00069] An audio sensor, as used herein, is recognized by those skilled in the art and may generally refer to any device capable of capturing audio signals by converting sounds to electrical signals. In some examples, audio sensors may include microphones, unidirectional microphones, bidirectional microphones, cardioid microphones, omnidirectional microphones, onboard microphones, wired microphones, wireless microphones, any combination of the above, and any other sound-capturing device capable of detecting and converting sound signals into electrical signals, such as analog or digital audio signals. The electrical signals may be used to form audio data based on the detected sound signals. In some cases, the audio sensor or sensors may be part of the microphone included in the remote end user’s mobile communications device. Alternatively, or additionally, the audio sensor or sensors may be part of an external device, such as a headset, that is connectable to the mobile communications device via a wired or wireless connection.

[00070] An image sensor, as used herein, is recognized by those skilled in the art and may generally refer to any device capable of detecting and converting optical input signals in the near-infrared, infrared, visible, ultraviolet, and/or any other light spectrum into electrical signals. The electrical signals may be used to form image data, such as an image, burst of images, or a video stream, based on the detected signal. Some examples of image sensors may include semiconductor charge-coupled devices (CCD), active pixel sensors in complementary metal-oxide-semiconductor (CMOS), or N-type metal-oxide-semiconductors (NMOS, Live MOS). The image sensor may also include both 2D and 3D sensors which may be implemented using different technologies such as stereo cameras, active stereo cameras, time-of-flight cameras, structured light cameras, radar, and range image cameras. In some cases, the image sensor or sensors may be part of a camera included in the remote end user’s mobile communications device. Alternatively, or additionally, the image sensor or sensors may be connectable to the mobile communications device via a wired or wireless connection.

[00071] Figs. 1A and 1B illustrate one example of a mobile communications device 130 including an audio sensor 134, a speaker unit 136, an image sensor 132, and a display unit 131. The amount of information which may be exchanged between the remote user 120 and the TSC 160 via at least one network 140 is considerably increased by using, at the remote site, a mobile communications device 130 capable of exchanging and/or receiving image data and/or audio data, and optionally text data. For example, data transferred from the mobile communications device 130 to the TSC 160 over the at least one network 140, such as data pertaining to input sound signals 135u from the user 120 and/or input optical signals 133o corresponding to the object of interest 110, may enable the TSC 160 to more accurately provide technical support to the remote user 120. In some embodiments, information transferred over the at least one network 140 to the mobile communications device 130 may be visually displayed on a display unit 131 of the mobile communications device 130 as output image signals 133m and/or auditorily produced by a speaker unit 136 of the mobile communications device 130 as output sound signals 135m.

[00072] The audio sensor 134 may be located on any portion of the mobile communications device 130 and/or connectable to the mobile communications device 130. The audio sensor 134 may be configured to receive input sound signals 135u, such as an audible description pertaining to the issue for which support is sought, from the remote user 120 such that a digital representation of the input sound signals 135u may be transferable over the at least one network 140 to the TSC 160 as audio data. Alternatively, or additionally, the audio sensor 134 may be configured to receive input sound signals 135u from the object of interest 110 which may be transferable over the at least one network 140 to the TSC 160 as audio data. The speaker unit 136 may be located on any portion of the mobile communications device 130 and/or connectable to the mobile communications device 130. In certain embodiments, the speaker unit 136 may be configured to produce output sound signals 135m, such as troubleshooting instructions, to the remote user 120.

[00073] The image sensor 132 may be located on any portion of the mobile communications device 130 and/or connectable to the mobile communications device 130. The image sensor 132 may be configured to receive input optical signals 133o from the remote user 120, such as input optical signals 133o corresponding to the object of interest 110 and/or elements thereof, such that a digital representation of said input optical signals 133o may be transferable over the at least one network 140 to the TSC 160 as image data. The display unit 131 may be located on any portion of the mobile communications device 130 and/or connectable to the mobile communications device 130. In certain embodiments, the display unit 131 may be configured to present output image signals 133m, such as an image, images, and/or video. The output image signals 133m may include troubleshooting instructions and/or annotations/markers 139 superimposed onto the annotated object 111.

[00074] According to some embodiments, the artificial-intelligence-based technical support operations may involve receiving, over at least one network, first audio signals from a mobile communications device. The first audio signals may include speech data associated with a technical support session. The term “audio signals,” as disclosed herein, may refer to any electrical representation of sound, which may, for example, be in the form of a series of binary numbers for digital signals and may be carried over digital audio interfaces, over a network using audio over Ethernet, audio over IP, or any other streaming media standards and systems known in the art. The term “speech data,” as disclosed herein, may relate to any data corresponding to audio recordings of human speech. The speech data may be paired with a text transcription of the speech and may be readable and usable for artificial intelligence speech recognition algorithms. For example, various analysis tools may be used to extract and/or analyze the speech data to identify keywords within the speech and/or aid computer vision tools.

[00075] Fig. 3 is a flow chart illustrating an exemplary method of the artificial- intelligence-based technical support session 1100 illustrated in Fig. 1A in which the TSC 160 receives audio data captured by the audio sensor 134 or sensors of the mobile communications device 130 at Step 1131. Fig. 4 is a simplified functional flow chart schematically illustrating the artificial-intelligence-based technical support session 1100, as illustrated in Fig. 1A, in which audio data is captured by the audio sensor 134 or sensors of the user’s mobile communications device 130 at Step 1101 and in which the TSC 160 receives the first audio signals 135s1 representing the audio data via the at least one network 140 at Step 1131. The following examples are presented with reference to the functional flow chart of Fig. 4 together with the network diagram of Fig. 1A, as well as the illustrations of Figs. 5A-5D.

[00076] At Step 1101 in Fig. 4, after the remote user 120 establishes connection with the TSC 160, as illustrated in Fig. 1A, the remote user 120 may, by way of input sound signals 135u, verbally describe an issue, request troubleshooting assistance, identify the type of object of interest 110 being used, and/or describe a particular issue pertaining to the object of interest 110 into the audio sensor 134. The input sound signals 135u received by the mobile communications device 130 may be transmissible, via the at least one network 140, as first audio signals 135s1. In certain embodiments, the first audio signals 135s1 may include speech data associated with an artificial-intelligence-based technical support session 1100. For example, the first audio signals 135s1 may identify a technical issue, may relate to an audio description of the type of service the remote user specifies, may describe the nature of the issue in natural language, and/or may relate to what actions the user took prior to the technical support session 1100. Alternatively, or additionally, the first audio signals 135s1 may relate to an audible demonstration of the type of issue pertaining to the object of interest 110.

[00077] The audio data acquired by the audio sensor 134 or sensors of the mobile communications device 130 may then be transmitted as first audio signals 135s1 over the at least one network 140 by wired or wireless transmission to the TSC 160 at Step 1131 in Fig. 4. In some embodiments, the first audio signals 135s1 received by the TSC 160 may be a “real-time” audio stream. The term “real-time,” as it pertains to audio signals, may relate to on-going audio data which closely approximates events as they are occurring. Such real-time audio streams or “live” feeds may allow, for example, a user 120 to auditorily communicate with respect to an ongoing issue, installation, or repair at one location, and allow the TSC 160 to receive near simultaneous audio signals pertaining to an ongoing issue, installation, or repair. Specific keywords contained in the speech data of the first audio signals 135s1 may be associated with the issues at hand and may be indicative of the type of equipment, appliance, or object of interest 110 needing support.

[00078] Figs. 5A-5D illustrate certain non-limiting applications of speech communications in which the remote user 120 requests troubleshooting assistance pertaining to a faulty item/equipment and/or the nature of the encountered problem as it pertains to the object of interest via input sound signals 135u during an artificial- intelligence-based technical support session 1100. For example, the remote user 120 may ask for troubleshooting assistance concerning the object of interest such as an electronic device, as illustrated in Fig. 5A, the remote user 120 may specify the type of problem related to the electronic device, as illustrated in Fig. 5B, the remote user 120 may report the type of electronic device being used, as illustrated in Fig. 5C, and/or the remote user 120 may report the working status of the electronic device, as illustrated in Fig. 5D.

[00079] According to some embodiments, the artificial-intelligence-based technical support operations may involve receiving first image signals from the mobile communications device via the at least one network. The first image signals may include image data associated with a product for which support is sought. The term “image data” may refer to any form of data generated based on optical signals in the near-infrared, infrared, visible, and ultraviolet spectrums (or any other suitable radiation frequency range). Consistent with the present disclosure, the image data may include pixel data streams, digital images, digital video streams, data derived from captured images, and data that may be used to construct a 3D image. As recited herein, the image data may relate to a single image, a combination or a burst of a sequence of images, a video or videos, or any combination thereof. A product for which support is sought may include any merchandise, article, hardware, software, item, or any other goods for which a customer or a user may seek assistance.

[00080] Turning back to the figures, the flow chart of Fig. 3 illustrates an exemplary method of the artificial-intelligence-based technical support session 1100 illustrated in Fig. 1A in which the TSC 160 receives image data captured by the image sensor 132 or sensors of the mobile communications device 130 at Step 1132. Fig. 4 schematically illustrates a functional flow chart of the artificial-intelligence-based technical support session 1100, as illustrated in Fig. 1A, in which image data is captured by the image sensor 132 or sensors of the mobile communications device 130 at Step 1102 and in which the TSC 160 receives the first image signals 133s1 corresponding to the image data via the at least one network 140 at Step 1132. The following examples are presented with reference to the functional flow chart of Fig. 4 together with the network diagram of Fig. 1A, as well as the illustration of Fig. 6.

[00081] At Step 1102 in Fig. 4, after the remote user 120 establishes connection with the TSC 160, the remote user 120 may, by way of input optical signals 133o corresponding to the object of interest 110 captured by the image sensor 132, visually capture a particular issue pertaining to the object of interest 110, as illustrated in Fig. 1A. The image data captured by the image sensor 132 of the mobile communications device 130 may be transmissible by wired or wireless transmission, via the at least one network 140, as first image signals 133s1 to the TSC 160. In some embodiments, the first image signals 133s1 may include image data associated with a technical support session 1100. For example, the first image signals 133s1 may contain images of the product associated with the technical issue. The first image signals 133s1 may be sent to and received by the TSC 160, via at least one network 140, over a channel or channels separate from those used for the first audio signals 135s1. Alternatively, or additionally, the image data and the audio data may be sent to the TSC 160, via at least one network 140, over a shared channel or channels.

[00082] The first image signals 133s1 may include images and/or videos of object or objects of interest for which support is sought, such as an electronic device or devices. The object of interest may include one or more functional elements. The term “functional element” may refer to any component of an object of interest, such as an electrical device, that aids in the function or operation of the object of interest. Such elements may include, for example, jacks, ports, cables, buttons, triggers, indicator lights and switches. It is to be understood that a functional element is not limited to the tangible components of the object of interest. In certain embodiments, the functional elements may relate to a widget of a graphical user interface (GUI) of the object of interest such as a button located on the screen of an object of interest with which a user may interact such that if the widget is pressed by, for example, mouse click or finger tap, some process might be initiated on the object of interest. A given object of interest may have a multitude of functional elements that are part of the object of interest and/or related to the object of interest.

[00083] Fig. 6 illustrates a non-limiting example of input optical signals produced by an object of interest 110 having a first functional element 114a, a second functional element 114b, a third functional element 114c, and a fourth functional element 114d over a period of time (t). The images shown at times t1-t4 correspond to various input optical signals captured by the image sensor 132 of the mobile communications device 130 over an interval of time (t). The captured optical signals are transmissible over the at least one network as first image signals 133s1 to at least one server and/or the TSC during the artificial-intelligence-based technical support session 1100 at Step 1132 in Fig. 4. For example, at Step 1132 the TSC 160 may receive first image signals 133s1 which indicate that a certain functional element 114, such as an LED, is blinking at a given frequency. In the non-limiting example illustrated in Fig. 6, at times t1 and t3, the second functional element 114b is “ON;” whereas at times t2 and t4, the second functional element 114b is “OFF.” Such first image signals 133s1 may be analyzable by the control unit 180 of the TSC 160 illustrated in Fig. 1A and may enable the TSC 160 to determine the nature of the encountered problem as it pertains to the object of interest 110.
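
A minimal sketch of how the blinking behavior shown at times t1-t4 might be detected is given below; it assumes grayscale frames supplied as a NumPy array and a known region of interest around the LED, both illustrative assumptions rather than details of the disclosure.

```python
# Sketch: per-frame LED ON/OFF states and blink frequency from a burst of
# grayscale frames. The ROI, threshold, and frame rate are illustrative.
import numpy as np

def led_states(frames: np.ndarray, roi, threshold: float = 128.0) -> np.ndarray:
    """frames: (T, H, W) grayscale burst; roi: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    brightness = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))
    return brightness > threshold  # True means the LED reads "ON"

def blink_frequency(states: np.ndarray, fps: float) -> float:
    # Each OFF -> ON transition marks the start of one blink cycle.
    transitions = np.count_nonzero(~states[:-1] & states[1:])
    return transitions * fps / len(states) if len(states) else 0.0

# Synthetic 8-frame burst in which the LED alternates ON/OFF (cf. t1-t4).
frames = np.zeros((8, 64, 64))
frames[::2, 20:30, 20:30] = 255.0
states = led_states(frames, (20, 30, 20, 30))
print(states, blink_frequency(states, fps=8.0))  # ~3 blinks per second
```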

[00084] Additionally, or alternatively, the first image signals 133s1 received at Step 1132 in Fig. 4 may relate to text shown in or on electronic device displays such as PC/laptop monitors and TV displays, control panels, product stickers, and/or any element of the object of interest containing recognizable text such as characters and/or symbols. For example, in the non-limiting example illustrated in Fig. 6, the first image signals 133s1 may relate to characters, such as “text 1,” “text 2,” “text 3,” and/or “text 4,” and/or symbols, such as “logo,” contained on the object of interest 110 and contained in the first image signals 133s1.
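
As one hedged illustration of recognizing such text, the sketch below applies off-the-shelf OCR to a captured frame; it assumes OpenCV and pytesseract (with a Tesseract installation) are available, and the preprocessing choices are illustrative only.

```python
# Hedged OCR sketch: read text such as "text 1" or a product sticker from
# a captured frame. Assumes OpenCV plus pytesseract with Tesseract installed.
import cv2
import pytesseract

def read_device_labels(image_path: str) -> str:
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Otsu binarization tends to make panel text and stickers easier to read.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)

# Example (hypothetical path): print(read_device_labels("router_front.png"))
```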

[00085] The image data acquired by the image sensor 132 or sensors of the mobile communications device 130 illustrated in Fig. 1A may then be transmitted as first image signals 133s1 over the at least one network 140 by wired or wireless transmission to the TSC 160 at Step 1132 in Fig. 4. In some embodiments, the first image signals 133s1 received by the TSC 160 may be a “real-time” video stream. The term “real-time,” as it pertains to image signals, may relate to on-going image data which closely approximates events as they are occurring. Such real-time video streams or “live” feeds may allow, for example, a user to visually record an ongoing issue, installation, and/or repair at one location using the mobile communications device 130 and allow the TSC 160 to receive near simultaneous image signals pertaining to the ongoing issue, installation, or repair. The “real-time” first image signals 133s1 and first audio signals 135s1 may be transmitted to at least one processor of the TSC 160 in a single flow or separate sub-flows. Alternatively, either “real-time” first image signals 133s1 or first audio signals 135s1 may be transmitted to, and received by, at least one processor of the TSC 160.

[00086] In some embodiments, the artificial-intelligence-based technical support operations may involve using artificial intelligence to analyze the first audio signals and the first image signals received from the remote end user. The term “analyzing,” as used herein, may refer to any of the above- or below-discussed artificial intelligence, machine learning, deep learning, and/or neural network processing techniques. Such techniques may, for example, enable machine learning through absorption of significant volumes of unstructured data such as text, audio, images, and/or videos, as well as user preferences analyzed over a period of time. Any suitable computing system or group of computing systems may be used to implement the analysis of image data, audio data, and/or text data using artificial intelligence.

[00087] Fig. 7 is a block diagram illustrating a non-limiting embodiment of hardware implementations of the control unit 180 of the TSC 160, as well as the data structure 190 of the TSC 160, as depicted in Fig. 1A. Fig. 8 is a functional block diagram illustrating certain components of the technical support system, consistent with at least one embodiment of the present disclosure. The control unit 180 may comprise one or more processors, such as a neural network processor, and may include a data processing unit 180p, a memory unit 180m, and an input/output unit (I/O unit) 180n. The data processing unit 180p may be configured and operable to process and analyze image data, audio data, and/or text data received from the mobile communications device using artificial intelligence and may include an image processing module 182, a video tracking module 183, a speech processing module 184, a data aggregation module 185, an optical character recognition module 188, and a comparison module 189. The memory unit 180m may be configured and operable as a non-transitory computer readable medium and/or any form of computer readable media capable of storing computer instructions and/or application programs and/or data capable of controlling the control unit 180 and may also store one or more databases. The I/O unit 180n may be configured and operable to send and/or receive data over at least one network and/or to at least one server.

[00088] The image processing module 182 may be configured and operable to process and analyze image data from the mobile communications device 130. The video tracking module 183 may be configured and operable to ensure that annotations/markers 139 superimposed onto image data remain anchored to a desired object of interest 110, and/or functional element 114 thereof, while the image sensor 132 and/or object of interest 110 move. The speech processing module 184 may be configured and operable to process and analyze the audio signals received from the mobile communications device 130. The data aggregation module 185 may be configured and operable to aggregate data, such as image data, audio data, and/or text data, from various sources, for example the speech processing module 184 and/or the image processing module 182. The optical character recognition (OCR) module 188 may be configured and operable to identify letters/symbols within image data, which may be used to guide the speech processing module 184 and/or the image processing module 182. The comparison module 189 may be configured and operable to compare a data structure 190 of stored reference data to newly acquired input image data and/or audio data.

[00089] In some embodiments, the control unit 180 may include a non-transitory computer readable medium capable of performing certain operations of the artificial-intelligence-based technical support session 1100 illustrated in Fig. 1A. However, it is to be understood, as noted above, that the non-transitory computer readable medium is not limited to such an implementation. In some embodiments, the data processing unit 180p of the control unit 180 may be configured and operable to perform certain operations of the artificial-intelligence-based technical support session 1100 using artificial intelligence via the image processing module 182, such as analyzing the first image signals 133s1, or corresponding image data, received from the mobile communications device and identifying keywords indicative of the problematic/defective object of interest, its elements/components and/or the nature of the problems experienced by the user. Additionally, or alternatively, the data processing unit 180p may be configured and operable to perform certain operations of the artificial-intelligence-based technical support session 1100 using artificial intelligence via the speech processing module 184, such as analyzing the first audio signals 135s1, or corresponding audio data, received from the mobile communications device and detecting the object of interest and/or elements related to the problem to be resolved.

[00090] According to some embodiments, the artificial-intelligence-based technical support session may involve analyzing the first audio signals using artificial intelligence. Audio signals and/or audio data may be analyzed using any of the above-mentioned artificial intelligence techniques. Alternatively, or additionally, audio signals and/or audio data may be analyzed using linguistic processing such as, for example, Natural Language Processing (NLP) techniques. Linguistic processing may involve determining phonemes (word sounds), applying phonological rules so that the sounds may be legitimately combined to form words, applying syntactic and semantic rules so that the words may be combined to form sentences, and other functions associated with identifying, interpreting, and regulating words or sentences. For example, a user may provide an audible input, such as by “speaking,” to select an automation package, an automation within the selected automation package, or a condition of an automation within a selected automation package. In some embodiments, the mapping of such spoken input to a selection may occur using a combination of linguistic processing and artificial intelligence.

[00091] Referring back to the flow chart of Fig. 3 illustrating an exemplary method of the artificial-intelligence-based technical support session 1100 illustrated in Fig. 1A, the TSC 160 may be configured to analyze, at Step 1141, the first audio signals 135s1 received from the mobile communications device 130 at Step 1131. Fig. 4 schematically illustrates the artificial-intelligence-based technical support session 1100 illustrated in Fig. 1A in which the first audio signals 135s1 received by the TSC 160 are analyzed by artificial intelligence at Step 1141. In some embodiments, the first audio signals 135s1 received by the TSC 160 from the remote user 120 at Step 1131 in Fig. 4 may be processed and analyzed by the control unit 180 of the TSC 160 illustrated in Fig. 7. For example, the speech processing module 184 of the data processing unit 180p at Step 1141 in Fig. 4 may be configured and operable to analyze first audio signals 135s1 corresponding to the remote user’s speech using various artificial intelligence technologies during the artificial-intelligence-based technical support session 1100. Various speech analysis and/or processing tools employing artificial intelligence known in the art may be used to analyze the user's speech to identify and/or extract keywords pertaining to the object of interest for which support is sought.

[00092] The keywords extracted from the first audio signals 135s1 may be associated with the issues at hand for which support is sought and may be indicative of the type of equipment, appliance, or object of interest needing support. Referring back to the example illustrated with respect to Figs. 5A-5D, the speech processing module 184 may identify that the remote user specifies the type of service the remote user needs, including support with installation, as depicted in Fig. 5A (e.g., “I need assistance with fixing a problem”), repairing a technical issue, as depicted in Fig. 5B (e.g., “my internet connection is down”), registering a product, or reporting the type of object of interest being used, as depicted in Fig. 5C (e.g., “I have a Siemens 400 Wi-Fi router”). In another example, the speech processing module 184 may identify that the remote user describes the nature of a given issue in natural language. In another example, the speech processing module 184 may identify that the remote user reports what actions were taken prior to the artificial-intelligence-based technical support session 1100. In another example, the speech processing module 184 may identify that the remote user reports, during the artificial-intelligence-based technical support session 1100, how things are coming along and/or issues the remote user is facing (e.g., “I cannot plug the cable into the green socket”). Additionally, or alternatively, the speech processing module 184 may be trained to identify and/or predict certain technical solutions based on learning through linguistic processing and/or historical records.
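
A deliberately simplified, keyword-spotting sketch of this kind of categorization follows; the keyword lists and category names are illustrative assumptions, standing in for the trained speech models contemplated above.

```python
# Simplified keyword-spotting sketch mapping utterances like those of
# Figs. 5A-5D to service categories; keyword lists are illustrative.
SERVICE_KEYWORDS = {
    "installation": ["install", "set up", "assistance with fixing"],
    "repair": ["connection is down", "not working", "broken"],
    "product_report": ["i have", "router", "model"],
}

def categorize_utterance(transcript: str) -> list[str]:
    text = transcript.lower()
    return [
        service
        for service, keywords in SERVICE_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

print(categorize_utterance("My internet connection is down"))  # ['repair']
```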

[00093] Disclosed embodiments may involve natural language processing (NLP) techniques. An NLP technique may include any method that enables machines to understand the human language. Such techniques may involve lexical (structure) analysis, parsing, semantic analysis, discourse integration, and pragmatic analysis. For example, using such techniques, the speech processing module 184 may be configured and operable to analyze the audio signals, such as the first audio signals 135s1 , using natural language processing (NLP) techniques. Optionally, the analysis and/or processing tools in the speech processing module 184 may be used to analyze and/or process varying sentence structures, dialects, and/or languages.

[00094] According to some embodiments, the artificial-intelligence-based technical support session may involve analyzing the first image signals using artificial intelligence. Image signals and/or image data may be analyzed using any of the above-mentioned artificial intelligence techniques. Alternatively, or additionally, analyzing image data may include analyzing the image data to obtain preprocessed image data, and subsequently analyzing the image data and/or the preprocessed image data to obtain the desired outcome. One of ordinary skill in the art will recognize that the following are examples, and that the image data may be preprocessed using other kinds of preprocessing methods. In some examples, the image data may be preprocessed by transforming the image data using a transformation function to obtain transformed image data, and the preprocessed image data may comprise the transformed image data. For example, the transformed image data may comprise one or more convolutions of the image data. For example, the transformation function may comprise one or more image filters, such as low-pass filters, high-pass filters, band-pass filters, all-pass filters, and so forth. In some examples, the transformation function may comprise a nonlinear function. In some examples, the image data may be preprocessed by smoothing at least parts of the image data, for example using Gaussian convolution, using a median filter, and so forth. In some examples, the image data may be preprocessed to obtain a different representation of the image data. For example, the preprocessed image data may comprise: a representation of at least part of the image data in a frequency domain; a Discrete Fourier Transform of at least part of the image data; a Discrete Wavelet Transform of at least part of the image data; a time/frequency representation of at least part of the image data; a representation of at least part of the image data in a lower dimension; a lossy representation of at least part of the image data; a lossless representation of at least part of the image data; a time ordered series of any of the above; any combination of the above; and so forth. In some examples, the image data may be preprocessed to extract edges, and the preprocessed image data may comprise information based on and/or related to the extracted edges. In some examples, the image data may be preprocessed to extract image features from the image data. Some non-limiting examples of such image features may comprise information based on and/or related to edges; corners; blobs; ridges; Scale Invariant Feature Transform (SIFT) features; temporal features; and so forth.
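
The sketch below illustrates several of the preprocessing steps named above (Gaussian smoothing, edge extraction, and a frequency-domain representation), assuming OpenCV and NumPy are available; the kernel size and thresholds are illustrative choices only.

```python
# Sketch of several preprocessing steps named above: Gaussian smoothing,
# edge extraction, and a frequency-domain representation. Kernel size and
# Canny thresholds are illustrative choices.
import cv2
import numpy as np

def preprocess(image: np.ndarray) -> dict:
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.0)  # Gaussian convolution
    edges = cv2.Canny(smoothed, 50, 150)            # extracted edges
    spectrum = np.abs(np.fft.fft2(gray))            # Discrete Fourier Transform
    return {"smoothed": smoothed, "edges": edges, "spectrum": spectrum}

# Example with a synthetic frame:
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print({k: v.shape for k, v in preprocess(frame).items()})
```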

[00095] In some embodiments, analyzing image data may comprise analyzing the image data and/or the preprocessed image data using one or more rules, functions, procedures, artificial neural networks, object detection algorithms, face detection algorithms, visual event detection algorithms, action detection algorithms, motion detection algorithms, background subtraction algorithms, inference models, and so forth. Some non-limiting examples of such inference models may include: an inference model preprogrammed manually; a classification model; a regression model; and a result of training algorithms, such as machine learning algorithms and/or deep learning algorithms, on training examples, where the training examples may include examples of data instances, and in some cases, a data instance may be labeled with a corresponding desired label and/or result. Additionally, analyzing image data may comprise analyzing pixels, voxels, point clouds, range data, etc. included in the image data.

[00096] Referring back to the exemplary method of the artificial-intelligence-based technical support session 1100 illustrated in Fig. 3, the TSC 160 may be configured to analyze, at Step 1142, the first image signals 133s1 received from the mobile communications device 130 at Step 1132. Fig. 4 schematically illustrates the artificial-intelligence-based technical support session 1100, in accordance with some embodiments, in which the first image signals 133s1 received by the TSC 160 are analyzed by artificial intelligence at Step 1142. The first image signals 133s1 received by the TSC 160 from the mobile communications device 130 at Step 1132 may be processed and/or analyzed by the control unit 180 of the TSC 160 illustrated in Fig. 7. For example, the image processing module 182 of the data processing unit 180p may be configured to analyze first image signals 133s1 corresponding to the input optical signals 133o of the object of interest 110 captured by the image sensor 132, as illustrated in Fig. 1A. Various image analysis and/or processing tools may be employed by the image processing module 182 to analyze a sequence of first image signals 133s1, such as live video, a burst of several images, or any data that includes a sequence of images, throughout the artificial-intelligence-based technical support session 1100.

[00097] The image processing module 182 illustrated in Fig. 7 may be configured to execute at least any one of the following features or multiple combinations thereof. In one example, the image processing module 182 may be configured to handle and analyze a sequence of image signals including the first image signals 133s1. In another example, the image processing module 182 may be configured to identify the more relevant images among the sequence of first image signals 133s1 and compare various differences identified within the first image signals 133s1. In another example, the image processing module 182 may be configured to provide feedback to the remote user as to the status of the artificial-intelligence-based technical support session 1100. In another example, an algorithm, such as a deep learning algorithm, may be used by the image processing module 182 to detect adjustments to the object of interest in the first image signals 133s1 to identify possible changes, modifications, adjustments, repairs, and/or any other action taken by the remote user.

[00098] In some instances, pixel analytics may be employed. Pixel analytics includes any technique that compares groups or distributions of pixels across images to identify similarities. Analytics may be performed on a target image or portion thereof and compared with prior analyses of earlier captured images. The prior analyses may be contained in a data structure. In some instances, artificial intelligence techniques may be employed to identify similarities between images. By way of example, the image processing module 182 may be configured and operable to conduct pixel analytics of the first image signals 133s1.
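
One minimal way to realize such pixel analytics is to compare color histograms between a newly captured image and a stored reference, as sketched below; the bin counts and correlation metric are illustrative assumptions.

```python
# Pixel analytics sketch: compare the color distribution of a new image
# against a stored reference histogram. Bin counts and the correlation
# metric are illustrative choices.
import cv2
import numpy as np

def histogram_similarity(image_a: np.ndarray, image_b: np.ndarray) -> float:
    hists = []
    for image in (image_a, image_b):
        hist = cv2.calcHist([image], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hists.append(cv2.normalize(hist, hist).flatten())
    # Correlation of 1.0 means the pixel distributions match exactly.
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

reference = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(histogram_similarity(reference, reference))  # 1.0 for identical images
```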

[00099] In one non-limiting example, the image processing module 182 may be configured and operable to conduct LED analysis on the object of interest. For example, the image processing module 182 may be configured to analyze a blinking LED, identify the blinking color (e.g., white/green/red), and/or identify the blinking frequency (e.g., slow/fast). Referring back to the example illustrated with respect to Fig. 6, a sequence of images that were captured rapidly as a burst of several image signals extracted from a video stream may be processed and analyzed by the image processing module 182. For example, the image processing module 182 may be configured to recognize the object of interest 110 and each of the relevant functional elements 114 (i.e., first functional element 114a, second functional element 114b, third functional element 114c, and fourth functional element 114d), wherein each of these functional elements 114 represents a different LED (e.g., power LED, Ethernet LED, Internet LED, and WPS LED). Upon recognizing the various functional elements 114, the image processing module 182 may also identify the individual characteristics, such as color, of the individual functional elements 114a-114d, and any characters and/or symbols on or near, and pertaining to, the individual functional elements 114a-114d.

[000100] The image processing module 182 may also be configured to identify the status of the functional elements 114a-114d (e.g., ON/OFF) and/or the blinking frequency of the functional elements 114a-114d (e.g., slow/fast) over a period of time (t). For example, the LED light of the third functional element 114c may indicate the status of Internet connectivity. The LED of the third functional element 114c may be in three different states: solid blue (Internet status is functioning properly); OFF (Internet access is down); or blinking blue (Internet access is functioning but unstable). An artificial intelligence vision algorithm may be configured to, via the image processing module 182, distinguish between each of these states by determining whether the LED is blinking, ON, or OFF. Moreover, in another example, additional states may be indicated by different blinking rates, paces, patterns, sequences, colors, or combinations thereof. In this embodiment, the artificial intelligence vision algorithm may be configured to detect the rate, pace, pattern, sequence, color, and/or combination thereof to distinguish between the different states of the respective functional elements 114. Additionally, or alternatively, the image processing module 182 may be configured to analyze functional elements 114 which may relate to a widget of a graphical user interface (GUI) of the object of interest 110 such as a button or text located on the screen of an object of interest 110. For example, in the case of screen/display analysis of the object of interest 110, the image processing module 182 may be configured to track sequences of messages that are shown on said screen/display of the object of interest 110 and conclude information pertaining to the status of a setup or recovery process.
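
Building on the per-frame ON/OFF readings sketched earlier, the following minimal function distinguishes the three LED states described above; the exact decision rule is an illustrative assumption.

```python
# Sketch: classify the three LED states described above from per-frame
# ON/OFF readings (e.g., from the earlier led_states() sketch). The
# decision rule is an illustrative assumption.
import numpy as np

def classify_led(states: np.ndarray) -> str:
    on_ratio = states.mean()
    if on_ratio == 1.0:
        return "solid"     # e.g., Internet functioning properly
    if on_ratio == 0.0:
        return "off"       # e.g., Internet access is down
    return "blinking"      # e.g., Internet functioning but unstable

print(classify_led(np.array([True, False, True, False])))  # blinking
```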

[000101] In certain embodiments, the image processing module 182 and/or the video tracking module 183 illustrated in Fig. 7 may utilize computer vision tools having tracking capabilities. For example, the image processing module 182 may be configured to identify complex objects of interest and/or functional elements. In another example, the video tracking module 183 may be configured to allow tracking of such complex objects of interest in sophisticated/challenging visioning conditions (e.g., poor lighting, poor resolution, and/or noisy conditions). These capabilities may be implemented by use of neural network tools. Optionally, the image processing module 182 may be configured to filter nonessential image data such that background images, for example the room in which the object of interest is located, may be removed. The above-mentioned techniques, alone or in combination, may enable the system of the TSC 160 to more accurately diagnose a technical solution to a problem pertaining to the object of interest. In this way, a multitude (e.g., thousands) of video streams can be analyzed by the image processing module 182 implementing the techniques disclosed herein to assist in the artificial-intelligence-based technical support session 1100.

[000102] In some embodiments, the image processing module 182 may be configured to enable the data processing unit 180p to conduct and draw conclusions from a root cause analysis using multiple images that were captured during the artificial-intelligence-based technical support session 1100, an agent-guided session, or via another process. For example, the image processing module 182 may be configured to employ root cause analysis which may include any one of the following features or multiple combinations thereof. In one example, multiple images may be captured and the image signals, such as first image signals 133s1, may be analyzed according to a fixed scripted flow, conditionally based on user 120 feedback, and/or conditionally based on individual analysis of the image signals. In another example, the image processing module 182 may be configured to extract and visually analyze meaningful information from each of the images contained in the image signals. In another example, the image processing module 182 may be configured to consult an AI-based knowledge system stored in, for example, the memory unit 180m and/or the data structure 190 illustrated in Fig. 7. The AI-based knowledge system may provide a collection of analyzed information from all images. In another example, the image processing module 182 may be configured to communicate with the knowledge system to generate a probable root cause based on aggregated information and a corresponding recovery guideline.
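
A toy sketch of such a root cause consultation follows: per-image findings are merged and a small rule base, standing in for the AI-based knowledge system, maps the combined evidence to a probable root cause. The findings and rules are illustrative assumptions.

```python
# Toy root-cause sketch: merge per-image findings and apply a small rule
# base standing in for the AI-based knowledge system. Findings and rules
# are illustrative assumptions.
FINDINGS = [
    {"view": "front_panel", "internet_led": "off"},
    {"view": "rear_panel", "wan_cable": "unplugged"},
]

def probable_root_cause(findings: list) -> str:
    merged = {}
    for finding in findings:
        merged.update(finding)
    if merged.get("wan_cable") == "unplugged":
        return "WAN cable disconnected; guide the user to reconnect it."
    if merged.get("internet_led") == "off":
        return "Upstream connectivity failure; guide the user to reboot."
    return "No root cause identified; escalate."

print(probable_root_cause(FINDINGS))
```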

[000103] In one non-limiting example, several images related to a reported problem in the stability of internet access may be obtained by an image sensor 132 of a mobile communications device 130 and the corresponding image signals may be analyzed by the image processing module 182 illustrated in Fig. 7 during an artificial-intelligence-based technical support session 1100 illustrated in Fig. 1A. The several image signals may relate to various functional elements of an object of interest 110, such as a router, for example, LEDs (as displayed on a front panel of an object of interest), cables (as displayed on a rear panel of the object of interest), a PC screen showing information regarding a Wi-Fi setup of the object of interest, and a wall socket including the cable connected to the object of interest. As discussed above, the functional elements need not be a tangible component of the object of interest. Various functional elements may be located in the same image sensor field of view as the object of interest or a different field of view than the object of interest. Additionally, the various functional elements may be located in the same image sensor field of view as the other functional elements or a different field of view than the other functional elements. Thus, the image processing module 182 may be configured to employ artificial intelligence to analyze all images and integrate the information to determine conclusive information regarding the root cause and corresponding recovery guidance.

[000104] In some embodiments, the operations may further comprise using the analysis of the first audio signals 135s1 to categorize subject matter of the first image signals 133s1. The image processing module 182, as illustrated in Fig. 8, may receive information from the speech processing module 184 corresponding to the analysis of the first audio signals 135s1, as analyzed at Step 1141 in Fig. 4, in order to more efficiently categorize the subject matter of the first image signals 133s1 being analyzed by the image processing module 182. For example, the image processing module 182 may be configured to receive information from the speech processing module 184 pertaining to the specific type of issue the user is having with the object of interest being used in order to more efficiently conduct an analysis of the first image signals 133s1.

[000105] Additionally, or alternatively, the operations may further comprise using the analysis of the first image signals 133s1 to interpret the first audio signals 135s1. In certain embodiments, the speech processing module 184, as illustrated in Fig. 8, may be configured to receive information from the image processing module 182 corresponding to the analysis of the first image signals 133s1, as analyzed at Step 1142 in Fig. 4, in order to more efficiently interpret the first audio signals 135s1 being analyzed by the speech processing module 184. For example, the speech processing module 184 may be configured to receive information from the image processing module 182 pertaining to the identity of the object of interest 110 being used in order to more efficiently conduct an analysis of the first audio signals 135s1 by utilizing the analysis of the first image signals 133s1.

[000106] In some embodiments, the artificial-intelligence-based technical support operations may involve aggregating the analysis of the first audio signals and the first image signals. As used herein, aggregating may refer to the combined use of both audio signals and image signals to achieve a result or gain an understanding. For example, audio or video alone may each be insufficient to gain a sufficient understanding of a problem or defect; however, when both are considered, the problem or defect may be identified. In another sense, aggregating may refer to the process of gathering data from various inputs and/or sources for use in achieving a goal. The data may be gathered from at least one data source and may be combined into a summary format for further data analysis. For example, aggregated information from various inputs and/or sources may be combined via data integration in which data from the various inputs and/or sources may be consolidated into a unified dataset in order to provide more concise and/or valuable information in a time-efficient manner. Aggregated information from multiple data sources allows for more meaningful and richer information regarding a collective environment which would have been difficult or impossible to obtain using only a single data source.

[000107] Referring back to Figs. 3-4, during Step 1143 of the artificial-intelligence-based technical support session 1100, as illustrated in Fig. 1A, the TSC 160 may be configured to aggregate information pertaining to the analysis of the first audio signals 135s1, as analyzed at Step 1141, and/or information pertaining to the analysis of the first image signals 133s1, as analyzed at Step 1142. In some embodiments, during the data aggregation step, Step 1143, the data aggregation module 185, as illustrated in Fig. 7, may be configured and operable to receive information pertaining to the first image signals 133s1 as analyzed by the image processing module 182 and/or information pertaining to the first audio signals 135s1 as analyzed by the speech processing module 184.

[000108] The artificial-intelligence-based technical support session 1100 may aggregate information pertaining to the analysis of the first audio signals 135s1, information pertaining to the analysis of the first image signals 133s1, and/or additional information stored in, for example, the memory unit 180m and/or the data structure 190 illustrated in Fig. 7. The aggregation of various types of data may occur simultaneously or discretely over a period of time or at varying intervals of time. It is to be understood that, where only image signals, such as the first image signals 133s1, or audio signals, such as the first audio signals 135s1, are received by the TSC 160, the artificial-intelligence-based technical support session 1100 may still employ an artificial intelligence algorithm to analyze the image signals or audio signals, and optionally additional data, and integrate the information to form a conclusive determination regarding the root cause of the issue for which support is sought. Optionally, analyzing the first audio signals 135s1 at Step 1141, analyzing the first image signals 133s1 at Step 1142, and aggregating the analysis of the first audio signals 135s1 and the first image signals 133s1 at Step 1143 may occur in a singular process during the artificial-intelligence-based technical support session 1100.

[000109] In some embodiments, based on the aggregated analysis of the first image signals and the first audio signals, the artificial-intelligence-based technical support operations may involve accessing at least one data structure and/or database to identify an image capture instruction. Accessing a data structure, as disclosed herein, may include reading and/or writing information from/to the data structure. The term data structure may relate to a more advanced knowledge-base database from which options, such as a fixed set of prioritized operations, are selected based on some environment of usage conditions. The data structure may include an artificial intelligence-based system that has been trained with data of past cases, their conditions, and the optimal solution of each case.

[000110] The data structure may include one or more memory devices that store data and/or instructions. The data structure may utilize a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible or non-transitory computer readable medium, or any medium or mechanism for storing information. The data structure may include any of a plurality of suitable data structures, ranging from small data structures hosted on a workstation to large data structures distributed among data centers. The data structure may also include any combination of one or more data structures controlled by memory controller devices (e.g., server(s), etc.) or software.

[000111] The data structure may include a database, which may relate to any collection of data values and relationships among them. The data may be stored linearly, horizontally, hierarchically, relationally, non-relationally, uni-dimensionally, multidimensionally, operationally, in an ordered manner, in an unordered manner, in an object-oriented manner, in a centralized manner, in a decentralized manner, in a distributed manner, in a custom manner, or in any manner enabling data access. By way of non-limiting examples, data structures may include an array, an associative array, a linked list, a binary tree, a balanced tree, a heap, a stack, a queue, a set, a hash table, a record, a tagged union, an ER model, and a graph. For example, a data structure may include an XML database, an RDBMS database, an SQL database, or NoSQL alternatives for data storage/search such as, for example, MongoDB, Redis, Couchbase, Datastax Enterprise Graph, Elastic Search, Splunk, Solr, Cassandra, Amazon DynamoDB, Scylla, HBase, and Neo4J. A data structure may be a component of the disclosed system or a remote computing component (e.g., a cloud-based data structure). Data in the data structure may be stored in contiguous or non-contiguous memory. Moreover, a data structure, as used herein, does not require information to be co-located. It may be distributed across multiple servers, for example, that may be owned or operated by the same or different entities. Thus, the term “data structure” as used herein in the singular is inclusive of plural data structures.
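
As a purely illustrative example of how one such stored record might be laid out in a document-style store, consider the following Python fragment; all field names and values are hypothetical assumptions, not part of the disclosure.

# Hypothetical shape of one past-case record (cf. record module 192):
past_case_record = {
    "classification": "LAN connectivity",        # problem type assigned by module 191
    "conditions": {"object": "dsl_router",       # environment of usage conditions
                   "symptoms": ["dsl_led_off"]},
    "solution": "Reconnect the DSL cable to the phone wall socket",
    "success_count": 42,                          # times this solution resolved the issue
}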

[000112] As shown in Figs. 7-8, the data processing unit 180p of the control unit 180 may be configured and operable to consult the data structure 190 such that audio signals analyzed by the speech processing module 184 and/or image signals analyzed by the image processing module 182, and/or the aggregate thereof, may be compared, via comparison module 189, against information stored in the data structure 190. The data structure 190 may serve as a source of information and an advanced knowledgebase configured to store reference data from which support options may be selected based on an environment of usage conditions. The data structure 190 may include a classification module 191 and a record module 192. The classification module 191 may be configured and operable to classify data structure records contained in the record module 192, and/or verify the classification determined for each data structure record contained in the record module 192.

[000113] Additionally, a machine learning process of the TSC 160 illustrated in Fig. 1A, employing any suitable state-of-the-art machine learning algorithm having machine vision and/or deep learning capabilities, may be used to troubleshoot an object of interest 110 during the artificial-intelligence-based technical support session 1100. The machine learning process of the TSC 160 may be, for example, incorporated into the data structure 190 and/or the data processing unit 180p. In one non-limiting example, the data structure 190 may be an artificial intelligence system which may be trained with data of past support sessions, their conditions, and/or optimal solutions of past support sessions. For example, the data structure 190 may be configured and operable to log and analyze the users' interactions with the system during various support sessions to identify common user errors. In this way, a dynamic data structure 190 may be constructed such that deep learning algorithms may be used to learn the nature of an issue/defect encountered pertaining to a given object of interest 110 for which support is sought and the best working solutions based on previous related sessions, and to construct data structure records cataloguing the successful solutions.

[000114] In some embodiments, the TSC 160 may be configured to maintain and utilize the data structure 190 to determine a best-matched solution for resolving the remote user's problems and may use a deep learning tool, such as the comparison module 189 illustrated in Fig. 8. For example, the comparison module 189 may be configured and operable to analyze data structure records of the data structure 190 relating to previously conducted support sessions, classify the data structure records of the data structure 190 according to the specific type of problem dealt with in various data structure records (e.g., LAN connectivity, wireless connectivity, bandwidth, data communication rates, etc.), identify keywords and/or objects/elements mentioned/acquired during the artificial-intelligence-based technical support session 1100, and rank/weigh each data structure record of the data structure 190 according to the number of times it was successfully used to resolve a problem of a particular type/classification. The comparison module 189 may then use the detected problems/defects to determine which past solutions from the data structure 190 may be presented to the user's mobile communications device 130 as a semantic prompt 170 corresponding to the best working solution.
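
The classify-then-rank behavior attributed to the comparison module may be sketched as follows; the record shape, scoring rule, and sample data are assumptions made only for illustration.

def rank_solutions(records, problem_class, session_keywords):
    # Keep records of the detected problem class, then rank them by how often
    # each solution succeeded, breaking ties by keyword/symptom overlap.
    def score(record):
        overlap = len(set(session_keywords) & set(record["conditions"]["symptoms"]))
        return (record["success_count"], overlap)
    matches = [r for r in records if r["classification"] == problem_class]
    return sorted(matches, key=score, reverse=True)

records = [
    {"classification": "LAN connectivity", "success_count": 42,
     "conditions": {"symptoms": ["dsl_led_off"]},
     "solution": "Reconnect the DSL cable"},
    {"classification": "LAN connectivity", "success_count": 7,
     "conditions": {"symptoms": ["ethernet_unplugged"]},
     "solution": "Plug in the Ethernet cable"},
]
print(rank_solutions(records, "LAN connectivity", ["dsl_led_off"])[0]["solution"])
# -> Reconnect the DSL cable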

[000115] Thus, the data structure 190 may be referenced and/or utilized to analyze classified image data and/or audio data and to deliver a semantic prompt 170 to the user during the artificial-intelligence-based technical support session 1100 based on the “lessons” learned from past support sessions related to a certain class of problem. The semantic prompt 170 may be a machine-generated response related to the best working solution for a given object of interest which may indicate what the user is to do next in the artificial-intelligence-based technical support session 1100. For example, the semantic prompt 170 may be a question to the user with the intention to better understand the current status of the object of interest (e.g., “When was it last working properly?”; “Have you pressed the reset button?”; “Is the top LED light blinking?”) and/or some instruction to the user to apply to the object of interest before moving on with the support session (e.g., “Please turn on the power button and confirm after you are done”; “Disconnect the Ethernet cable and confirm after you are done”).

[000116] After aggregating the first audio signals 135s1 and/or the first image signals 133s1 at Step 1143 in Fig. 3, the TSC 160 may be configured and operable to read and/or write information from/to the data structure 190 via the comparison module 189 of the data processing unit 180p illustrated in Fig. 8. At Step 1150 in Figs. 3-4, the comparison module 189 may compare the aggregated analysis of the first audio signals 135s1 analyzed by the speech processing module 184 and/or the first image signals 133s1 analyzed by the image processing module 182 to reference data in the data structure records of the data structure 190. For example, once the data processing unit 180p determines that the aggregated analysis of the newly acquired first audio signals 135s1 and/or first image signals 133s1 correlates to the same objects, issues, and/or defects as the reference data of the data structure records, the comparison module 189 of the TSC 160 may access the at least one data structure 190 and/or database in order to identify and/or generate one or more semantic prompts 170, such as an image capture instruction 175. The image capture instruction 175 may relate to an instruction and/or guidance which may be provided to the remote user 120.

[000117] In some embodiments, the artificial-intelligence-based technical support operations may involve presenting the image capture instruction to the mobile communications device via the at least one network. The image capture instruction may include a direction to alter a physical structure identified in the first image signals and to capture second image signals of an altered physical structure. As described herein, a direction to alter a physical structure may relate to any instruction which may prompt the user to change, modify, adjust, repair, and/or take any other action with respect to a device or product for which support is sought, or a functional element thereof. Additionally, or alternatively, the direction to alter a physical structure may relate to any instruction which may prompt the user to change, modify, adjust, repair, and/or take any other action with respect to a peripheral device or product of the primary device or product of interest, or a functional element thereof, which is functionally related to the device or product for which support is sought. The physical structure, as defined herein, may relate to any tangible component, such as a cord or LED, and/or any intangible component, such as a widget of a graphical user interface (GUI) displayed on a screen, of a device or product for which support is sought.

[000118] Referring back to the flow chart of Fig. 3 illustrating an exemplary method of the artificial-intelligence-based technical support session 1100 and the functional flow chart of Fig. 4 schematically illustrating the artificial-intelligence-based technical support session 1100, as illustrated in the network diagram of Fig. 1A, the TSC 160 may be configured to present the image capture instruction 175 at Step 1160 to the user’s mobile communications device 130. As illustrated in Fig. 8, the image capture instruction 175 determined by the data processing unit 180p may be transmitted to the user’s mobile communications device 130. The image capture instruction 175 may be transmitted to the mobile communications device 130 as image data 133, audio data 135, and/or text data via the I/O unit 180n illustrated in Fig. 7 over at least one network. Figs. 9A-9F illustrate certain non-limiting applications of the data processing unit 180p and sequential semantic prompts 170, such as the image capture instruction 175, displayed on the mobile communications device 130 during the artificial-intelligence-based technical support session 1100, as illustrated in the network diagram of Fig. 1A.

[000119] Fig. 9A illustrates one example of the user 120 capturing an image of the object of interest 110 using the mobile communications device 130 during an artificial-intelligence-based technical support session 1100, wherein a semantic prompt 170 is displayed on the mobile communications device 130. In this example, the mobile communications device 130 may be configured to present to the user 120 the semantic prompt 170 as well as annotations/markers 139 superimposed onto the annotated object 111. The annotations/markers 139 superimposed onto the annotated object 111 may be automatically generated using artificial intelligence and may outline a digital representation of the object of interest 110 in real-time. Fig. 9B illustrates one example of various annotations and markers superimposed onto various functional elements of the annotated object 111, as depicted on the display unit 131 of the mobile communications device 130. The various markers and annotations illustrated in this example include marker 1771 outlining an ethernet cable as noted in annotation 1781, marker 1772 outlining a camera as noted in annotation 1782, marker 1773 outlining a battery as noted in annotation 1783, and marker 1774 outlining a base station as noted in annotation 1784. Fig. 9C illustrates one example of the user 120 capturing an image of the object of interest 110 such that an annotated object 111 having real-time automated markings corresponding to functional element 114g of the object 110 is depicted on the display unit 131 of the mobile communications device 130. The annotated object 111 includes a marker 1775 highlighting a digital representation of the functional element 114g. In this example, the marker 1775 identifies the location of an ethernet port (functional element 114g).

[000120] Figs. 9D-9F illustrate an image capture instruction 175 displayed on the mobile communications device 130 directing the user 120 to alter the physical structure of the object of interest 110. For example, in Figs. 9D-9E the image capture instruction 175 may instruct the user 120 to scan an ethernet cable, functional element 114h (e.g., “scan the ethernet cable”), to be plugged into the object of interest 110. The image capture instruction 175 may be automatically presented to the user 120 in real-time with marker 1776 automatically superimposed onto a digital representation of functional element 114h. The image capture instruction 175 may further instruct the user 120 to plug the ethernet cable into the object of interest 110 (e.g., “plug the ethernet cable here”), as illustrated in Fig. 9F. The image capture instruction 175 may be automatically presented to the user 120 in real-time with marker 1775 highlighting a digital representation of functional element 114g.
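
Superimposing a marker and its annotation onto a frame, as in Figs. 9A-9F, might look like the following sketch. It assumes the OpenCV library and a bounding box already produced by the image analysis; the coordinates, colors, and label are illustrative only.

import cv2  # assumed dependency; any drawing library would serve

def annotate_frame(frame, box, label):
    # Draw a marker (cf. markers 1771-1776) around a detected functional
    # element and place its annotation just above the box.
    x, y, w, h = box
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, label, (x, max(y - 8, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame

# e.g., annotate_frame(frame, (120, 80, 60, 40), "ethernet port")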

[000121] In one non-limiting example, the user 120 may record the process of plugging the ethernet cable, functional element 114h, into an ethernet port of the object of interest 110, functional element 114g, such that image data acquired by the mobile communications device 130 may be transmitted as second image signals 133s2 over the at least one network 140 to the TSC 160, as illustrated in Fig. 4 at Step 1106. Alternatively, the user 120 may plug the ethernet cable, functional element 114h, into functional element 114g of the object of interest 110 and then capture image data depicting the ethernet cable, functional element 114h, plugged into the object of interest 110 after the instructions have been followed by the user 120.

[000122] In other non-limiting examples, the image capture instruction 175 may, for example, prompt the user 120 to “switch the Audio and Video cables at the back of your TV,” “press the reset button for 5 seconds,” “type ‘reboot’ at your PC,” “press hard to lock your washing machine door,” or “fill at least half of the coffee machine water tank.” The semantic prompt 170, or image capture instruction 175, may be a visual and/or audible message. Optionally, an error notification may appear on the mobile communications device 130 such that a color change (e.g., from red to green) occurs when the user 120 correctly responds to the semantic prompt 170. It is to be understood that, in certain embodiments, an unsuccessful attempt to alter the physical structure of the object of interest 110 may constitute an alteration of the physical structure of the object of interest 110 as disclosed herein.

[000123] In some embodiments, the artificial-intelligence-based technical support operations may involve receiving from the mobile communications device second image signals corresponding to the altered physical structure via the at least one network. Second image signals corresponding to the altered physical structure may include the output of an image sensor capturing a change to the physical structure that occurred after the first image signals were captured.

[000124] Referring back to the flow chart of Fig. 3 illustrating an exemplary method of the artificial-intelligence-based technical support session 1100 and the functional flow chart of Fig. 4 schematically illustrating the artificial-intelligence-based technical support session 1100 illustrated in the network diagram of Fig. 1A, image data captured by the user’s mobile communications device 130 at Step 1105 may be received by the TSC 160 as second image signals 133s2 at Step 1172. In certain embodiments, the second image signals 133s2 may include image data relating to a modification, adjustment, repair, and/or any other action taken by the user with respect to the object of interest, or a functional element thereof. Additionally, or alternatively, the second image signals 133s2 may include image data relating to a modification, adjustment, repair, and/or any other action taken by the user with respect to a peripheral device or product of the object of interest, or a functional element thereof, which is functionally related to the object of interest for which support is sought.

[000125] The second image signals 133s2 received by the TSC 160 may represent a live video, a burst of several images, or any data that includes a sequence of images. In some embodiments, the second image signals 133s2 received by the TSC 160 may be a “real-time” video stream which closely approximates events as they are occurring. Such real-time video streams or “live” feeds may allow, for example, a user’s mobile communications device 130 to visually record an ongoing issue, installation, and/or repair at one location, and the TSC 160 to receive near simultaneous second image signals 133s2 pertaining to the ongoing issue, installation, or repair. The image data acquired by the image sensor 132 of the mobile communications device 130, as illustrated in Fig. 1A, may be transmitted as second image signals 133s2 over the at least one network 140 by wired or wireless transmission to the TSC 160. In certain embodiments, the second image signals 133s2 may be sent to and received by the TSC 160, via at least one network 140, over the same channel or channels as the first image signals 133s1.
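
Receiving such a near-real-time feed might be sketched as follows, assuming the stream is reachable at a known URL and decodable by OpenCV; the URL, frame budget, and handling are illustrative assumptions rather than the disclosed transport.

import cv2  # assumed dependency

def receive_second_image_signals(stream_url, max_frames=300):
    # Yield frames from a live feed (cf. second image signals 133s2 at Step 1172).
    capture = cv2.VideoCapture(stream_url)
    try:
        for _ in range(max_frames):
            ok, frame = capture.read()
            if not ok:  # stream ended or a break in transmission occurred
                break
            yield frame
    finally:
        capture.release()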

[000126] In some embodiments, operations of the artificial-intelligence-based technical support session may involve, after presenting the image capture instruction, receiving from the mobile communications device second audio signals via the at least one network. The second audio signals may correspond to a status of the altered physical structure. Receiving the second audio signals may involve processes previously described for receiving the first audio signals.

[000127] In the artificial-intelligence-based technical support session 1100 illustrated in Fig. 4, second image signals 133s2 may be sent with second audio signals 135s2 at Step 1106. The process of transmitting the second audio signals 135s2 may involve processes previously discussed for sending the first audio signals 135s1 to the TSC 160 at Step 1101 in Fig. 4. For example, at Step 1106 of the artificial-intelligence-based technical support session 1100, the user may report to the TSC 160 on how things are coming along or on any issues that the user is facing (e.g., “I cannot plug the cable into the green socket”). The second audio signals 135s2 may be transmitted to and received by the at least one processor of the TSC 160 along with the second image signals 133s2 in a single flow or in separate sub-flows. It is to be understood that the second audio signals 135s2 and/or the second image signals 133s2 may be transmitted to the TSC 160 without signal interruption relative to the transmission of the first audio signals 135s1 and/or the first image signals 133s1. Alternatively, the second audio signals 135s2 and/or second image signals 133s2 may be transmitted to the TSC 160 after a break in signal transmission relative to the transmission of the first audio signals 135s1 and/or the first image signals 133s1.

[000128] In some embodiments, the artificial-intelligence-based technical support operations may involve analyzing the captured second image signals using artificial intelligence. In a general sense, analyzing the captured second image signals using artificial intelligence may occur in a manner similar to the analysis of the first image signals previously discussed.

[000129] Referring back to the flow chart of Fig. 3 illustrating an exemplary method of the artificial-intelligence-based technical support session 1100 and the functional flow chart of Fig. 4 schematically illustrating the artificial-intelligence-based technical support session 1100 illustrated in the network diagram of Fig. 1A, the TSC 160 may be configured to analyze, at Step 1182, the second image signals 133s2 received from the mobile communications device 130 at Step 1172 using artificial intelligence. At Step 1182, the second image signals 133s2 received by the TSC 160 from the remote user’s mobile communications device 130 at Step 1172 may be processed and/or analyzed by the control unit 180 of the TSC 160 using, for example, the image processing module 182 of the data processing unit 180p illustrated in Fig. 8. In some embodiments, the second image signals 133s2 analyzed using artificial intelligence at Step 1182 in Figs. 3-4 may be analyzed in a manner similar to the analysis of the first image signals 133s1 using artificial intelligence at Step 1142, as discussed above.

[000130] The following non-limiting examples are presented with reference to the functional flow chart of Fig. 4 together with the functional block diagram of Fig. 8. In one example, the image processing module 182 illustrated in Fig. 8 may be configured to handle and analyze a sequence of several second image signals 133s2. In another example, the image processing module 182 may be configured to identify the most relevant images among the sequence of second image signals 133s2 and compare various differences identified within the second image signals 133s2 and/or compare differences between the first image signals 133s1 and the second image signals 133s2. In another example, the image processing module 182 may be configured to adapt to various bandwidth levels. In another example, the image processing module 182 may be configured to provide feedback to the mobile communications device 130 as to the ending of the artificial-intelligence-based technical support session 1100. In another example, an algorithm, such as a deep learning algorithm, may be used by the image processing module 182 to detect a problematic object of interest in the first image signals 133s1 and to identify possible issues/defects causing the problem encountered by the remote user. In another example, the image processing module 182 may be configured and operable to conduct pixel analytics of the first image signals 133s1 and/or the second image signals 133s2. In a general sense, analyzing the captured second image signals 133s2 using pixel analytics may occur in a manner similar to the analysis of the first image signals 133s1 using pixel analytics previously discussed.

[000131] In another example, a sequence of images captured rapidly by the mobile communications device 130 as a burst of several second image signals 133s2 may be extracted from a video stream and may be processed and analyzed by the image processing module 182 illustrated in Fig. 8. For example, the image processing module 182 may be configured and operable to identify a change in the rate, pace, pattern, sequence, and/or color, or any combination thereof, of the functional elements 114a-114d illustrated in Fig. 6, as compared to the earlier obtained first image signals 133s1, to determine an updated status of the object of interest 110. Thus, the image processing module 182 may be configured to employ artificial intelligence in the analysis of the first image signals 133s1 and the second image signals 133s2, and to integrate the information, to reach a conclusive decision regarding the root cause of the technical issue for which support is sought.
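
A minimal form of the pixel analytics described above, comparing an LED region between the first and second image signals, might look like this; the region coordinates and brightness threshold are assumptions, and frames are taken to be NumPy image arrays.

import numpy as np  # frames assumed to be NumPy image arrays

def led_state(frame, region, threshold=128):
    # Classify an LED (cf. functional elements 114a-114d) as on/off from the
    # mean brightness of its pixel region.
    x, y, w, h = region
    return "on" if frame[y:y + h, x:x + w].mean() > threshold else "off"

def led_changed(first_frame, second_frame, region):
    # True when the user's alteration toggled the LED between the two captures.
    return led_state(first_frame, region) != led_state(second_frame, region)

dark = np.zeros((100, 100), dtype=np.uint8)
lit = np.full((100, 100), 255, dtype=np.uint8)
print(led_changed(dark, lit, region=(40, 40, 10, 10)))  # -> True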

[000132] In some embodiments, based on the analysis of the second image signals, the artificial-intelligence-based technical support operations may involve determining a technical support resolution status. Determining a technical support resolution status, as used herein, may involve concluding whether the subject issue has been resolved or whether the issue has not been resolved. Determining the resolution status may alternatively involve concluding whether the issue is not resolvable or whether additional communication is needed to resolve the issue. The resolution status may be provided to a user seeking technical support as an indication that the issue has been resolved during the automated technical support session or that human intervention may be required to resolve the issue.

[000133] By way of example, after analyzing the captured second image signals 133s2 corresponding to the altered object of interest 110 using artificial intelligence at Step 1182 of the artificial-intelligence-based technical support session 1100 illustrated in Figs. 3-4, the TSC 160 may be configured and operable to determine a technical support resolution status 179 at Step 1190. At Step 1190, the comparison module 189 may compare the analysis of the second image signals 133s2 analyzed by the image processing module 182 to reference data in the data structure records of the data structure 190, as illustrated in Fig. 8. In some embodiments, the comparison module 189 of the TSC 160 may access the at least one data structure 190 in order to determine whether the newly acquired second image signals 133s2, as compared to the previously acquired first image signals 133s1, contain resolution indications similar to those in the reference data of the data structure records. Thus, the comparison module 189 may be configured and operable to analyze the outcome of the altering actions to determine the technical support resolution status 179. The technical support resolution status 179 determined at Step 1190 may then be transmitted to the mobile communications device 130 as image data, audio data, and/or text data via at least one network.
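
In its simplest form, the status determination at Step 1190 reduces to checking whether the indications extracted from the second image signals match the "resolved" indications stored with the matched record. The sketch below makes that comparison explicit; all field names are hypothetical.

def resolution_status(second_image_analysis, matched_record):
    # Compare newly observed indications against the reference data's
    # resolution indications (cf. comparison module 189 at Step 1190).
    expected = set(matched_record.get("resolved_indications", []))
    observed = set(second_image_analysis.get("indications", []))
    return "resolved" if expected and expected <= observed else "unresolved"

print(resolution_status({"indications": ["dsl_led_on"]},
                        {"resolved_indications": ["dsl_led_on"]}))
# -> resolved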

[000134] In some embodiments, the technical support resolution status may include an indication that a technical support issue is resolved. By way of example, in determining a technical support resolution status 179 at Step 1190 of the artificial-intelligence-based technical support session 1100 illustrated in Figs. 3-4, the data processing unit 180p of the TSC 160 may conclude that the technical issue pertaining to the object of interest 110 for which support is sought has been resolved. The TSC 160 may then present a technical support resolution status 179 to the mobile communications device 130 indicating that the issue has been resolved (e.g., “Great! All is working well” or “It is very likely that your device works okay now but we recommend that you double check in a couple of days”) and/or the TSC 160 may terminate the artificial-intelligence-based technical support session 1100.

[000135] Alternatively, the technical support resolution status may include an indication that a technical support issue is not resolved. By way of example, in determining a technical support resolution status 179 at Step 1190 of the artificial-intelligence-based technical support session 1100 illustrated in Figs. 3-4, the data processing unit 180p may conclude that the technical issue pertaining to the object of interest 110 for which support is sought has not been resolved. In the event that the technical support resolution status 179 includes an indication that the technical issue pertaining to the object of interest 110 for which support is sought has not been resolved, certain operations of the artificial-intelligence-based technical support session 1100 may be conducted iteratively in real time until the user’s problem is resolved.

[000136] In some embodiments, when the technical support resolution status includes an indication that a technical support issue has not been resolved, the artificial-intelligence-based technical support operations may involve sending a prompt to the mobile communications device seeking additional audio signals and/or additional image signals, analyzing, following receipt and in an aggregated fashion, the additional audio signals and the additional image signals, performing an additional lookup in the at least one data structure to determine an associated remedial measure, and/or presenting the remedial measure to the mobile communications device.

[000137] By way of example, in the event that the technical support resolution status 179, illustrated in Fig. 4, includes an indication that the technical issue pertaining to the object of interest for which support is sought has not been resolved, the TSC 160 may be configured to request that the user provide additional audio signals and/or additional image signals upon determining at Step 1190 that a technical support issue has not been resolved. In a general sense, requesting the additional audio signals and/or additional image signals may occur in a manner similar to the requesting of the second image signals 133s2 and/or second audio signals 135s2 discussed herein.

[000138] Upon receiving additional audio signals and/or additional image signals from the user’s mobile communications device 130, the TSC 160 may be configured to analyze the additional audio signals and/or the additional image signals following receipt in an aggregated fashion. In a general sense, analyzing and aggregating the additional audio signals and/or additional image signals may occur in a manner similar to the analyzing and aggregating of the first image signals 133s1 and/or first audio signals 135s1 previously discussed. Upon analyzing the additional audio signals and/or additional image signals from the user’s mobile communications device 130, the TSC 160 may be configured to perform an additional lookup in the at least one data structure 190 to determine an associated remedial measure. In a general sense, performing the additional lookup in the at least one data structure 190 may occur in a manner similar to accessing at least one data structure 190 to identify a semantic prompt 170 and/or an image capture instruction 175 as discussed herein. Upon determining an associated remedial measure, the TSC 160 may be configured to present the remedial measure to the mobile communications device 130. In a general sense, presenting the remedial measure may occur in a manner similar to presenting the image capture instruction 175 previously discussed.
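
Taken together, these operations form an iterative loop. The sketch below strings them together around a hypothetical session object; the method names and the demo stub exist only to make the control flow concrete and are not the disclosed implementation.

def support_loop(session, max_rounds=3):
    # Iterate until resolved or the round budget is exhausted (cf. [000135]-[000138]).
    for _ in range(max_rounds):
        audio, images = session.request_additional_signals()  # prompt the device
        analysis = session.aggregate(audio, images)           # aggregated analysis
        session.present(session.lookup_remedial_measure(analysis))
        if session.resolution_status() == "resolved":
            return "resolved"
    return "escalate_to_human_agent"                          # cf. [000139]

class DemoSession:
    # Tiny stub so the sketch runs; a real session would exchange signals
    # with the mobile communications device 130 over the network 140.
    rounds = 0
    def request_additional_signals(self):
        self.rounds += 1
        return "audio", "images"
    def aggregate(self, audio, images):
        return {"round": self.rounds}
    def lookup_remedial_measure(self, analysis):
        return f"remedial measure #{analysis['round']}"
    def present(self, measure):
        print("prompt:", measure)
    def resolution_status(self):
        return "resolved" if self.rounds >= 2 else "unresolved"

print(support_loop(DemoSession()))  # -> resolved after two rounds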

[000139] In some embodiments, when the technical support resolution status includes an indication that a technical support issue has not been resolved, the artificial-intelligence-based technical support operations may involve linking the mobile communications device to a human agent for further assistance. By way of example, in the event that the technical support resolution status 179, illustrated in Fig. 4, includes an indication that the technical issue pertaining to the object of interest for which support is sought has not been resolved, the TSC 160 may present a technical support resolution status 179 to the user’s mobile communications device 130 including guidance on issues that were not resolved during the artificial-intelligence-based technical support session 1100 and/or may link the mobile communications device 130 to a human technical support agent for further assistance. For example, the TSC 160 may indicate that a live service agent may be contacted to resolve the user’s problem (e.g., “Some issues were not resolved during this session. Let me connect you with a live agent for further assistance.”) or may indicate that the issue is irresolvable (e.g., “We figure that your product has a severe problem. Let me connect you with our online store for free replacement.”).

[000140] When the remote user successfully resolves the problem by following the instructions, which may be annotated/superimposed on the mobile communications device 130, the problem and/or solution may be stored in the record module 192 of the data structure 190 illustrated in Figs. 7-8. In some embodiments, the data structure 190 may serve as a cloud or other database record system. By storing various problems and/or solutions in the data structure 190, a database of working solutions may be gradually established. In certain embodiments, the database of working solutions stored in the data structure 190 may be used by the TSC 160 to more quickly and efficiently solve future problems. Alternatively, the data structure 190 may form an artificial intelligence, whereby the artificial intelligence of the data structure 190 solves the problem using the stored image data and/or audio data. For example, the data structure 190 may automatically identify issues pertaining to the object for which support is sought using stored image data and/or audio data and may identify appropriate augmented indicators to be presented to the remote user 120, such as the annotations/markers 139 illustrated in Fig. 1B.

[000141] According to another embodiment, operations of the artificial- intelligence-based technical support session may involve, based on the aggregated analysis of the first audio signals and the first image signals, accessing the at least one data structure containing a plurality of semantic prompts; using the aggregated analysis of the first audio signals and the first image signals to select a first semantic prompt from the at least one data structure; presenting the first semantic prompt to the mobile communications device via the at least one network; receiving from the mobile communications device a first response to the first semantic prompt via the at least one network; analyzing the first response to the first semantic prompt; and based on the analysis of the first response, accessing the at least one data structure to identify the image capture instruction.

[000142] Fig. 10 is a flow chart illustrating an exemplary method for an artificial-intelligence-based technical support session 1200. In a general sense, the technical support system for performing operations of the artificial-intelligence-based technical support session 1200 may be configured in a manner similar to the system for performing operations of the artificial-intelligence-based technical support session 1100 illustrated in the network diagram of Fig. 1A. Additionally, in some embodiments, certain aspects of the technical support operations illustrated in the flow chart of Fig. 10 may be performed in a manner similar to the previously discussed technical support operations illustrated in the flow chart of Fig. 3. Fig. 11 is a simplified functional flow chart schematically illustrating the artificial-intelligence-based technical support session 1200, in accordance with some embodiments. In a general sense, certain aspects of the artificial-intelligence-based technical support session 1200 illustrated in Fig. 11 may be similar to aspects of the artificial-intelligence-based technical support session 1100 illustrated in Fig. 4. The following examples are presented with reference to Figs. 10-11 together with the network diagram of Fig. 1A.

[000143] In one embodiment, the artificial-intelligence-based technical support operations may involve receiving over at least one network first audio and/or first image signals from a mobile communications device. In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the TSC 160 may be configured to, at Step 1231, receive first audio signals 135s1' corresponding to the audio data captured by the audio sensor of the mobile communications device 130 at Step 1201. Additionally, the TSC 160 may be configured to, at Step 1232, receive first image signals 133s1' corresponding to the image data captured by the image sensor of the mobile communications device 130 at Step 1202. In some embodiments, the steps of presenting the first audio signals 135s1' at Step 1201 and/or the first image signals 133s1' at Step 1202 from the mobile communications device 130 to the TSC 160 may occur in a manner similar to the operations disclosed above at Steps 1101 and 1102 of Figs. 3-4, respectively. Additionally, in some embodiments, the steps of receiving the first audio signals 135s1' at Step 1231 and/or the first image signals 133s1' at Step 1232 may be similar to the operations disclosed above at Steps 1131 and 1132 of Figs. 3-4, respectively.

[000144] In one embodiment, the artificial-intelligence-based technical support operations may involve analyzing the first audio signals and/or the first image signals using artificial intelligence. In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, once the first audio signals 135s1' are received by the TSC 160 from the mobile communications device 130 at Step 1231, the first audio signals 135s1' may be processed and analyzed by the control unit 180 of the TSC 160 illustrated in Figs. 7-8 using, for example, the speech processing module 184 of the data processing unit 180p at Step 1241 of Figs. 10-11. Additionally, or alternatively, once the first image signals 133s1' are received by the TSC 160 from the mobile communications device 130 at Step 1232, the first image signals 133s1' may be processed and/or analyzed by the control unit 180 of the TSC 160 illustrated in Figs. 7-8 using, for example, the image processing module 182 of the data processing unit 180p at Step 1242 of Figs. 10-11. In some embodiments, the steps of analyzing the first audio signals 135s1' at Step 1241 and/or the steps of analyzing the first image signals 133s1' at Step 1242 may occur in a similar manner to the operations disclosed above at Steps 1141 and 1142 of Figs. 3-4, respectively.

[000145] In another embodiment, the artificial-intelligence-based technical support operations may involve aggregating the analysis of the first audio signals and the first image signals. In a general sense, aggregating the analysis of the first audio signals and the first image signals may occur in a manner similar to aggregating the analysis of the first audio signals and the first image signals previously discussed. In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the TSC 160 may be further configured to aggregate information, at Step 1243, pertaining to the analysis of the first audio signals 135s1', as analyzed by the speech processing module 184 at Step 1241, and/or information pertaining to the analysis of the first image signals 133s1', as analyzed by the image processing module 182 at Step 1242, using the data aggregation module 185 illustrated in Figs. 7-8. In some embodiments, aggregating the analysis of the first audio signals 135s1' and/or the first image signals 133s1' at Step 1243 of Figs. 10-11 may occur in a similar manner to the operations disclosed above at Step 1143 of Figs. 3-4.

[000146] In another embodiment, the artificial-intelligence-based technical support operations may involve accessing the at least one data structure containing a plurality of semantic prompts. Accessing the at least one data structure containing a plurality of semantic prompts may be based on the aggregated analysis of the first audio signals and/or the first image signals. In a general sense, accessing the at least one data structure containing a plurality of semantic prompts based on the aggregated analysis of the first audio signals and/or the first image signals may occur in a manner similar to accessing the at least one data structure to identify an image capture instruction based on the aggregated analysis of the first audio signals and/or the first image signals previously discussed.

[000147] In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the TSC 160 may be configured to access, at Step 1251a, the at least one data structure 190 illustrated in Figs. 7-8 containing a plurality of semantic prompts 170 based on the aggregated analysis of the first image signals 133s1' and the first audio signals 135s1' at Step 1243. In some embodiments, the comparison module 189 may use detected problems/defects in the first image signals 133s1' and/or the first audio signals 135s1' to determine which past solutions from the data structure 190 may be presented to the mobile communications device 130 as a semantic prompt 170 corresponding to a best working solution. For example, the comparison module 189 may be used to analyze classified image data and/or audio data in the data structure 190 to deliver at least one semantic prompt 170 to the mobile communications device 130 during the artificial-intelligence-based technical support session 1200 based on the “lessons” learned from past support sessions related to a certain class of problem.

[000148] In other embodiments, during the technical support session 1100, the data processing unit 180p illustrated in Fig. 8 may be configured and operable to access at least one data structure 190 and/or database in order to identify and/or generate one or more semantic prompts 170 which may be presented to the mobile communications device 130. The semantic prompts 170 may include a question presented as synthesized speech. Additionally, or alternatively, the semantic prompts 170 may include a question presented in text form. The semantic prompts 170 may seek information about the technical issue and/or seek information about a change occurring after the alteration of the physical structure of the object of interest 110.

[000149] In another embodiment, the artificial-intelligence-based technical support operations may involve using the aggregated analysis of the first audio signals and the first image signals to select a first semantic prompt from the at least one data structure. The first semantic prompt may seek information about the technical issue for which support is sought. In a general sense, selecting the first semantic prompt from the at least one data structure using the aggregated analysis of the first audio signals and the first image signals may occur in a manner similar to the selecting of an image capture instruction using the aggregated analysis of the first audio signals and the first image signals previously discussed.

[000150] In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the TSC 160 may be configured to use the aggregated analysis of the first audio signals 135s1' and/or the first image signals 133s1' to select a first semantic prompt 171a from the at least one data structure 190, at Step 1252. The aggregated information from the first audio signals 135s1' and the first image signals 133s1' may enable the TSC 160 to select the most relevant information or guidance to convey back to the user 120 as the first semantic prompt 171a. The first semantic prompt 171a may seek information from the user 120 about the technical issue for which support is sought during the artificial-intelligence-based technical support session 1200. For example, the first semantic prompt 171a may be a prompt for a response from the user 120 related to the best working solution for the object of interest 110, which may indicate what the user 120 is to do next in the artificial-intelligence-based technical support session 1200.
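
Selecting the first semantic prompt can be viewed as a best-match lookup over a prompt table keyed by conditions. The fragment below is a minimal sketch; the prompt table, matching rule, and wording are illustrative assumptions only.

PROMPTS = [
    {"when": {"object": "dsl_router", "issue": "no internet"},
     "text": "Please verify that your DSL cable is connected to the phone wall socket."},
    {"when": {"object": "dsl_router"},
     "text": "Is the top LED light blinking?"},
]

def select_first_prompt(aggregated_analysis):
    # Pick the prompt whose conditions best match the aggregated analysis
    # (cf. first semantic prompt 171a selected at Step 1252).
    def match_count(prompt):
        return sum(1 for key, value in prompt["when"].items()
                   if aggregated_analysis.get(key) == value)
    return max(PROMPTS, key=match_count)["text"]

print(select_first_prompt({"object": "dsl_router", "issue": "no internet"}))
# -> Please verify that your DSL cable is connected to the phone wall socket.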

[000151] In another embodiment, the artificial-intelligence-based technical support operations may involve presenting the first semantic prompt to the mobile communications device via the at least one network. In a general sense, presenting the first semantic prompt may occur in a manner similar to presenting the image capture instruction previously discussed. In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, after the first semantic prompt 171a is selected from the at least one data structure 190, the TSC 160 may be configured to present the first semantic prompt 171a to the mobile communications device 130 via the at least one network 140 at Step 1253a. As depicted in Fig. 8, a semantic prompt 170, such as the first semantic prompt 171a, may be transmitted to the mobile communications device 130 as image data, audio data, and/or text data over at least one network.

[000152] In some embodiments, the first semantic prompt 171a illustrated in Fig. 11 may relate to a question presented in text form and/or a question presented as synthesized speech. For example, the first semantic prompt 171a may be a question, presented in text form and/or synthesized speech, to the user’s mobile communications device 130 with the intention of better understanding the current status of the object of interest (e.g., “When was it last working properly?”; “Have you pressed the reset button?”; “Is the top LED light blinking?”) and/or some instruction, presented in text form and/or synthesized speech, to the user’s mobile communications device 130 to apply to the object of interest before moving on with the artificial-intelligence-based technical support session 1200 (e.g., “Please turn on the power button and confirm after you are done”; “Disconnect the Ethernet cable and confirm after you are done”).

[000153] In another non-limiting example, during the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the user may communicate to the TSC 160, via first audio signals 135s1' received by the TSC 160 at Step 1231, that the Internet connection is down and, via first image signals 133s1', identify the object of interest and relevant functional elements thereof, such as a router front panel. The TSC 160 may be configured to analyze the first image signals 133s1' and to identify the type as a DSL router. Combining the user's verbal statement contained in the first audio signals 135s1' with the first image signals 133s1', the TSC 160 may conclude that the DSL cable is not connected properly to a wall socket. The generated first semantic prompt 171a in this example might be “please verify that your DSL cable is connected to the phone wall socket and confirm back.”

[000154] In another embodiment, the artificial-intelligence-based technical support operations may involve receiving from the mobile communications device a first response to the first semantic prompt via the at least one network. In a general sense, receiving from the mobile communications device a first response to the first semantic prompt may occur in a manner similar to receiving the image signals and/or receiving the audio signals previously discussed. At Step 1254a of the artificial-intelligence-based technical support session 1200 illustrated in Fig. 10, the TSC 160 may be configured to receive a first response 172a to the first semantic prompt 171a that is presented by the user 120 at Step 1203. In some embodiments, the steps of receiving the first response 172a may be similar to the steps of receiving the first audio signals 135s1' and/or the first image signals 133s1', as disclosed above at Steps 1231 and 1232, respectively.

[000155] In another embodiment, the artificial-intelligence-based technical support operations may involve analyzing the first response to the first semantic prompt. In a general sense, analyzing the first response to the first semantic prompt may occur in a manner similar to analyzing the audio signals and/or the image signals previously discussed. In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the TSC 160 may be configured to, upon receiving from the mobile communications device 130 a first response 172a to the first semantic prompt 171a via the at least one network 140 at Step 1254a, analyze the first response 172a to the first semantic prompt 171a, at Step 1255a. In some embodiments, the step of analyzing the first response 172a may be similar to the steps of analyzing the first audio signals 135s1' and/or the first image signals 133s1', as disclosed above at Steps 1241 and 1242, respectively.

[000156] In another embodiment, the artificial-intelligence-based technical support operations may involve, based on the analysis of the first response, accessing the at least one data structure to identify the image capture instruction. In a general sense, accessing the at least one data structure to identify the image capture instruction based on the analysis of the first response may occur in a manner similar to accessing at least one data structure to identify an image capture instruction based on the aggregated analysis of the first image signals and the first audio signals previously discussed.

[000157] In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the TSC 160 may be configured to access, based on the analysis of the first response 172a at Step 1255a, the at least one data structure 190 to identify the image capture instruction 175', at Step 1256. At Step 1256, the comparison module 189 may compare the analysis of the first response 172a to reference data in the data structure records of the data structure 190 in order to identify and/or generate one or more semantic prompts 170, such as an image capture instruction 175'. The image capture instruction 175' may relate to an instruction and/or guidance which may be provided to the remote user 120. In some embodiments, the step of accessing the at least one data structure 190 to identify the image capture instruction 175', at Step 1256, may be similar to the step of accessing the at least one data structure 190 to identify the image capture instruction 175, as disclosed above at Step 1150. For example, once the data processing unit 180p determines that the analysis of the first response 172a to the first semantic prompt 171a contains a response and/or objects, issues, or defects similar to the reference data of the data structure records, the comparison module 189 of the TSC 160 may access at least one data structure 190 and/or database to identify the image capture instruction 175' to be presented to the user 120.

[000158] In another embodiment, the artificial-intelligence-based technical support operations may involve presenting the image capture instruction to the mobile communications device via the at least one network, the image capture instruction including a direction to alter a physical structure identified in the first image signals and to capture second image signals of an altered physical structure. In a general sense, presenting the image capture instruction to the mobile communications device via the at least one network may occur in a manner similar to presenting the image capture instruction to the mobile communications device via the at least one network previously discussed. Additionally, the image capture instruction may be similar to the image capture instruction previously discussed.

[000159] In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the TSC 160 may be configured to, upon accessing and identifying the proper image capture instruction 175' to be presented to the user 120 at Step 1256, present the image capture instruction 175' to the user 120 via the at least one network 140 at Step 1260. In some embodiments, the step of presenting the image capture instruction 175' at Step 1260 may be similar to the step of presenting the image capture instruction 175, as disclosed above at Step 1160.

[000160] In another embodiment, the artificial-intelligence-based technical support operations may involve receiving from the mobile communications device second image signals via the at least one network. The second image signals may correspond to the altered physical structure. In a general sense, receiving from the mobile communications device second image signals may occur in a manner similar to receiving from the mobile communications device second image signals previously discussed. In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, in response to the image capture instruction 175' presented at Step 1260, the user may respond to the image capture instruction 175' by presenting second image signals 133s2' corresponding to the input optical signals of the altered object of interest as captured by the image sensor of the mobile communications device 130 at Step 1206, which are then received by the TSC 160 at Step 1272. In some embodiments, the step of receiving second image signals 133s2' from the mobile communications device 130 at Step 1272 may be similar to the step of receiving second image signals 133s2 from the mobile communications device 130 at Step 1172.

[000161] In another embodiment, the artificial-intelligence-based technical support operations may involve analyzing the captured second image signals using artificial intelligence. In a general sense, analyzing the captured second image signals using artificial intelligence may occur in a manner similar to analyzing the captured second image signals using artificial intelligence previously discussed. In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the second image signals 133s2' received by the TSC 160 from the remote user 120 at Step 1272 may be processed and/or analyzed by the control unit 180 of the TSC 160 using, for example, the image processing module 182 of the data processing unit 180p at Step 1282. In some embodiments, the receiving of the second image signals 133s2' at Step 1272 may be similar to the receiving of the second image signals 133s2 at Step 1172, as discussed above. Upon receiving the second image signals 133s2' at Step 1272, the TSC 160 may be configured to analyze, at Step 1282, the second image signals 133s2' received from the mobile communications device 130 using artificial intelligence. In some embodiments, the analysis of the second image signals 133s2' using artificial intelligence at Step 1282 may be similar to the analysis of the second image signals 133s2 using artificial intelligence, as discussed above at Step 1182.

[000162] According to another embodiment, the artificial-intelligence-based technical support operations may involve, based on the analysis of the second image signals, accessing the at least one data structure containing the plurality of semantic prompts; using the second image signals to select a second semantic prompt from the at least one data structure; presenting the second semantic prompt to the mobile communications device via the at least one network; receiving from the mobile communications device a second response to the second semantic prompt via the at least one network; analyzing the second response to the second semantic prompt; and based on the analysis of the second response, determining the technical support resolution status.

[000163] The following examples are presented with reference to the flow chart depicting an exemplary method for an artificial-intelligence-based technical support session 1200 illustrated in Fig. 10 and the simplified functional flow chart schematically depicting the artificial-intelligence-based technical support session 1200, in accordance with some embodiments, illustrated in Fig. 11, together with the network diagram of Fig. 1A.

[000164] In some embodiments, the artificial-intelligence-based technical support operations may involve accessing the at least one data structure to retrieve from the at least one data structure a second semantic prompt. The second semantic prompt may seek information about a change occurring after the alteration of the physical structure. Accessing the at least one data structure containing the plurality of semantic prompts may be based on the analysis of the second image signals such that the second image signals are used to select a second semantic prompt from the at least one data structure. In a general sense, accessing the at least one data structure to retrieve from the at least one data structure a second semantic prompt may occur in a manner similar to accessing the at least one data structure to identify an image capture instruction and/or accessing at least one data structure containing a plurality of semantic prompts previously discussed. Additionally, using the aggregated analysis of the second audio signals and/or the second image signals to select a second semantic prompt may occur in a manner similar to using the aggregated analysis of the first audio signals and the first image signals to select a first semantic prompt.

[000165] In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the TSC 160 may be configured to access, at Step 1251b, the at least one data structure 190 illustrated in Fig. 8 containing the plurality of semantic prompts 170. Accessing the at least one data structure 190 containing the plurality of semantic prompts 170 at Step 1251b may be based on the analysis of the second image signals 133s2' at Step 1282 and/or an analysis of the second audio signals 135s2'. In certain embodiments, the comparison module 189 may use the detected problems/defects, or adjustments made to the detected problems/defects, as contained in the second image signals 133s2' and/or the second audio signals 135s2', to determine which past solutions from the data structure 190 may be presented to the user 120 as a second semantic prompt 171b, corresponding to a best working solution. In some embodiments, the steps of accessing the at least one data structure at Step 1251b based on the analysis of the second image signals 133s2' and/or the second audio signals 135s2' may be similar to the steps of accessing the at least one data structure at Step 1251a based on the analysis of the first audio signals 135s1 and/or the first image signals 133s1, as disclosed above at Steps 1241 and 1242, respectively, and optionally at Step 1243.

[000166] In some embodiments, the artificial-intelligence-based technical support operations may involve presenting the second semantic prompt to the mobile communications device via the at least one network. In a general sense, presenting the second semantic prompt may occur in a manner similar to presenting the first semantic prompt previously discussed. In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, once the second semantic prompt 171b is selected from the at least one data structure 190, the TSC 160 may be configured to present the second semantic prompt 171b to the mobile communications device 130 via the at least one network 140 at Step 1253b.

[000167] In some embodiments, the artificial-intelligence-based technical support operations may involve receiving from the mobile communications device a second response to the second semantic prompt via the at least one network. In a general sense, receiving from the mobile communications device a second response to the second semantic prompt may occur in a manner similar to receiving the first response to the first semantic prompt previously discussed. At Step 1254b of the artificial-intelligence-based technical support session 1200 illustrated in Fig. 10, the TSC 160 may be configured to receive a second response 172b to the second semantic prompt 171b that is presented by the user's mobile communications device 130 at Step 1207. In some embodiments, the steps of receiving the second response 172b to the second semantic prompt 171b may be similar to the steps of receiving the first response 172a to the first semantic prompt 171a, as disclosed above at Step 1254a.

[000168] In another embodiment, the artificial-intelligence-based technical support operations may involve analyzing the second response to the second semantic prompt. In a general sense, analyzing the second response to the second semantic prompt may occur in a manner similar to analyzing the first response to the first semantic prompt previously discussed. In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the TSC 160 may be configured to, upon receiving from the mobile communications device 130 a second response 172b to the second semantic prompt 171b via the at least one network 140 at Step 1254b, analyze the second response 172b to the second semantic prompt 171b, at Step 1255b. In some embodiments, the steps of analyzing the second response 172b to the second semantic prompt 171b may occur in a manner similar to the step of analyzing the first response 172a to the first semantic prompt 171a, as disclosed above at Step 1255a.

[000169] In another embodiment, the artificial-intelligence-based technical support operations may involve, based on the analysis of the second response, determining a technical support resolution status. In a general sense, determining a technical support resolution status based on the analysis of the second response may occur in a manner similar to determining a technical support resolution status based on the analysis of the second image signals previously discussed.

[000170] In the artificial-intelligence-based technical support session 1200 illustrated in Figs. 10-11, the TSC 160 may be configured and operable to, after analyzing the second response 172b to the second semantic prompt 171b at Step 1255b, determine a technical support resolution status 179' at Step 1290. In some embodiments, the step of determining a technical support resolution status 179' at Step 1290 may be similar to the step of determining a technical support resolution status 179' as disclosed above at Step 1190. Additionally, at Step 1290, the comparison module 189 illustrated in Fig. 8 may access the at least one data structure 190 in order to determine whether the second response 172b to the second semantic prompt 171b contains similar resolution indications as the reference data of the data structure records or whether the second response 172b to the second semantic prompt 171b contains information indicating that additional support is required. In the event that the second response 172b includes information indicating that additional support is required, certain operations of the artificial-intelligence-based technical support session 1200 may be conducted iteratively in real time until the user's problem is resolved. Thus, the comparison module 189 may be configured and operable to analyze the information contained in the second response 172b to the second semantic prompt 171b and/or the altering actions identified in the second image signals 133s2' relating to a modification, adjustment, repair, and/or any other action taken by the user 120 with respect to the object of interest 110, or a functional element 114 thereof, to determine the technical support resolution status 179'.
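
To make the iterative flow concrete, the following is a minimal sketch (not part of the disclosure) of a resolution loop in which responses are analyzed until one indicates resolution; the analysis step is a toy placeholder for the AI-based comparison described above.

```python
# Hedged illustration only: a toy stand-in for the iterative resolution
# flow of session 1200, not the patent's actual implementation.
def support_session(responses):
    """Analyze successive user responses until one indicates resolution."""
    for step, response in enumerate(responses, start=1):
        if "fixed" in response:           # placeholder for AI-based analysis
            return f"resolved at step {step}"
    return "additional support required"  # would trigger another iteration

print(support_session(["LED still blinking", "rebooted, now fixed"]))
```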

[000171] Accordingly, the systems, methods, and non-transitory computer readable medium capable of performing artificial-intelligence-based technical support operations disclosed herein may enable a user to proactively diagnose faulty items/equipment to increase productivity and efficiency, and to resolve issues faster based on a maintained pool of past working solutions. The user's mobile communications device is thereby harnessed to conduct technical support sessions and improve customer satisfaction, decrease technician dispatch rates for resolving users' problems, substantially improve support session resolution rates, and decrease the average handling time, as well as address numerous other challenges related to traditional customer support models, including increasingly complex customer needs, communication gaps, diagnosis challenges, limited problem-solving rates, and customer dissatisfaction and frustration.

[000172] As described hereinabove and shown in the associated figures, the present disclosure provides support session techniques, systems, and methods, for expeditiously identifying product defects/other issues and corresponding working solutions for resolving problems encountered by remote users. While particular embodiments of the disclosure have been described, it will be understood, however, that the disclosure is not limited thereto, since modifications may be made by those skilled in the art, particularly in light of the foregoing teachings. As will be appreciated by the skilled person, the disclosure can be carried out in a great variety of ways, employing more than one technique from those described above or below, all without exceeding the scope of the claims.

[000173] Aspects of the present disclosure may relate to techniques for performing remote artificial intelligence-assisted electronic warranty verification operations. More specifically, aspects of the present disclosure may relate to methods, systems, and/or non-transitory computer readable medium capable of performing remote artificial intelligence-assisted electronic warranty verification operations. The term “warranty,” as used herein, should be expansively construed to cover any type of guarantee that a supplier, manufacturer, or similar party may make regarding the condition of a product to an entity, such as an end-user or customer of the product. A warranty may relate to an express warranty which guarantees that a product will meet certain conditions of quality and performance, an implied warranty which guarantees that the product will function as designed, an extended warranty which is a service contract that relates to product repair and/or maintenance beyond or in addition to a manufacturer's warranty, or any other type of guarantee that a product will meet certain specifications. If the product does not meet certain specifications, for example if the product is defective, the end-user or purchaser of the product may seek to have the supplier, manufacturer, or similar party repair, replace, or otherwise correct the problem. Warranty coverage for a product may be included at the time of purchase or contingent upon registration of the product. Certain warranty coverage exceptions may apply, and not every product defect may be covered under a warranty. Additionally, the terms and conditions of the warranty may depend on the type of warranty covering the product.

[000174] In the disclosed embodiments, the term “electronic warranty verification” may relate to any technique and/or operation in which the truth, accuracy, scope, eligibility, and/or validity of a product warranty may be verified, validated, confirmed, or otherwise analyzed, in view of information provided by an individual, such as a consumer or the end-user of a product. Electronic warranty verification may be sought by the end-user of a product from a remote location such as the end-user’s home, office, or any other remote site via any mobile communications device including a personal computer, a wearable computer, a tablet, a smartphone, or any other electronic computing device having data processing capabilities. Electronic warranty verification may be provided by a warranty service center and may utilize a remote customer service agent, artificial intelligence, for example an automated customer service assistant, and/or a combination thereof. It is to be understood that a warranty service center, as used herein, is not limited to a single warranty service center, and may encompass multiple warranty service centers, individuals or groups of individuals in different geographic locations. Optionally, the term warranty service center may also encompass a fully virtual service system in which warranty verification operations may be performed in an automated fashion.

[000175] As used herein, the term “remote artificial intelligence-assisted electronic warranty verification” may relate to any electronic warranty verification service system and/or operations utilizing artificial intelligence techniques to establish a self-service mechanism in which warranty information may be automatically analyzed by a warranty service center to determine an indication of warranty coverage which may be provided to an entity, such as an end-user, in an automated fashion without (or with reduced) human intervention. For example, remote artificial intelligence-assisted electronic warranty verification operations may relate to any self-service session in which the end-user may obtain an indication of warranty coverage with respect to any product of interest including consumer goods (such as electronic products, software which may be utilizable therewith, furniture, or appliances), industrial goods, or any other article or substance that is manufactured, designed, or refined for sale, which may be covered under a warranty.

[000176] The term “artificial intelligence” may refer, for example, to the simulation of human intelligence in machines or processors that exhibit traits associated with a human mind such as learning and problem-solving. Artificial intelligence, machine learning, deep learning, or neural network processing techniques may enable the automatic learning through absorption of huge amounts of unstructured data such as text, audio, images, or videos and user preferences analyzed over a period of time such as through statistical computation and analysis. Some non-limiting examples of such machine learning algorithms may include classification algorithms, data regression algorithms, image segmentation algorithms, visual detection algorithms (such as object detectors, face detectors, person detectors, motion detectors, edge detectors, etc.), visual recognition algorithms (such as face recognition, person recognition, object recognition, etc.), speech recognition algorithms, mathematical embedding algorithms, natural language processing algorithms, support vector machines, random forests, nearest neighbors algorithms, deep learning algorithms, artificial neural network algorithms, convolutional neural network algorithms, recursive neural network algorithms, linear machine learning models, non-linear machine learning models, ensemble algorithms, and/or any other algorithm in which a machine or processor takes inputs together with corresponding outputs in order to “learn” the data and produce outputs when given new inputs. As used herein, artificial intelligence may relate to machine learning algorithms, also referred to as machine learning models, which may be trained using training examples, for example in the cases described below involving image recognition and processing.
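
As a rough illustration of how two of the algorithm families named above may be applied, the following sketch trains a random forest and a support vector machine on the same toy input/output pairs using scikit-learn; the data set and models are illustrative only and are not taken from the disclosure.

```python
# Illustrative only: two of the named algorithm families trained on the
# same synthetic input/output pairs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=8, random_state=0)

for model in (RandomForestClassifier(random_state=0), SVC()):
    model.fit(X, y)                     # "learn" from input/output pairs
    print(type(model).__name__, model.score(X, y))
```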

[000177] A trained machine learning algorithm may comprise an inference model, such as a predictive model, a classification model, a regression model, a clustering model, a segmentation model, an artificial neural network (such as a deep neural network, a convolutional neural network, a recursive neural network, etc.), a random forest, a support vector machine, and so forth. In some examples, the training examples may include example inputs together with the desired outputs corresponding to the example inputs. Further, in some examples training machine learning algorithms using the training examples may generate a trained machine learning algorithm, and the trained machine learning algorithm may be used to estimate outputs for inputs not included in the training examples. In some examples, engineers, scientists, processes, and machines that train machine learning algorithms may further use validation examples and/or test examples. For example, validation examples and/or test examples may include example inputs together with the desired outputs corresponding to the example inputs. A trained machine learning algorithm and/or an intermediately trained machine learning algorithm may be used to estimate outputs for the example inputs of the validation examples and/or test examples, the estimated outputs may be compared to the corresponding desired outputs, and the trained machine learning algorithm and/or the intermediately trained machine learning algorithm may be evaluated based on a result of the comparison. In some examples, a machine learning algorithm may have parameters and hyperparameters, where the hyperparameters are set manually by a person or automatically by a process external to the machine learning algorithm (such as a hyperparameter search algorithm), and the parameters of the machine learning algorithm are set by the machine learning algorithm according to the training examples. In some implementations, the hyperparameters are set according to the training examples and the validation examples, and the parameters are set according to the training examples and the selected hyperparameters.
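
A minimal sketch of the train/validate/test pattern described in this paragraph, assuming scikit-learn; the hyperparameter values and the synthetic data are illustrative, not prescribed by the disclosure.

```python
# Minimal sketch: parameters are learned from the training examples, the
# hyperparameter (number of trees) is chosen on the validation examples,
# and the final model is evaluated on held-out test examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_model, best_score = None, -1.0
for n_trees in (10, 50, 100):               # hyperparameter set externally
    model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    model.fit(X_train, y_train)             # parameters set from training examples
    score = model.score(X_val, y_val)       # evaluated on validation examples
    if score > best_score:
        best_model, best_score = model, score

print("held-out test accuracy:", best_model.score(X_test, y_test))
```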

[000178] While artificial intelligence may be utilized during certain remote artificial intelligence-assisted electronic warranty verification operations during an electronic warranty verification session, it is to be understood that certain operations may be executed with or without the use of artificial intelligence. Moreover, the techniques and/or operations disclosed herein may be implemented as automatic applications and/or processes, which may employ certain artificial intelligence analysis techniques and/or operations disclosed herein, that may be used by end-users to verify their warranties. For example, a remote artificial intelligence-assisted electronic warranty verification session may include an interactive application on a mobile communications device enabling an entity, such as an end-user of a product seeking warranty coverage information, to obtain certain terms and situations in which repairs or exchanges may be warranted if a recently purchased product does not function as originally described or intended.

[000179] Turning to the figures, Fig. 12 depicts a simplified network diagram illustrating exemplary communications during a remote artificial intelligence-assisted electronic warranty verification session 2100. More specifically, the simplified network diagram illustrates exemplary network communications between an entity 220 (e.g., end-user) seeking warranty verification with respect to a specific product 210 via a mobile communications device 230 (e.g., cell phone), a warranty service center (WSC) 250 capable of automatically providing electronic warranty verification with respect to the product 210 using artificial intelligence, and a supplier 280 of the product 210 during a remote artificial intelligence-assisted electronic warranty verification session 2100. Network communications between the entity 220, the WSC 250, and the supplier 280 during the remote artificial intelligence-assisted electronic warranty verification session 2100 may be facilitated by at least one server configured to collect and/or send information across the at least one network 240. The at least one server may be implemented as part of the WSC 250, such as remote server 241, and/or in the at least one network 240 in a server farm or in a cloud computing infrastructure, such as remote server 242.

[000180] The WSC 250 may include a control system 260 that is operably connected to a universal data structure 270 and at least one server and may be configured and operable to perform remote artificial intelligence-assisted electronic warranty verification operations with respect to a number of different products, for example product 210, in view of information obtained from the entity 220 seeking warranty verification and in view of warranty information obtained from a warranty data structure 290 of the supplier 280. For example, the WSC 250 may be configured to send and/or receive information pertaining to the specific product 210 for which warranty verification is sought to and/or from the mobile communications device 230, via at least one network 240. Additionally, the WSC 250 may be configured to send and/or receive information pertaining to the product 210 for which warranty verification is sought to and/or from the supplier 280, via at least one network 240. With the information obtained from the entity 220 and the supplier 280, the WSC 250 may be configured to determine and provide an indication of warranty coverage in an automated fashion without human intervention.

[000181] According to one embodiment, a system comprising at least one processor may be configured to perform the remote artificial intelligence-assisted electronic warranty verification operations disclosed herein. The term “processor,” as used herein may refer to any physical device or group of devices having electric circuitry that performs a logic operation on input or inputs. For example, the at least one processor may include one or more integrated circuits (IC), including an application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a visual processing unit (VPU), an image signal processor (ISP), server, virtual server, or any other circuits suitable for executing instructions or performing logic operations. The instructions executed by at least one processor may, for example, be pre-loaded into a memory integrated with or embedded into a controller or may be stored in a separate memory. Moreover, the at least one processor may include more than one processor. Each processor may have a similar construction, or the processors may be of differing constructions that are electrically connected or disconnected from each other. For example, the processors may be separate circuits or integrated in a single circuit. When more than one processor is used, the processors may be configured to operate independently or collaboratively. The processors may be coupled electrically, magnetically, optically, acoustically, mechanically or by any other means that permit them to interact.

[000182] Fig. 13 is a block diagram illustrating exemplary components of the control system 260 of the WSC 250 illustrated in Fig. 12. The control system 260 may include at least one processor, such as a neural network processor, and may be configured to perform remote artificial intelligence-assisted electronic warranty verification operations, in accordance with at least one embodiment of the present disclosure. The control system 260 may include a control unit 261, a memory unit 266, and an input/output unit (I/O unit) 268. The control unit 261 may be configured and operable to process and analyze data, such as image data and/or text data, received from the various components of the system illustrated in Fig. 12 including the mobile communications device 230, the universal data structure 270, and/or the supplier 280. The memory unit 266 may be configured as a non-transitory computer readable medium and/or any form of computer readable media capable of storing computer instructions and/or application programs and/or data capable of controlling the control system 260, and various components thereof, and may also store one or more databases. The I/O unit 268 may be configured and operable to send and/or receive data over at least one network and/or to at least one server.

[000183] In at least one embodiment, the control unit 261 may include a first processing unit 262 and a second processing unit 264. The first processing unit 262 may include an image processing module 2161 configured and operable to process and analyze image data received from a mobile communications device, an OCR module 2162 configured and operable to identify characters within image data, an image analysis module 2163 configured and operable to analyze data accessible to the control system 260 using artificial intelligence, a comparison module 2164 configured and operable to compare data from at least one database and/or data structure containing stored reference data against newly acquired image data, a user interaction processing module 2165 configured and operable to communicate with the user of a mobile communications device via an interactive application, and a warranty determination module 2166 configured to determine an indication of warranty coverage. The second processing unit 264 may include an external data access module 2167 configured and operable to access data maintained by the supplier, for example data stored in the warranty data structure 290 of the supplier 280 illustrated in Fig. 12, and an external data processing module 2168 configured and operable to process and analyze data accessed by the external data access module 2167.
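
The module layout described above might be organized along the following lines; this is a hypothetical structural sketch, and the class and method names are illustrative rather than taken from the disclosure.

```python
# Hypothetical sketch of the control unit 261 module layout; names are
# illustrative, not defined by the disclosure.
class FirstProcessingUnit:
    def process_image(self, image_bytes): ...        # image processing module 2161
    def run_ocr(self, image_bytes): ...              # OCR module 2162
    def analyze_image(self, image_data): ...         # image analysis module 2163
    def compare(self, reference, acquired): ...      # comparison module 2164
    def interact(self, message): ...                 # user interaction module 2165
    def determine_warranty(self, evidence): ...      # warranty determination module 2166

class SecondProcessingUnit:
    def access_supplier_data(self, product_id): ...  # external data access module 2167
    def process_supplier_data(self, record): ...     # external data processing module 2168

class ControlUnit:
    def __init__(self):
        self.first = FirstProcessingUnit()
        self.second = SecondProcessingUnit()
```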

[000184] According to another embodiment of the present disclosure, a non-transitory computer readable medium may include instructions for performing certain operations, for example the remote artificial intelligence-assisted electronic warranty verification operations disclosed herein. As used herein, the term “non-transitory computer readable medium” should be expansively construed to cover any medium capable of storing data in any memory in a way that may be read by any computing device having at least one processor to carry out operations, methods, or any other instructions stored in the memory. Moreover, the term “computer readable medium” may refer to multiple structures, such as a plurality of computer readable mediums and/or memory devices. A memory device may include a Random-Access Memory (RAM), a Read-Only Memory (ROM), a hard disk, an optical disk, a magnetic medium, a flash memory, other permanent, fixed, volatile or non-volatile memory, or any other mechanism capable of storing instructions. The memory device may include one or more separate storage devices, collocated or dispersed, capable of storing data structures, instructions, or any other data. The memory device may further include a memory portion containing instructions for the processor to execute. The memory device may also be used as a working scratch pad for the processors or as temporary storage. Instructions contained on the non-transitory computer readable medium, when executed by at least one processor, may cause the at least one processor to carry out a method for performing one or more features or methods of the disclosed embodiments.

[000185] In some embodiments, the non-transitory computer readable medium may, for example, be implemented as hardware, firmware, software, or any combination thereof. The software may preferably be implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine including any suitable architecture. For example, the machine may be implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described in this disclosure may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. A non-transitory computer readable medium may be any computer readable medium except for a transitory propagating signal.

[000186] Referring back to Figs. 12-13, operations of the remote artificial intelligence-assisted electronic warranty verification session 2100 may relate to instructions included on at least one non-transitory computer readable medium. The instructions may be readable by at least one processor of the WSC 250 such that some or all of the remote artificial intelligence-assisted electronic warranty verification operations disclosed herein may be performed by at least one processor of the WSC 250. In one non-limiting embodiment, the instructions included on a non-transitory computer readable medium may be executed by the control unit 261 such that at least one processor of the control unit 261 performs certain operations of the remote artificial intelligence-assisted electronic warranty verification session 2100. It is to be understood, as noted above, that the non-transitory computer readable medium is not limited to such an implementation.

[000187] Some disclosed embodiments may involve transmitting an instruction to an entity to capture at least one product image of a specific product. As described herein, an instruction to an entity to capture at least one product image of a specific product may relate to any command, request, and/or direction prompting the entity, such as an end-user or owner of a specific product for which warranty verification is sought, to capture, or otherwise obtain, an image of a product. The product image may relate to a single image, a burst of images, a screenshot, and/or a video and may include an image of the entire product for which warranty verification is sought, an image of a portion of the product, an image of the manufacturer’s sticker on the product, and/or an image of any other identifying characteristics associated with the specific product for which warranty verification is sought. As described herein, a specific product may relate to any product including consumer goods (such as electronic products, software which may be utilizable therewith, furniture, or appliances), industrial goods, or any other article or substance that is manufactured, designed, or refined for sale, which may be covered under a warranty.

[000188] In some embodiments, the instruction to capture at least one product image of a specific product may be transmitted to the mobile communications device of an entity seeking warranty verification with respect to the specific product. As defined herein, the term “transmitted” may relate to the passing of data from one location, such as a warranty service center, to another location, such as an entity’s mobile communication device. The transmission of data may occur over at least one network in real time or may be transmitted to the mobile communication device prior to the initiation of a warranty verification session. For example, instructions, such as an instruction to capture an image, may be downloaded to, or otherwise included in an interactive application on, the mobile communication device of an entity seeking warranty verification such that instructions are still received by a user in the event of a temporary loss of connectivity over the at least one network (e.g., to the cloud).
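
One way such an instruction could be represented for transmission (or for pre-loading into the interactive application) is sketched below; the JSON field names are hypothetical and not defined by the disclosure.

```python
# Hypothetical payload for a product image capture instruction; field
# names are illustrative only.
import json

instruction = {
    "type": "product_image_capture",
    "prompt": "Please capture an image of your product.",
    "preloaded": True,  # bundled with the app so it survives a connectivity loss
}
payload = json.dumps(instruction)  # sent over the network, or cached on-device
print(payload)
```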

[000189] As used herein, the term at least one network may refer to a single network or multiple networks. The network may include a plurality of networks having the same or different protocol stacks which may coexist on the same physical infrastructure. The network may constitute any type of physical or wireless computer networking arrangement used to exchange data. For example, a network may be the Internet, a private data network, a virtual private network using a public network, a Wi-Fi network, a LAN or WAN network, and/or any other suitable connections that may enable information exchange among various components of the system. In some embodiments, a network may include one or more physical links used to exchange data, such as Ethernet, coaxial cables, twisted pair cables, fiber optics, or any other suitable physical medium for exchanging data. A network may also include a public switched telephone network (“PSTN”) and/or a wireless cellular network. A network may be a secured network or unsecured network. In other embodiments, one or more components of the system may communicate directly through a dedicated communication network. Direct communications may use any suitable technologies, including, for example, BLUETOOTH™, BLUETOOTH LE™ (BLE), Wi-Fi, near field communications (NFC), or other suitable communication methods that provide a medium for exchanging data and/or information between the mobile communications device and the WSC. Signals are received over a network when they are obtained following transmission via the network.

[000190] As disclosed herein, a mobile communications device may refer to any device capable of exchanging data using any communications network. In some examples, the mobile communications device may include a smartphone, a tablet, a smart watch, mobile station, user equipment (UE), personal digital assistant (PDA), laptop, wearable sensor, e-reader, dedicated terminal, smart glasses, virtual reality headset, IoT device, and any other device, or combination of devices, that may enable user communication with a remote server. Such mobile communications devices may include a display such as an LED display, augmented reality (AR), virtual reality (VR) display, and any other display capable of depicting image data including image data corresponding to an interactive application for verifying a product's warranty. The mobile communications device may also include an image sensor or any other device capable of detecting and converting optical input signals into electrical signals. The image sensor may be part of a camera included in, or connectable to, the entity's mobile communications device.

[000191] Fig. 14 illustrates a sequence diagram depicting exemplary network communications between an entity’s mobile communications device 230 and the WSC 250 via at least one network 240 during operations of the remote artificial intelligence-assisted electronic warranty verification session 2100. Fig. 15 illustrates certain aspects of the remote artificial intelligence-assisted electronic warranty verification session 2100 from the perspective of the entity 220 seeking warranty verification with respect to a specific product 210, according to one embodiment. Figs. 16A-16B illustrate exemplary interactive applications relating to the remote artificial intelligence-assisted electronic warranty verification session 2100 as displayed on the entity’s mobile communications device 230 illustrated in Fig. 15.

[000192] In Fig. 14, the WSC 250 may be configured to transmit information, such as instructions to capture an image, to the entity’s mobile communications device 230 over the at least one network 240 during certain operations of remote artificial intelligence-assisted electronic warranty verification session 2100. The WSC 250 may also be configured to receive information, such as data corresponding to captured images, from the mobile communications device 230 over the at least one network 240 during certain operations of the remote artificial intelligence-assisted electronic warranty verification session 2100. For example, the WSC 250 may transmit a product image capture instruction 2112 to the entity’s mobile communications device 230, receive at least one product image 2114 from the entity’s mobile communications device 230, transmit a purchase receipt image capture instruction 2116 to the entity’s mobile communications device 230, receive at least one purchase receipt image 2118 from the entity’s mobile communications device 230, and transmit an indication of warranty coverage 2120 with respect to a product for which warranty verification is sought to the entity’s mobile communications device 230 during certain operations of the remote artificial intelligence-assisted electronic warranty verification session 2100.
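
The exchange of Fig. 14 can be summarized in code as below; this is a hedged sketch with toy stand-ins for the WSC and the mobile device, not an interface defined by the disclosure.

```python
# Toy stand-ins for the Fig. 14 exchange; Device and WSC are illustrative
# classes, not interfaces defined by the disclosure.
class Device:
    def show(self, msg): print("device displays:", msg)
    def capture(self, subject): return f"<image of {subject}>"

class WSC:
    def indication(self, product_img, receipt_img):
        return f"warranty coverage indication based on {product_img} and {receipt_img}"

device, wsc = Device(), WSC()
device.show("Please capture an image of your product.")          # instruction 2112
product_img = device.capture("product")                          # product image 2114
device.show("Please capture an image of the purchase receipt.")  # instruction 2116
receipt_img = device.capture("receipt")                          # receipt image 2118
device.show(wsc.indication(product_img, receipt_img))            # indication 2120
```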

[000193] As illustrated in Fig. 15, the entity 220 seeking warranty verification with respect to a specific product 210 for which warranty verification is sought is depicted holding the mobile communications device 230 with a product image capture instruction 2112 displayed on a display unit 231 of the mobile communications device 230. The product image capture instruction 2112 may be transmitted to the entity’s mobile communications device 230 during the operations of the remote artificial intelligence-assisted electronic warranty verification session 2100 depicted in Fig. 14. The product image capture instruction 2112 may relate to an instruction and/or guidance directing the entity 220 to capture at least one product image 2114 of the specific product 210 for which warranty verification is sought and/or images of identifying items 211 (e.g., recognizable text such as characters and/or symbols) on said product 210. The product image capture instruction 2112 may be presented via an interactive application on a mobile communications device 230. The interactive application may enable the entity 220 to easily exchange information with the WSC 250 via at least one network 240 in order to obtain an indication of warranty coverage. Optionally, the interactive application may superimpose and anchor markers 234 onto image data relating to the image to be captured.

[000194] In one embodiment, the interactive application may enable the entity 220 seeking warranty verification to capture images of the specific product 210 for which warranty verification is sought and may enable the entity 220 to send image data corresponding to images captured by the mobile communications device 230 to the WSC 250 via at least one network 240. In the example illustrated in Fig. 16A, the interactive application visually displayed on the display unit 231 of the mobile communications device 230 contains a product image capture instruction 2112 directing the entity 220 to capture at least one product image 2114 of the product 210 for which warranty verification is sought (e.g., “Please capture an image of your product.”). The entity 220 may capture at least one product image 2114 of the product 210 by selecting, for example by mouse click or finger tap, the widget 236 located on the display unit 231 of the mobile communications device 230 when the product 210 is within the camera’s field of view.

[000195] In another embodiment of the present disclosure, the instruction may include a direction to capture an image of a manufacturer’s product sticker. As defined herein, a manufacturer’s product sticker may relate to any label, stamp, engraving, writing, or marking on which information and/or symbols about the product or item may be printed, etched, or otherwise applied, or multiple combinations thereof, to the product for which warranty verification is sought. Information and/or symbols included on the manufacturer’s product sticker may include marks, logos, product codes (e.g., 2D barcode), or any other indications which may provide a unique identifier that manufacturers, resellers, logistics companies, retail outlets, and warranty service centers may use to quickly identify a specific product. It is to be understood that the manufacturer’s product sticker, as utilized herein, is not limited to a single product sticker of the supplier and may encompass a single product sticker and/or multiple product stickers from a variety of sources such as a manufacturer’s agent, resellers, logistics companies, retail outlets, or any other party which may apply identifying information on the product for which warranty verification is sought.

[000196] In the example illustrated in Fig. 16B, the interactive application visually displayed on the display unit 231 of the mobile communications device 230 contains a product image capture instruction 2113 directing the entity 220 to capture at least one product image 2115 of the product sticker 212 of the product 210 for which warranty verification is sought (e.g., “Please capture an image of the product sticker.”). The product sticker 212 may include multiple items (e.g., recognizable text such as characters and/or symbols) relating to the product 210 which may include a unique identifier which may be used to quickly identify the specific product 210. The entity 220 may capture at least one product image 2115 of the product sticker 212 of the product 210 by selecting the widget 236 located on the display unit 231 of the mobile communications device 230 when the product sticker 212 is within the camera’s field of view.

[000197] Some embodiments of the present disclosure may involve receiving the at least one product image. In a general sense, the at least one product image of a specific product may relate to the previously discussed at least one product image captured, or otherwise obtained, by an entity seeking warranty verification of the specific product. The at least one product image may be received by a warranty service center from a mobile communications device via at least one network. For example, an entity may capture and transmit a product image of the specific product for which warranty verification is sought to the warranty service center from the entity’s mobile communications device via at least one network. The at least one network may be similar to the at least one network previously discussed.

[000198] In the sequence diagram of Fig. 14 illustrating exemplary network communications between an entity’s mobile communications device 230 and the WSC 250 via at least one network 240, the WSC 250 may be configured to receive information, such as data corresponding to captured images, from the mobile communications device 230 over the at least one network 240 during certain operations of the remote artificial intelligence-assisted electronic warranty verification session 2100. As illustrated in Figs. 15 and 16A-16B, the mobile communications device 230 may enable an entity 220 to capture at least one product image 2114 of the specific product 210 and/or identifying items 211 (e.g., stickers or recognizable text such as characters and/or symbols) associated with said product 210. Upon capturing the at least one product image 2114, the WSC 250 may be configured to receive the at least one product image 2114 pertaining to the product for which warranty verification is sought from the entity’s mobile communications device 230 over the at least one network 240. In the example illustrated in Figs. 16A-16B, the WSC 250 may receive the at least one product image 2114 once the entity 220 selects the widget 236 of the interactive application located on the display unit 231 of the mobile communications device 230.

[000199] Some embodiments of the present disclosure may involve performing product image analysis on the at least one product image to identify at least one product-distinguishing characteristic. As defined herein, performing an image analysis may refer to any process in which a computer or electrical device automatically studies an image or image data to obtain or extract useful information from it. Alternatively, or additionally, performing an image analysis may include analyzing the image data to obtain preprocessed image data, and subsequently analyzing the image data and/or the preprocessed image data to obtain the desired outcome. One of ordinary skill in the art will recognize that the following are non-limiting examples and that the image data may be preprocessed using other kinds of preprocessing methods. In some examples, the image data may be preprocessed by transforming the image data using a transformation function to obtain a transformed image data, and the preprocessed image data may comprise the transformed image data. For example, the transformed image data may comprise one or more convolutions of the image data. For example, the transformation function may comprise one or more image filters, such as low-pass filters, high-pass filters, band-pass filters, all-pass filters, and so forth. In some examples, the transformation function may comprise a nonlinear function. In some examples, the image data may be preprocessed by smoothing at least parts of the image data, for example using Gaussian convolution, using a median filter, and so forth. In some examples, the image data may be preprocessed to obtain a different representation of the image data. For example, the preprocessed image data may comprise: a representation of at least part of the image data in a frequency domain, a Discrete Fourier Transform of at least part of the image data, a Discrete Wavelet Transform of at least part of the image data, a time/frequency representation of at least part of the image data, a representation of at least part of the image data in a lower dimension, a lossy representation of at least part of the image data, a lossless representation of at least part of the image data, a time ordered series of any of the above, and/or any combination of the above. In some examples, the image data may be preprocessed to extract edges, and the preprocessed image data may comprise information based on and/or related to the extracted edges. In some examples, the image data may be preprocessed to extract image features from the image data. Some non-limiting examples of such image features may comprise information based on and/or related to edges, corners, blobs, ridges, Scale Invariant Feature Transform (SIFT) features, temporal features, and so forth.
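
A few of the preprocessing techniques named above are sketched below, assuming OpenCV and NumPy are available; the file path is a placeholder.

```python
# Minimal preprocessing sketch: Gaussian smoothing, median filtering,
# edge extraction, and a frequency-domain representation.
import cv2
import numpy as np

image = cv2.imread("product.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

smoothed_gauss = cv2.GaussianBlur(image, (5, 5), 1.0)  # Gaussian convolution
smoothed_median = cv2.medianBlur(image, 5)             # median filter
edges = cv2.Canny(image, 100, 200)                     # edge extraction
spectrum = np.abs(np.fft.fft2(image))                  # frequency-domain representation
```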

[000200] In some embodiments, analyzing image data may comprise analyzing the image data and/or the preprocessed image data using one or more rules, functions, procedures, artificial neural networks, object detection algorithms, face detection algorithms, visual event detection algorithms, action detection algorithms, motion detection algorithms, background subtraction algorithms, inference models, and so forth. Some non-limiting examples of such inference models may include: an inference model preprogrammed manually; a classification model; a regression model; a result of training algorithms, such as machine learning algorithms and/or deep learning algorithms, on training examples, where the training examples may include examples of data instances, and in some cases, a data instance may be labeled with a corresponding desired label and/or result. Additionally, analyzing image data may comprise analyzing pixels, voxels, point cloud, range data, etc. included in the image data.
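
As a toy illustration of applying an inference model to image data, the sketch below extracts crude per-image features and fits a classifier on synthetic labeled examples; a deployed system would use a model trained as described above rather than these stand-ins.

```python
# Illustrative only: synthetic images, crude features, and a toy
# classifier standing in for the inference models described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(image):
    """Crude per-image features: mean, spread, and edge density."""
    return [image.mean(), image.std(), (np.abs(np.diff(image)) > 10).mean()]

rng = np.random.default_rng(0)
train_images = rng.integers(0, 255, size=(20, 32, 32))
labels = rng.integers(0, 2, size=20)     # e.g., defect / no defect

model = RandomForestClassifier(random_state=0)
model.fit([features(im) for im in train_images], labels)
print(model.predict([features(train_images[0])]))
```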

[000201] As used herein, performing product image analysis may relate to any of the above-mentioned techniques for performing image analysis with respect to the previously discussed at least one product image in order to identify at least one product-distinguishing characteristic. The at least one product-distinguishing characteristic may relate to any distinguishing trait, quality, and/or property of the product and/or identifying items (e.g., stickers, logos, or recognizable text such as characters and/or symbols) unique to said product. The control system of the warranty service center may recognize a purchased product by visually analyzing the product image based on at least one product-distinguishing characteristic of the purchased product. Additionally, or alternatively, the control system of the warranty service center may recognize product specific details by visually analyzing a sticker of the product based on at least one product-distinguishing characteristic of the sticker. The at least one product-distinguishing characteristic may enable the system to recognize or otherwise identify, by way of visual analysis, the specific product for which warranty verification is sought.

[000202] Fig. 17 is a flow chart illustrating exemplary image analysis operations of the remote artificial intelligence-assisted electronic warranty verification session related to at least one product image, consistent with some embodiments of the present disclosure. The following non-limiting embodiments are presented with reference to the flow chart of Fig. 17 together with the block diagram of Fig. 13 and the exemplary interactive applications illustrated in Figs. 16A-16B.

[000203] During the remote artificial intelligence-assisted electronic warranty verification session, a system of the warranty service center may be configured to receive at least one product image at Step 2201, perform product image analysis on the at least one product image at Step 2203, and identify at least one product-distinguishing characteristic at Step 2205. In one embodiment, the control system 260 of Fig. 13 may be configured to receive the at least one product image from the mobile communications device via I/O unit 268. The at least one product image received from the mobile communications device at Step 2201 may be processed and analyzed by the control unit 261 at Steps 2203 and 2205. For example, the image processing module 2161 of the control unit 261 may be configured and operable to process the at least one product image of the specific product for which warranty verification is sought and the image analysis module 2163 may be configured and operable to analyze the at least one product image.

[000204] Upon performing product image analysis on the at least one product image at Step 2203, the image analysis module 2163 may be configured to identify at least one product-distinguishing characteristic of the specific product for which warranty verification is sought at Step 2205. Various image analysis and/or processing tools may be employed by the control unit 261, for example the image analysis module 2163, to analyze the at least one product image and identify at least one product-distinguishing characteristic of the product. Optionally, certain image processing and/or image analysis operations may be performed by the entity’s mobile communication device and/or a remote server. In some embodiments, a memory (e.g., memory unit 266) and/or data structures may be accessed by the control unit 261 to enable the control unit 261 to identify at least one product-distinguishing characteristic based on reference data stored in the memory and/or data structures. For example, the memory unit 266 may store a variety of distinguishing traits, qualities, and/or properties corresponding to a number of products and/or identifying items unique to a number of products.
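
One plausible, purely illustrative way to match a captured product image against stored reference data is classical feature matching, sketched here with OpenCV's ORB detector; the file paths and product identifiers are placeholders.

```python
# Hedged sketch: count ORB feature matches between the captured image and
# each stored reference image; paths and product IDs are placeholders.
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

query = cv2.imread("captured_product.jpg", cv2.IMREAD_GRAYSCALE)
_, query_desc = orb.detectAndCompute(query, None)

best_product, best_matches = None, 0
for product_id in ("model_a", "model_b"):            # stored reference data
    ref = cv2.imread(f"{product_id}.jpg", cv2.IMREAD_GRAYSCALE)
    _, ref_desc = orb.detectAndCompute(ref, None)
    n = len(bf.match(query_desc, ref_desc))          # count feature matches
    if n > best_matches:
        best_product, best_matches = product_id, n

print("closest reference product:", best_product)
```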

[000205] In another embodiment of the present disclosure, the product image analysis may include using artificial intelligence to distinguish the product from other products having similar appearances. As defined herein, using artificial intelligence during product image analysis may refer to any of the above discussed artificial intelligence techniques. Such techniques may enable machine learning through absorption of significant volumes of unstructured data such as text, images, and/or videos, as well as user preferences analyzed over a period of time. For example, product image analysis may relate to employing machine learning, deep learning, and/or neural network processing techniques to perform image analysis with respect to the previously discussed at least one product image in order to identify at least one product-distinguishing characteristic and/or distinguish the product at issue from other products having similar appearances. Other products having similar appearances may refer to any trait, quality, and/or property of the product and/or identifying items which may closely resemble, or in some instances be identical to, products other than the product for which warranty verification is sought. Any suitable computing system or group of computing systems may be used to implement the analysis of the at least one product image using artificial intelligence.
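
Distinguishing lookalike products might, for example, be framed as comparing embeddings; in this hedged sketch the embedding is a simple intensity histogram standing in for a learned neural-network feature vector, and the images are synthetic.

```python
# Illustrative embedding-similarity sketch; embed() is a toy stand-in for
# a trained neural network.
import numpy as np

def embed(image):
    """Placeholder embedding: a normalized intensity histogram."""
    hist, _ = np.histogram(image, bins=32, range=(0, 255))
    return hist / max(hist.sum(), 1)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
captured = rng.integers(0, 255, (64, 64))
candidates = {"variant_x": rng.integers(0, 255, (64, 64)),
              "variant_y": captured.copy()}           # lookalike references

scores = {k: cosine(embed(captured), embed(v)) for k, v in candidates.items()}
print(max(scores, key=scores.get))                    # -> "variant_y"
```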

[000206] During the remote artificial intelligence-assisted electronic warranty verification session, a system of the warranty service center may be configured to receive at least one product image at Step 2201 , perform product image analysis on the at least one product image at Step 2203, identify at least one productdistinguishing characteristic at Step 2205, and distinguish the product from other products having similar appearances using artificial intelligence at Step 2207. Upon identifying at least one product-distinguishing characteristic of the specific product for which warranty verification is sought at Step 2205, various artificial intelligence techniques may be employed by the control unit 261 , for example the image analysis module 2163, to analyze the at least one product image and distinguish the product from other products having similar appearances.

[000207] In another embodiment of the present disclosure, the product image analysis may include performing optical character recognition on the product image. As used herein, the term optical character recognition (OCR) may refer to the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text. Using OCR on the at least one product image may relate to the processing of an image or images containing text which may be converted into machine-readable form. The product image analysis may include using OCR on the at least one product image in order to identify at least one product-distinguishing characteristic from text on the product and/or distinguish the product at issue from other products having similar markings.
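
A minimal OCR sketch follows, assuming the pytesseract wrapper (with a Tesseract binary installed) and Pillow; the image path is a placeholder.

```python
# Minimal OCR sketch: convert printed text in an image to
# machine-encoded text.
from PIL import Image
import pytesseract

product_image = Image.open("product_sticker.jpg")   # placeholder path
text = pytesseract.image_to_string(product_image)   # machine-encoded text
print(text)
```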

[000208] During the operations of the remote artificial intelligence-assisted electronic warranty verification session illustrated in Fig. 17, OCR may be performed on the product image before, during, or after Step 2203, Step 2205, and/or Step 2207. For example, upon receiving the product image from the mobile communications device at Step 2201, the OCR module 2162 of the control unit 261 illustrated in Fig. 13 may be configured to perform OCR on text included in the product image such that characters located in the image may be identified and converted to machine readable text. The OCR module 2162 may be configured to perform OCR before the image analysis module 2163 analyzes the at least one product image at Step 2203. In another example, the OCR module 2162 of the control unit 261 may be configured to perform OCR on the product image such that the image processing module 2161 may be configured to utilize the readable text from the product image to identify the at least one product at Step 2205 and/or distinguish the product in the at least one product image from other products having similar appearances at Step 2207 by more precisely comparing text on the product for which warranty verification is sought against other products having similar markings.

[000209] In another embodiment of the present disclosure, when the product image capture instruction includes a direction to capture an image of a manufacturer’s product sticker, the product image analysis may include employing artificial intelligence to interpret the manufacturer’s product sticker. The direction to capture an image of a manufacturer’s product sticker may be similar to the direction to capture an image of a manufacturer’s product sticker previously discussed. Additionally, in a general sense, employing artificial intelligence to interpret the manufacturer’s product sticker may occur in a manner similar to using artificial intelligence during product image analysis, as previously discussed.

[000210] In the example illustrated in Fig. 16B, a system of the warranty service center may be configured to perform artificial intelligence-based image analysis on the at least one product image 2115 to interpret the manufacturer’s product sticker 212. In one embodiment, the OCR module 2162 and/or the image analysis module 2163 of the control unit 261 illustrated in Fig. 13 may be configured and operable to process and analyze the at least one product sticker 212 of the product 210 for which warranty verification is sought. For example, upon receiving the product image from the mobile communications device at Step 2201 of Fig. 17, the OCR module 2162 and/or the image analysis module 2163 may be configured to perform OCR on text included in the product image such that characters contained in the image may be identified and converted to machine readable text which may be analyzed using artificial intelligence.

[000211] In one embodiment, the image analysis module 2163 may be configured to analyze the at least one product image containing at least one product sticker using artificial intelligence at Step 2203. In another embodiment, the image analysis module 2163 may be configured to utilize the readable text from the product image containing at least one product sticker to identify the at least one product using artificial intelligence at Step 2205. In yet another embodiment, the image analysis module 2163 may be configured to distinguish the product in the at least one product image from other products using artificial intelligence at Step 2207 by more precisely comparing text on the sticker of the product for which warranty verification is sought against other products having similar markings. Optionally, the image analysis module 2163 may be configured to perform image analysis of the product sticker contained in the at least one product image before, during, or after Step 2203, Step 2205, and/or Step 2207.

[000212] Some embodiments of the present disclosure may involve transmitting an instruction to the entity to capture an image of a purchase receipt for the specific product. As described herein, a purchase receipt for the specific product may relate to a single image, a burst of images, a screenshot, and/or a video of a receipt including the product for which warranty verification is sought. The purchase receipt may relate to any medium (physical or digital) which acknowledges that value has been transferred at a certain time, establishes ownership of an item, or otherwise represents proof of a transaction. The purchase receipt may include a store name, date, purchased product, purchase price, original price, item number, SKU number, barcode, and/or any other identifying information associated with a purchased item or items and the transaction details of said item or items. In a general sense, the instruction to capture an image of a purchase receipt for the specific product may be similar to aspects of the instruction to capture at least one product image of a specific product previously discussed. Moreover, transmitting the instruction to the entity to capture an image of a purchase receipt for the specific product may occur in a manner similar to transmitting the instruction to an entity to capture at least one product image of a specific product previously discussed.
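
As a non-limiting illustration of the receipt fields enumerated above, a minimal sketch of a purchase-receipt data model follows; the class and field names are hypothetical and not part of the disclosure.

```python
# Illustrative data model for the purchase-receipt fields described above.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ReceiptLineItem:
    description: str            # the purchased product as printed on the receipt
    purchase_price: float
    sku: Optional[str] = None
    item_number: Optional[str] = None

@dataclass
class PurchaseReceipt:
    store_name: str
    purchase_date: date
    items: list[ReceiptLineItem] = field(default_factory=list)
    barcode: Optional[str] = None   # e.g., a transaction barcode on the receipt
```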

[000213] Turning back to the sequence diagram of Fig. 14 illustrating exemplary network communications between an entity’s mobile communications device 230 and the WSC 250 via at least one network 240, the WSC 250 may be configured to transmit a purchase receipt image capture instruction 2116 pertaining to the product for which warranty verification is sought to the entity’s mobile communications device 230 over the at least one network 240 during a remote artificial intelligence-assisted electronic warranty verification session 2100. Fig. 18 illustrates one exemplary interactive application including a purchase receipt image capture instruction 2116, as displayed on a mobile communications device 230, during the remote artificial intelligence-assisted electronic warranty verification session 2100 illustrated in Fig. 14.

[000214] The purchase receipt image capture instruction 2116 may relate to an instruction and/or guidance directing the entity to capture at least one purchase receipt image 2118 of a purchase receipt 213 identifying the specific product for which warranty verification is sought. The purchase receipt 213 may include identifying and/or transaction information associated with a purchased item. In one embodiment, the purchase receipt 213 may include a listing 214 of the specific product, a purchase price 215 of the specific product, a store name 216 of the store from which the specific product was purchased, a purchase date 217, and/or a barcode 218. Additionally, the purchase receipt 213 may include a plurality of purchased products 219 including the specific product for which warranty verification is sought.

[000215] In one embodiment, the purchase receipt image capture instruction 2116 may be presented via an interactive application on the mobile communications device 230. In a general sense, the purchase receipt image capture instruction 2116 may be presented via an interactive application in a manner similar to the presentation of the instruction to an entity to capture at least one product image of a specific product previously discussed. In the example illustrated in Fig. 18, the interactive application visually displayed on the display unit 231 of the mobile communications device 230 contains a purchase receipt image capture instruction 2116. The purchase receipt image capture instruction 2116 may include an instruction directing the entity to capture at least one purchase receipt image 2118 of the purchase receipt 213 including the listing 214 of the specific product for which warranty verification is sought (e.g., “Please capture an image of the receipt for the product.”). The entity may capture at least one purchase receipt image 2118 of the purchase receipt 213 by selecting the widget 236 located on the display unit 231 of the mobile communications device 230 when the purchase receipt 213 is within the camera’s field of view.

[000216] Some embodiments of the present disclosure may involve receiving the purchase receipt image. In a general sense, the purchase receipt image pertaining to a specific product for which warranty verification is sought may relate to the previously discussed purchase receipt image captured, or otherwise obtained, by an entity seeking warranty verification of the specific product. The purchase receipt image may be received by a warranty service center from a mobile communications device via at least one network in a manner similar to receiving the at least one product image of a specific product previously discussed. For example, an entity may capture and transmit a receipt image of the receipt pertaining to the specific product for which warranty verification is sought to the warranty service center from the entity’s mobile communications device via at least one network. The at least one network may be similar to the at least one network previously discussed.

[000217] In the sequence diagram of Fig. 14 illustrating exemplary network communications between an entity’s mobile communications device 230 and the WSC 250 via at least one network 240, the WSC 250 may be configured to receive the purchase receipt image 2118 from the entity’s mobile communications device 230 during the remote artificial intelligence-assisted electronic warranty verification session 2100. As illustrated in Fig. 18, the mobile communications device 230 may enable an entity to capture a purchase receipt image 2118 corresponding to the purchase receipt 213. The purchase receipt image 2118 may identify a listing 214 of the specific product, a purchase price 215 of the specific product, a store name 216 of the store from which the specific product was purchased, a purchase date 217, a barcode 218, as well as a plurality of purchased products. Upon capturing the purchase receipt image 2118, the WSC 250 may be configured to receive the purchase receipt image 2118 identifying the specific product for which warranty verification is sought from the entity’s mobile communications device 230 over the at least one network 240. In the example illustrated in Fig. 18, the WSC 250 may receive the at least one purchase receipt image 2118 once the entity 220 selects the widget 236 of the interactive application located on the display unit 231 of the mobile communications device 230.

[000218] Some embodiments of the present disclosure may involve performing receipt image analysis on the received purchase receipt image to identify product purchase information including a purchased product identity and a purchase date. As used herein, performing receipt image analysis may relate to any of the above-mentioned techniques for performing image analysis with respect to the previously discussed purchase receipt image in order to identify product purchase information. The product purchase information may refer to any identifying and/or transaction information associated with a purchased item which may enable the system to recognize or otherwise identify, by way of visual analysis, the purchase details of the product for which warranty verification is sought. The identifying information in the receipt may include a store name, date, purchased product, purchase price, original price, item number, SKU number, barcode, and/or any other identifying information associated with a purchased item or the transaction details of said item. In a general sense, performing receipt image analysis on the received purchase receipt image may occur in a manner similar to performing product image analysis, as previously discussed.

[000219] Fig. 19 is a flow chart illustrating exemplary image analysis operations of the remote artificial intelligence-assisted electronic warranty verification session related to a purchase receipt image, consistent with some embodiments of the present disclosure. The following non-limiting embodiments are presented with reference to the flow chart of Fig. 19 together with the block diagram of Fig. 13 and the exemplary interactive application illustrated in Fig. 18.

[000220] During the operations of the remote artificial intelligence-assisted electronic warranty verification session illustrated in Fig. 19, a system of the warranty service center may be configured to receive the purchase receipt image at Step 2202, perform receipt image analysis on the received purchase receipt image at Step 2204, and identify product purchase information at Step 2206. In one embodiment, the control system 260 of Fig. 13 may be configured to receive the purchase receipt image from the mobile communications device via I/O unit 268. The purchase receipt image received from the mobile communications device at Step 2202 may be processed and analyzed by the control unit 261 at Steps 2204 and 2206. For example, the image processing module 2161 of the control unit 261 may be configured and operable to process the purchase receipt image of the product for which warranty verification is sought and the image analysis module 2163 may be configured and operable to analyze the purchase receipt image.

[000221] Upon performing purchase receipt image analysis on the at least one purchase receipt image at Step 2204, the image analysis module 2163 may be configured to identify product purchase information contained in the product receipt, such as the product identity and the purchase date of the specific product for which warranty verification is sought, at Step 2206. The image analysis module 2163 may be further configured to identify product purchase information from a store name, purchase price, original price, item number, SKU number, barcode, and/or any other identifying information associated with a purchased item or items and the transaction details of said item or items. Various image analysis and/or processing tools may be employed by the control unit 261 with respect to the purchase receipt image. In a general sense, performing image analysis to interpret the purchase receipt image, or otherwise identify product purchase information, may occur in a similar manner to performing image analysis of the product image containing the product and/or manufacturer’s product sticker. For example, the receipt image analysis may include employing artificial intelligence and/or OCR in a manner similar to employing artificial intelligence and/or OCR to interpret the product image containing the product and/or manufacturer’s product sticker, as previously discussed. In employing artificial intelligence and/or OCR to analyze the purchase receipt image, the image analysis module 2163 may be configured to identify an identity of the purchased product, the purchase date, and/or an identity of an establishment from which the product was purchased.
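
By way of a non-limiting sketch, OCR output from a receipt might be parsed for a purchase date and line items as follows; the regular expressions assume a simple single-column receipt layout and are illustrative only.

```python
# A minimal receipt-parsing sketch over OCR'd text lines; the patterns are
# naive (e.g., a "TOTAL" line would also match) and assume MM/DD/YYYY dates.
import re
from datetime import datetime

DATE_PATTERN = re.compile(r"\b(\d{2}/\d{2}/\d{4})\b")
LINE_ITEM_PATTERN = re.compile(
    r"^(?P<name>[A-Za-z][\w \-]+?)\s+\$?(?P<price>\d+\.\d{2})$")

def parse_receipt_text(ocr_lines: list[str]) -> dict:
    """Identify product purchase information: purchased items and a purchase date."""
    purchase_date = None
    items = []
    for line in ocr_lines:
        date_match = DATE_PATTERN.search(line)
        if date_match and purchase_date is None:
            purchase_date = datetime.strptime(
                date_match.group(1), "%m/%d/%Y").date()
        item_match = LINE_ITEM_PATTERN.match(line.strip())
        if item_match:
            items.append((item_match.group("name").strip(),
                          float(item_match.group("price"))))
    return {"purchase_date": purchase_date, "items": items}
```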

[000222] Another embodiment of the present disclosure may involve determining that the image of the purchase receipt identifies a plurality of purchased products and transmitting a request to the entity to identify a specific one of the plurality of purchased products. As used herein, a plurality of purchased products may relate to any number of products that are included on the purchase receipt containing the product for which warranty verification is sought. Alternatively, the plurality of purchased products may relate to any number of products that are included in the field of view of the mobile communications device’s camera when the image of the purchase receipt is captured. In a general sense, performing receipt image analysis on the received purchase receipt image containing a plurality of purchased products may occur in a manner similar to performing product image analysis, as previously discussed. Additionally, the request may be transmitted to the mobile communications device in a manner similar to the instruction to capture at least one product image of the specific product and/or the instruction to capture an image of the purchase receipt.

[000223] The plurality of purchased products contained in the receipt may share some similarities in identifying information such as a store name, date, purchased product, purchase price, original price, item number, SKU number, barcode, and/or any other identifying information associated with a purchased item or the transaction details of said item. In some instances, the system may be able to determine the relevant product based on the product image previously obtained. In other instances, the system may request that the entity identify a specific one of the plurality of purchased products.

[000224] The request to identify a specific one of the plurality of purchased products may relate to, for example, a physical indication such as pointing out the specific product on the receipt with a pointer (e.g., a finger or pen) or marking the specific product on the receipt (e.g., circling or highlighting the product on the receipt). Alternatively, the request to the entity to identify a specific one of the plurality of purchased products may relate to, for example, a digital indication such as annotating and/or marking the specific product on the receipt via an interactive application. In another embodiment of the present disclosure, the request to identify the specific one of the plurality of purchased products includes a request to capture an image of the receipt with an indication in the image identifying the specific one of the plurality of purchased products. In a general sense, the request to capture an image of the purchase receipt with an indication in the image identifying the specific one of the plurality of purchased products may be similar to the request to the entity to capture an image of the purchase receipt for the product, as previously discussed.

[000225] Fig. 20 is a flow chart illustrating exemplary image analysis operations of the remote artificial intelligence-assisted electronic warranty verification session related to a purchase receipt image 2118 of a purchase receipt 213 containing a plurality of purchased products 219, as illustrated in Fig. 18. During the operations of the remote artificial intelligence-assisted electronic warranty verification session illustrated in Fig. 20, a system of the warranty service center may be configured to receive the purchase receipt image at Step 2302, perform receipt image analysis on the received purchase receipt image at Step 2304, determine that the image of the purchase receipt identifies a plurality of purchased products at Step 2306, transmit a request to the entity to identify a specific one of the plurality of purchased products at Step 2308, receive the marked purchase receipt image identifying a specific one of the plurality of purchased products at Step 2310, perform receipt image analysis on the received marked purchase receipt image at Step 2312, and identify product purchase information at Step 2314.

[000226] According to some embodiments of the present disclosure, the remote artificial intelligence-assisted electronic warranty verification operations may involve accessing a universal data structure containing data on products offered by a plurality of suppliers. As used herein, the term data structure may relate to a more advanced knowledge-base database from which options, such as a fixed set of prioritized operations, are selected based on some environment of usage conditions. The data structure may include an artificial intelligence-based system that has been trained with data of past cases, their conditions, and/or the optimal solution of each case. The data structure may include one or more memory devices that store data and/or instructions. The data structure may utilize a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device, a tangible or non-transitory computer readable medium, or any medium or mechanism for storing information. The data structure may include any plurality of suitable data structures, ranging from small data structures hosted on a workstation to large data structures distributed among data centers. The data structure may also include any combination of one or more data structures controlled by memory controller devices (e.g., server(s), etc.) or software.

[000227] The data structure may include a database which may relate to any collection of data values and relationships among them. The data may be stored linearly, horizontally, hierarchically, relationally, non-relationally, uni-dimensionally, multidimensionally, operationally, in an ordered manner, in an unordered manner, in an object-oriented manner, in a centralized manner, in a decentralized manner, in a distributed manner, in a custom manner, or in any manner enabling data access. By way of non-limiting examples, data structures may include an array, an associative array, a linked list, a binary tree, a balanced tree, a heap, a stack, a queue, a set, a hash table, a record, a tagged union, an ER model, and a graph. For example, a data structure may include an XML database, an RDBMS database, an SQL database, or NoSQL alternatives for data storage/search such as, for example, MongoDB, Redis, Couchbase, Datastax Enterprise Graph, Elastic Search, Splunk, Solr, Cassandra, Amazon DynamoDB, Scylla, HBase, and Neo4J. A data structure may be a component of the disclosed system or a remote computing component (e.g., a cloud-based data structure). Data in the data structure may be stored in contiguous or non-contiguous memory. Moreover, a data structure, as used herein, does not require information to be co-located. It may be distributed across multiple servers, for example, that may be owned or operated by the same or different entities. Thus, the term “data structure” as used herein in the singular is inclusive of plural data structures.

[000228] Accessing a universal data structure, as used herein, may include reading and/or writing information from/to the universal data structure. The universal data structure may include any repository of data on a variety of products offered by a plurality of suppliers. The repository may include one or more of product data, transaction data, and supplier data. The repository may be located in one location or distributed across a plurality of locations. The data in the universal data structure may relate to any identifying information corresponding to any product offered by a plurality of suppliers stored in a data structure of, or otherwise accessible to, the warranty service center. The term plurality of suppliers, as used herein, may include manufacturers, agents of the manufacturers, resellers, logistics companies, retail outlets, and/or any other party which may be involved in handling a product during any point of the supply chain. The types of identifying information contained in the universal data structure may be similar to the types of identifying information associated with a purchased product or product sticker included on/in said product and/or may be similar to the types of transaction details corresponding to said product, as discussed above.
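
As a non-limiting illustration, the universal data structure might be sketched as keyed product records carrying product, transaction, and supplier data; the record shape, supplier name, and endpoint URL are assumptions for illustration.

```python
# A toy in-memory stand-in for the universal data structure; a deployment
# might instead use any of the databases named above.
UNIVERSAL_DATA_STRUCTURE = {
    "XR-200": {
        "product": {"model": "XR-200", "name": "Smart Toaster 3000"},
        "supplier": {"name": "Acme Appliances",          # hypothetical supplier
                     "warranty_link": "https://warranty.example.com/api"},
        "transactions": [{"retailer": "Gadget Mart", "sku": "GM-4411"}],
    },
}

def lookup_product(model: str) -> dict | None:
    """Read a product record, if any, keyed by a product-distinguishing model number."""
    return UNIVERSAL_DATA_STRUCTURE.get(model)
```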

[000229] Fig. 21 is a functional block diagram schematically illustrating certain aspects of a remote artificial intelligence-assisted electronic warranty verification session, consistent with at least one embodiment of the present disclosure. The following non-limiting embodiments are presented with reference to the functional block diagram of Fig. 21 together with the block diagram of the control system 260 illustrated in Fig. 13.

[000230] In Fig. 21, the control system 260 of the WSC 250 may be connected to the universal data structure 270 and the warranty data structure 290 via at least one network and/or at least one server. The control system 260 may be configured to access information stored in the universal data structure 270 and/or information stored in the warranty data structure 290 to interpret information pertaining to the product for which warranty verification is sought as provided by an entity’s mobile communications device 230. The universal data structure 270 may include data on a variety of products offered by a plurality of suppliers including product data 271, transaction data 272, and supplier data 273. In one embodiment, the first processing unit 262 may be configured to access information stored in the universal data structure 270 via the comparison module 2164 of the first processing unit 262 illustrated in Fig. 13. The warranty data structure 290 may include product data 291, customer data 292, and warranty data 293. The second processing unit 264 may be configured to access information stored in the warranty data structure 290 via the external data access module 2167 of the second processing unit 264 illustrated in Fig. 13.

[000231] Some embodiments of the present disclosure may involve using the at least one product-distinguishing characteristic obtained from the image analysis on the product image and the product purchase information obtained from the image analysis on the purchase receipt to identify in the universal data structure the specific product. The at least one product-distinguishing characteristic obtained from the product image analysis may be the same as the at least one product-distinguishing characteristic previously discussed. Additionally, the product purchase information obtained from the purchase receipt image analysis may be the same as the product purchase information previously discussed. Identifying information including the at least one product-distinguishing characteristic and the product purchase information may be used to identify, in the universal data structure, the specific product for which warranty verification is sought by cross-checking the identifying information obtained from the analysis of the at least one product image and/or purchase receipt image against information stored in the universal data structure. The identifying information may be cross-checked against information stored in the universal data structure to verify the correctness and/or authenticity of the product and to obtain supplier data associated with said product which may be used to determine warranty coverage information.

[000232] Fig. 22 is a flow chart illustrating exemplary operations of the remote artificial intelligence-assisted electronic warranty verification session, consistent with at least one embodiment of the present disclosure. The following non-limiting embodiments are presented with reference to the flow chart of Fig. 22 together with the functional block diagram of Fig. 21 and the block diagram of the control system 260 illustrated in Fig. 13.

[000233] During the remote artificial intelligence-assisted electronic warranty verification session, a control system 260 of the WSC 250 illustrated in Fig. 21 may be configured to receive at least one product image at Step 2401, perform product image analysis on the at least one product image at Step 2403, and identify at least one product-distinguishing characteristic at Step 2405. In a general sense, Steps 2401, 2403, and 2405 may occur in a manner similar to Steps 2201, 2203, and 2205 previously discussed. Additionally, the control system 260 of the WSC 250 illustrated in Fig. 21 may be configured to receive the purchase receipt image at Step 2402, perform receipt image analysis on the received purchase receipt image at Step 2404, and identify product purchase information at Step 2406. In a general sense, Steps 2402, 2404, and 2406 may occur in a manner similar to Steps 2202, 2204, and 2206 previously discussed.

[000234] Upon identifying at least one product-distinguishing characteristic from the at least one product image at Step 2405 and product purchase information from the purchase receipt image at Step 2406, the control system 260 of the WSC 250 illustrated in Fig. 21 may be configured to access the universal data structure 270 containing stored reference data on a variety of products offered by a plurality of suppliers including product data 271, transaction data 272, and supplier data 273 at Step 2408. Upon accessing the universal data structure 270, the comparison module 2164 of the control system 260 illustrated in Fig. 13 may be configured and operable to, at Step 2410, consult the universal data structure 270 to compare the at least one product-distinguishing characteristic and/or the product purchase information against reference data (e.g., product data 271, transaction data 272, and supplier data 273) stored in the universal data structure 270 in order to identify the specific product for which warranty verification is sought.
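
A minimal sketch of this cross-checking step follows, reusing the assumed record shape from the earlier universal-data-structure sketch; the matching criteria are illustrative only.

```python
# Cross-check the product-distinguishing characteristic and the receipt's
# purchase information against stored reference records.
def identify_specific_product(characteristic: str,
                              purchase_info: dict,
                              reference_records: dict) -> dict | None:
    """Return the reference record consistent with both image analyses, if any."""
    for model, record in reference_records.items():
        # The characteristic (e.g., an OCR'd model number) must match the
        # stored product data.
        if characteristic != model:
            continue
        # The receipt must list an item consistent with the product name or
        # with a SKU from a known transaction for this model.
        known_skus = {t.get("sku") for t in record.get("transactions", [])}
        product_name = record["product"]["name"].lower()
        for item_name, _price in purchase_info.get("items", []):
            if item_name.lower() == product_name or item_name in known_skus:
                return record
    return None
```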

[000235] In some embodiments, artificial intelligence may be employed by the control system 260 of the WSC 250 and/or by the universal data structure 270 to identify the specific product for which warranty verification is sought at Step 2410. For example, the comparison module 2164 of the control system 260 illustrated in Fig. 13 may be configured and operable to consult the universal data structure 270 and employ artificial intelligence using data stored in the universal data structure 270 pertaining to “lessons” learned from past support sessions related to a certain class of problem. Additionally, or alternatively, the universal data structure 270 may include an artificial intelligence system which may be trained with data corresponding to a variety of products offered by a plurality of suppliers including product data 271, transaction data 272, and/or supplier data 273, their conditions, and/or optimal information obtained during previously conducted remote artificial intelligence-assisted electronic warranty verification sessions.

[000236] For example, the universal data structure 270 may be configured and operable to log and analyze the entities’ and/or suppliers’ interactions with the WSC 250 during past warranty verification sessions to quickly recognize identifying and/or transaction information associated with a given product based on trained data at Step 2410 of Fig. 22. In this way, the universal data structure 270 may be dynamically constructed such that artificial intelligence techniques, for example machine learning algorithms and/or deep learning algorithms, may be used to rank/weigh each data structure record of the universal data structure 270 according to the number of times it was successfully used to identify a product of a particular type/classification.
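
By way of a non-limiting sketch, the success-weighted ranking described above might be implemented as a per-record counter consulted in descending order; the data shapes and function names are assumptions.

```python
# Each record's weight counts how often it successfully identified a product;
# candidate records are then consulted most-successful-first.
from collections import defaultdict

success_weights: dict[str, int] = defaultdict(int)

def record_successful_identification(model: str) -> None:
    """Increment a record's weight after a successful identification."""
    success_weights[model] += 1

def candidates_by_weight(models: list[str]) -> list[str]:
    """Order candidate records by past identification success."""
    return sorted(models, key=lambda m: success_weights[m], reverse=True)
```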

[000237] In another embodiment of the present disclosure, when the image of the purchase receipt identifies a plurality of purchased products, operations may involve applying artificial intelligence to information from the universal data structure in order to match one of the plurality of purchased products on the receipt with the product-distinguishing characteristic determined from the product image, thereby determining the corresponding specific product. In a general sense, this matching may occur in a manner similar to, and may employ similar techniques as, applying artificial intelligence to identify the specific product for which warranty verification is sought using the product-distinguishing characteristic determined from the product image in view of information stored in the universal data structure, as previously discussed.

[000238] The following embodiments are presented with reference to the flow chart of Fig. 22 together with the functional block diagram of Fig. 21 and the block diagram of the control system 260 illustrated in Fig. 13. Upon identifying at least one product-distinguishing characteristic from the at least one product image at Step 2405 and identifying that the purchase receipt image contains a plurality of purchased products at Step 2406, the comparison module 2164 of the control system 260 illustrated in Fig. 13 may be configured to access the universal data structure at Step 2408. In this non-limiting example, the product purchase information identified at Step 2406 may conditionally correspond to each of the plurality of purchased products identified in the purchase receipt image.

[000239] Upon accessing the universal data structure at Step 2408, the control system 260 may be configured to use the at least one product-distinguishing characteristic and the product purchase information, including the plurality of purchased products, to identify the specific product in the universal data structure 270 at Step 2410. For example, the control system 260 illustrated in Fig. 21 may utilize artificial intelligence to compare the product-distinguishing characteristic determined from the product image against any number of the plurality of purchased products and employ artificial intelligence using data stored in the universal data structure 270 to match one of the plurality of purchased products on the receipt with the product-distinguishing characteristic. In matching one of the plurality of purchased products on the receipt with at least one product-distinguishing characteristic, the control system 260 may utilize the matched data, as well as stored reference data in the universal data structure 270, in order to determine the corresponding specific product for which warranty verification is sought at Step 2410.
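
As a non-limiting illustration, matching one of several receipt line items to the product-distinguishing characteristic might use a simple string-similarity score from the Python standard library; the threshold is an assumption, and a deployed system might instead use a trained model.

```python
# Pick the receipt line item most similar to the characteristic, tolerating
# OCR noise; return None if nothing clears the (assumed) threshold.
from difflib import SequenceMatcher

def match_line_item(characteristic: str,
                    line_items: list[str],
                    threshold: float = 0.6) -> str | None:
    best_item, best_score = None, 0.0
    for item in line_items:
        score = SequenceMatcher(None, characteristic.lower(),
                                item.lower()).ratio()
        if score > best_score:
            best_item, best_score = item, score
    return best_item if best_score >= threshold else None

# Example: match_line_item("Smart Toaster 3000",
#     ["AA Batteries 4pk", "Smrt Toastr 3000", "Milk 1L"])
# returns "Smrt Toastr 3000" despite the OCR noise.
```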

[000240] Some embodiments of the present disclosure may involve identifying in the universal data structure a supplier of the specific product. As discussed above, the term supplier may refer to manufacturers, agents of the manufacturers, resellers, logistics companies, retail outlets, and/or any other party which may be involved in handling a product during any point of the supply chain. As used herein, a supplier of the specific product may relate to any supplier which may possess, or otherwise have access to, warranty information pertaining to the specific product for which warranty verification is sought. In a general sense, identifying a supplier of the specific product in the universal data structure may occur in a manner similar to identifying the specific product in the universal data structure, as previously discussed.

[000241] Referring back to the flow chart illustrated in Fig. 22, upon accessing the universal data structure at Step 2408 and consulting the universal data structure to compare the at least one product-distinguishing characteristic and/or the product purchase information against reference data stored in the universal data structure in order to identify the specific product for which warranty verification is sought, the control system 260 of the WSC 250 illustrated in Fig. 21 may be configured to identify a supplier of the specific product from information stored in the universal data structure 270 containing stored reference data on a plurality of suppliers as supplier data 273. In one embodiment, the supplier identified in the universal data structure may relate to at least a manufacturer and/or a manufacturer’s agent.

[000242] Some embodiments of the present disclosure may involve identifying in the universal data structure a link to a warranty data structure of the supplier. As used herein, a link may include any tag, marker, code, or string that defines the relationship between a current location and an external resource. In this instance, a warranty data structure of a supplier may be the external resource. The resource may be considered external if it is maintained by an entity other than the entity that maintains the universal data structure, or if the data is maintained separately from the universal data structure. The data structure of the supplier may be a data structure maintained by or on behalf of a product manufacturer, distributor or an agent associated with the manufacturer or the distributor. For example, a purchaser of a product may register the product's serial number and purchase date in a manner that causes such information to be stored in a warranty data structure of a supplier. The warranty data structure of the supplier may include records of individual purchases and may include more general information about warranty terms associated with particular products provided by the supplier. Alternatively or additionally, the warranty terms may be maintained in the universal data structure. Technically, a link may refer to any type of datalink which may connect one location to another for the purpose of transmitting and/or receiving digital information. The link may enable simplex communications, half-duplex communications, and/or duplex communications with the warranty data structure. In a general sense, identifying a link to a warranty data structure of the supplier in the universal data structure may occur in a manner similar to identifying the specific product in the universal data structure, as previously discussed.

[000243] In the functional block diagram of Fig. 21, the universal data structure 270 may contain supplier data 273 on a plurality of suppliers, as well as links corresponding to each of the plurality of suppliers in the supplier data 273. The supplier data 273 stored on the universal data structure 270 may include a link which may enable access to a warranty data structure 290 of the supplier. The link may be used to enable the WSC 250 to communicate with a specific supplier to obtain information corresponding to the identified specific product for which warranty verification is sought. Referring to the flow chart illustrated in Fig. 22, upon identifying a supplier of the specific product from information stored in the universal data structure 270 at Step 2412, the control system 260 of the WSC 250 may be configured to identify in the universal data structure 270 a link to a warranty data structure 290 at Step 2414.

[000244] In another embodiment of the present disclosure, the universal data structure may include an authorization code for accessing the warranty data structure of the supplier. In some instances, access to the warranty information stored in the warranty data structure may be controlled, or otherwise limited, by the supplier. When access to the warranty information stored in the warranty data structure is controlled by the supplier, authorization to access the data may need to be requested by a party seeking access to said information. As used herein, authorization may refer to any permission to use, or otherwise access, software and/or equipment and may be granted by way of an authorization code. An authorization code may include password-based authentication, multi-factor authentication, certificate-based authentication, token-based authentication, or any other set of numbers and/or letters that may be entered into a computer system to prove official permission to use, or otherwise access, software and/or equipment. In some instances, the authorization code may be linked to the universal data structure, such that requests emanating via the universal data structure are automatically authorized. In other instances, specific authorization may be provided from the entity seeking warranty coverage.

[000245] In the present disclosure, the universal data structure 270 may contain unique authorization codes as authorization data 274 which may correspond to respective warranty data structures of the plurality of suppliers. In one embodiment, the authorization code stored in, or otherwise accessible to, the universal data structure 270 may permit the control system 260 to use, or otherwise access, the link to the warranty data structure 290 of the supplier. Referring to the flow chart illustrated in Fig. 22, upon identifying a supplier of the specific product from information stored in the universal data structure 270 at Step 2412, the control system 260 of the WSC 250 may be configured to identify a link to the warranty data structure 290 of the supplier as well as an authorization code for accessing the warranty data structure 290 in the universal data structure 270 at Step 2414.

[000246] Some embodiments of the present disclosure may involve accessing the link to perform a remote lookup of the specific product in the warranty data structure of the supplier. In a general sense, the warranty data structure may be similar to the data structures previously discussed. Additionally, the link used to enable access to the warranty data structure of the supplier may be similar to the link to the warranty data structure of the supplier previously discussed.

[000247] As used herein, accessing the link to perform a remote lookup of the specific product in the warranty data structure of the supplier may enable the warranty service center to read and/or write information, such as entity-specific warranty data on products offered by a given supplier, from/to the warranty data structure of the supplier via at least one network. Once the warranty service center accesses the link to the warranty data structure of the supplier, a system of the warranty service center may be able to remotely look up, search, or otherwise obtain information stored in the warranty data structure pertaining to a specific product for which warranty verification is sought. The remote lookup may be conducted based on the previously identified specific product in the universal data structure and may be used to verify warranty eligibility of the specific product.

[000248] Fig. 23 is a functional block diagram schematically illustrating exemplary network communications between the WSC 250 and the supplier 280 during certain operations of the remote artificial intelligence-assisted electronic warranty verification session. In one embodiment, the control system 260 of the WSC 250 may be configured to communicate with the warranty data structure 290 of the supplier 280 in possession of entity-specific warranty data on products provided by the supplier 280. For example, the second processing unit 264 of the control system 260 may include an external data access module 2167 and an external data processing module 2168 configured to communicate with the warranty data structure 290. The external data access module 2167 may be configured and operable to transmit a remote lookup request 2126 to the supplier 280 and to access product data 291 , customer data 292, and/or warranty data 293 stored in the warranty data structure 290 of the supplier 280.

[000249] Fig. 24 is a flow chart illustrating exemplary remote artificial intelligence-assisted electronic warranty verification operations in which the WSC may access the link to the warranty data structure of the supplier at Step 2502, perform a remote lookup of the specific product in the warranty data structure of the supplier at Step 2504, and receive a warranty coverage indication from the warranty data structure of the supplier at Step 2506. At Step 2502, the external data access module 2167 illustrated in Fig. 23 may be configured to access information stored in the warranty data structure 290 via a network link. Accessing the information stored in the warranty data structure 290 may involve transmitting a remote lookup request 2126 to the supplier 280. Upon transmitting a remote lookup request 2126 to the supplier 280, a remote lookup of the specific product in the warranty data structure 290 of the supplier 280 may be performed at Step 2504. The remote lookup request 2126 of the specific product in the warranty data structure 290 may be performed directly or indirectly by the external data access module 2167 and/or the external data processing module 2168 and may enable the external data access module 2167 and/or the external data processing module 2168 to access product data 291 , customer data 292, and/or warranty data 293 stored in the warranty data structure 290. Optionally, the remote lookup request 2126 may be a request to the supplier 280 to access product data 291 , customer data 292, and/or warranty data 293 stored in the warranty data structure 290.
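
A hedged sketch of such a remote lookup over HTTP follows; the endpoint path, query parameter, and response shape are hypothetical and do not correspond to any documented supplier API.

```python
# Perform a remote lookup of a specific product in a supplier's warranty
# data structure via an assumed HTTP endpoint.
import requests

def remote_warranty_lookup(warranty_link: str,
                           serial_number: str,
                           timeout: float = 10.0) -> dict:
    """Look up a specific product and return the supplier's response."""
    response = requests.get(
        f"{warranty_link}/warranty",                 # hypothetical path
        params={"serial_number": serial_number},
        timeout=timeout,
    )
    response.raise_for_status()
    # Assumed response shape: {"covered": bool, "expires": "YYYY-MM-DD"}
    return response.json()
```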

[000250] In another embodiment of the present disclosure, the universal data structure may include an authorization code for accessing the warranty data structure of the supplier and accessing the link may include transmitting the authorization code to the supplier. In a general sense, the authorization code for accessing the warranty data structure of the supplier may be the same as the authorization code for accessing the warranty data structure of the supplier previously discussed and may be transmitted in a manner similar to the previously discussed lookup request. Additionally, aspects of accessing the link may occur in a manner similar to accessing the link to the warranty data structure of the supplier previously discussed.

[000251] Fig. 25 illustrates a sequence diagram depicting exemplary network communications between the WSC 250 and the supplier 280 via at least one network 240 during certain operations of a remote artificial intelligence-assisted electronic warranty verification session. The communications between the WSC 250 and the supplier 280 may include an authorization code 2222, authorization permission 2224, a remote lookup request 2226, and a warranty coverage indication 2228. Fig. 26 is a flow chart illustrating exemplary remote artificial intelligence-assisted electronic warranty verification operations, consistent with another embodiment of the present disclosure. The following embodiments are presented with reference to the sequence diagram of Fig. 25 and the flow chart of Fig. 26 together with the functional block diagram of Fig. 23.

[000252] In one embodiment, the control system 260 of the WSC 250 illustrated in Fig. 23 may require authorization to access the warranty data structure 290 of the supplier 280 in order to access entity-specific warranty data on products provided by the supplier 280. If authorization is required to access data in the warranty data structure 290, the control system 260 may obtain authorization codes from the universal data structure 270 which may permit the control system 260 to use, or otherwise access, the warranty data structure 290 of the supplier. For example, the universal data structure 270 may store, or otherwise make accessible, unique authorization codes as authorization data 274 which may be used to access respective warranty data structures of the plurality of suppliers. Upon identifying an authorization code corresponding to the specific product for which warranty verification is sought, the control system 260 may be configured to use the authorization code to access the warranty data structure 290 of the supplier 280.

[000253] During the operations of the remote artificial intelligence-assisted electronic warranty verification session illustrated in Fig. 26, the control system 260 of the WSC 250 illustrated in Fig. 23 may be configured to perform the exemplary network communications illustrated in Fig. 25. In one embodiment, the external data access module 2167 illustrated in Fig. 23 may, at Step 2600, transmit an authorization code 2222 for accessing the link to the warranty data structure 290 to the supplier 280. At Step 2601, the external data access module 2167 may receive authorization permission 2224 to access the link to the warranty data structure 290 from the supplier 280. Upon receiving the authorization permission 2224 to access the link, the external data access module 2167 may access the link to the warranty data structure 290 of the supplier 280 at Step 2602. Accessing the link to the warranty data structure 290 may include transmitting a remote lookup request 2226 to the supplier 280. Once access to the warranty data structure 290 has been authorized, the external data access module 2167 and/or the external data processing module 2168 may perform a remote lookup of the specific product in the warranty data structure 290 of the supplier 280 at Step 2604 and receive a warranty coverage indication 2228 from the supplier 280 at Step 2606.

[000254] Some embodiments of the present disclosure may involve receiving a warranty coverage indication from the warranty data structure of the supplier. As used herein, the term warranty coverage indication may relate to any information, such as entity-specific warranty information, which may describe, or otherwise be used to identify, the warranty eligibility of the specific product for which warranty verification is sought. The warranty coverage indication may be received, or otherwise accessed by, the warranty service center from the supplier. In a general sense, the receiving of a warranty coverage indication from the warranty data structure of the supplier may occur in a manner similar to the receiving of a warranty coverage indication, as previously discussed.
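
By way of non-limiting illustration, the Fig. 25-style exchange might be sketched as follows; the endpoints, payloads, and token mechanics are assumptions for illustration only.

```python
# Transmit the authorization code, receive permission, then perform the
# remote lookup and receive the warranty coverage indication.
import requests

def authorized_warranty_lookup(endpoint: str,
                               authorization_code: str,
                               serial_number: str) -> dict:
    # Transmit the authorization code to the supplier (cf. Step 2600).
    auth = requests.post(f"{endpoint}/authorize",
                         json={"code": authorization_code}, timeout=10.0)
    auth.raise_for_status()
    token = auth.json()["token"]        # assumed permission token (cf. Step 2601)
    # Access the link and perform the remote lookup (cf. Steps 2602-2604).
    lookup = requests.get(f"{endpoint}/warranty",
                          params={"serial_number": serial_number},
                          headers={"Authorization": f"Bearer {token}"},
                          timeout=10.0)
    lookup.raise_for_status()
    return lookup.json()                # coverage indication (cf. Step 2606)
```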

[000255] Referring back to the functional block diagram illustrated in Fig. 23, together with the flow chart of Fig. 24, the control system 260 of the WSC 250 may be configured to receive a warranty coverage indication 2128 from the warranty data structure 290 of the supplier 280 via at least one network 240 at Step 2506. For example, once the warranty data structure 290 of the supplier 280 has been accessed at Step 2502, and the external data access module 2167 and/or the external data processing module 2168 has performed a remote lookup of the specific product in the warranty data structure 290 of the supplier 280 at Step 2504, the second processing unit 264 may be configured to receive a warranty coverage indication 2128 from the warranty data structure 290 of the supplier 280 at Step 2506.

[000256] In one embodiment, the external data access module 2167 and/or the external data processing module 2168 of the control system 260 may compare the specific product information obtained from the universal data structure 270 to product data 291 in the warranty data structure 290 in order to determine corresponding customer data 292 on the entity seeking warranty verification and/or warranty data 293 related to said product. In one example, the warranty data 293 may relate to warranty eligibility time, such as time remaining in a warranty period and/or options for addressing problems covered under a warranty for the specific product. In another embodiment, the external data processing module 2168 may be configured to process and/or analyze information received from the warranty data structure 290 of the supplier 280 in order to determine the extent to which the specific product for which warranty verification is sought might be covered.
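
As a non-limiting sketch of the warranty eligibility time mentioned above, the time remaining in a warranty period might be computed from the purchase date and an assumed coverage period as follows.

```python
# Compute days left in a warranty period (negative if already expired).
from datetime import date, timedelta

def days_remaining(purchase_date: date, warranty_days: int,
                   today: date | None = None) -> int:
    today = today or date.today()
    expiry = purchase_date + timedelta(days=warranty_days)
    return (expiry - today).days

# Example: a product purchased 2021-06-15 with a 365-day warranty has
# days_remaining(date(2021, 6, 15), 365, today=date(2021, 12, 30)) == 167.
```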

[000257] Some embodiments of the present disclosure may involve transmitting to the entity an indication of warranty coverage. As used herein, the term indication of warranty coverage may relate to any warranty eligibility information which may be provided to an entity seeking warranty verification of a specific product. The indication of warranty coverage may indicate the extent to which the specific product for which warranty verification is sought is covered, for example, an instruction on how to achieve a warranty-related remedy. Alternatively, the indication of warranty coverage may indicate that the specific product for which warranty verification is sought is not covered under the warranty for the specific product. An indication that a product is not covered under warranty may include information on how to fix the product or may offer an alternative product to purchase.

[000258] The indication of warranty coverage may be transmitted from the warranty service center to the entity’s mobile communications device (or to any computing device associated with the entity) based on information received from the supplier of the specific product. In a general sense, the indication of warranty coverage transmitted to an entity seeking warranty verification may be similar to aspects of the previously discussed warranty coverage indication received from the supplier. Moreover, transmitting an indication of warranty coverage to the entity seeking warranty verification may occur in a manner similar to transmitting the instruction to an entity to capture at least one product image of a specific product or an image of a purchase receipt for the specific product, as previously discussed.

[000259] Fig. 27 illustrates a functional block diagram in which the control system 260 of WSC 250 is configured to transmit an indication of warranty coverage 2120 with respect to a product for which warranty verification is sought to the entity’s mobile communications device 230 during certain operations of the remote artificial intelligence-assisted electronic warranty verification session. Fig. 28 is a flow chart illustrating exemplary operations of the remote artificial intelligence-assisted electronic warranty verification session related to warranty coverage, as depicted in Fig. 27. Figs. 29A-29D illustrate exemplary interactive applications relating to the remote artificial intelligence-assisted electronic warranty verification session displayed on an entity’s mobile communications device, consistent with at least one embodiment of the present disclosure.

[000260] The control system of the WSC 250 illustrated in Fig. 27 may be configured to transmit an indication of warranty coverage 2120 with respect to a product for which warranty verification is sought to the entity’s mobile communications device 230 via at least one network during certain operations of the remote artificial intelligence-assisted electronic warranty verification session. For example, the WSC 250 may be configured to receive a warranty coverage indication from the warranty data structure 290 of the supplier 280 at Step 2702 of Fig. 28, determine the scope of the warranty coverage indication obtained from the warranty data structure 290 of the supplier 280 at Step 2704, and transmit to the entity 220 an indication of warranty coverage 2120 at Step 2706.
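
A minimal sketch of turning the supplier's coverage indication into the entity-facing messages of Figs. 29B-29D follows; the indication fields and message wording are assumptions for illustration.

```python
# Map an (assumed) coverage indication to a user-facing message.
def coverage_message(indication: dict) -> str:
    if indication.get("covered"):
        msg = "Thank you! Based on your information you are eligible for service."
        if indication.get("remedy_instruction"):
            msg += " " + indication["remedy_instruction"]   # e.g., next steps
        return msg
    # Conclusion of non-coverage: optionally point to alternative products.
    msg = "Unfortunately, this product is not covered under warranty."
    if indication.get("alternative_url"):
        msg += " Please visit the supplier's store for alternative products."
    return msg
```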

[000261] In one embodiment, the indication of warranty coverage 2120 may be presented via an interactive application on a mobile communications device 230, as depicted in Figs. 29A-29D. The interactive application may enable the entity 220 to obtain an indication of warranty coverage. Optionally, the interactive application may enable the entity 220 to take the appropriate next steps in addressing the issue for which warranty verification was sought. In the example illustrated in Fig. 29A, the interactive application visually displayed on the display unit 231 of the mobile communications device 230 may depict the results of the previously conducted image analysis. The results of the image analysis may list the specific product for which warranty verification is sought, the purchase date, and product identifying information. The results of the image analysis may also include additional fields which may enable the entity 220 to input further information which may be required by the WSC 250 and/or the supplier 280 to proceed with the warranty verification process.

[000262] In one embodiment, the indication of warranty coverage 2120 may include a conclusion of coverage under the warranty which may indicate the extent to which the specific product for which warranty verification is sought may be covered under the warranty. In the example illustrated in Fig. 29B, the interactive application visually displayed on the display unit 231 of the entity’s mobile communications device 230 contains an indication of warranty coverage 2120. In this example, the indication of warranty coverage 2120 indicates that the product for which support is sought is covered under the warranty (e.g., “Thank You! Based on your information you are eligible for service.”).

[000263] In another embodiment of the present disclosure, the indication of warranty coverage may include an instruction on how to achieve a warranty-related remedy. As disclosed herein, the instruction on how to achieve a warranty-related remedy may include instructions on how to return the product, where to return the product, how to obtain replacement parts, how to receive repair services, and/or additional required submissions to receive compensation, replacement parts, or repair services. In the example illustrated in Fig. 29B, the interactive application visually displayed on the display unit 231 of the mobile communications device 230 contains an indication of warranty coverage 2120 which includes a conclusion of coverage under the warranty as well as an instruction directing the entity to watch their email for next steps (“Please watch your email for a free service voucher.”).

[000264] In another embodiment of the present disclosure, the warranty coverage indication may include an authorization to collect warranty compensation from the supplier. As disclosed herein, an authorization to collect warranty compensation from the supplier may relate to permission given by the supplier to the warranty service center to allow the entity to repair, exchange, or otherwise be reimbursed for a product that does not function as originally described or intended. In the example illustrated in Fig. 29C, the interactive application visually displayed on the display unit 231 of the mobile communications device 230 contains an indication of warranty coverage 2121 which includes a conclusion of coverage under the warranty as well as an instruction to click the widget 236 to collect compensation from the supplier (e.g., “Please click below to collect warranty compensation from the supplier.”). Thus, the entity 220 may receive, from the WSC 250, authorization to collect warranty compensation from the supplier 280.

[000265] In another embodiment of the present disclosure, the warranty coverage indication may include a conclusion of non-coverage. The conclusion of non-coverage may indicate that the specific product for which warranty verification is sought is not covered under the warranty for the specific product. An indication that a product is not covered under warranty may include information on how to fix the product or may offer an alternative product to purchase. In the example illustrated in Fig. 29D, the interactive application visually displayed on the display unit 231 of the mobile communications device 230 contains an indication of warranty non-coverage 2122 indicating that the product for which support is sought is not covered under the warranty, as well as an instruction to click the widget 236 to access the supplier’s store for alternative products (e.g., “Please click below to access the supplier’s store for alternative products to purchase.”).

[000266] Accordingly, the systems, methods, and non-transitory computer readable medium disclosed herein may relate to artificial intelligence-assisted electronic warranty verification operations in which the warranty service center may remotely perform product warranty verification in an automated fashion using artificial intelligence. The operations may relate to a plurality of simultaneous artificial intelligence-assisted electronic warranty verification sessions with a plurality of entities. The techniques and/or operations disclosed herein may establish a self-service mechanism in which human support from the warranty verification center is not required. For example, an interactive self-service application accessible via the entity’s mobile communications device may be utilized during the warranty verification session. The remote artificial intelligence-assisted electronic warranty verification operations may be useful for increasing the accuracy of warranty identification outcomes, increasing the accuracy of warranty eligibility verification, reducing the frequency of incorrect warranty coverage indications, limiting the incidence of communication with customer support assistants, shortening consumer wait time, and improving customer satisfaction and independence.

[000267] As used throughout this disclosure, the terms “processor,” “computer,” “control system,” “controller,” “control unit,” “processing unit,” “computing unit,” and/or “processing module” should be expansively construed to cover any kind of electronic device, component, or unit with data processing capabilities, including, by way of a non-limiting example, a personal computer, a wearable computer, a tablet, a smartphone, a server, a computing system, a cloud computing platform, a communication device, a processor, possibly with embedded memory, a single-core processor, a multi-core processor, a core within a processor, any other electronic computing device, or any combination of the above. The operations, in accordance with the teachings disclosed herein, may be performed by a control system specially constructed or programmed to perform the described functions.

[000268] As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “unit,” “circuit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.

[000269] As described hereinabove and shown in the associated figures, the present disclosure provides warranty verification techniques, systems, and methods for expeditiously verifying a product’s warranty during an automated self-service session using artificial intelligence to analyze a product image and/or a purchase receipt. While particular embodiments of the disclosure have been described, it will be understood that the disclosure is not limited thereto, since modifications may be made by those skilled in the art, particularly in light of the foregoing teachings. As will be appreciated by the skilled person, the disclosure can be carried out in a great variety of ways, employing more than one technique from those described above, all without exceeding the scope of the claims.

[000270] The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed herein. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. Although certain aspects of the disclosed embodiments are described as being stored in memory or a data structure, one skilled in the art will appreciate that these aspects can also be stored on other types of computer readable media, such as secondary storage devices, e.g., hard disks or CD-ROM, or other forms of RAM or ROM, USB media, DVD, Blu-ray, Ultra HD Blu-ray, or any other optical drive media.

[000271] Computer programs based on the written description and disclosed methods are within the skills of an experienced developer. The various programs or program modules may be created using any of the techniques known to one skilled in the art or may be designed in connection with existing software. For example, program sections or program modules can be designed in or by means of .Net Framework, .Net Compact Framework (and related languages, such as Visual Basic, C, etc.), Java, C++, Objective-C, HTML, HTML/AJAX combinations, XML, or HTML with included Java applets. Additionally, it is to be understood that the technology disclosed herein may be implemented by software which may be integrated into existing computer readable media and/or systems of technical support centers, warranty support centers, and/or organizations, and which may replace existing software or work in parallel thereto.

[000272] Moreover, while illustrative embodiments have been described herein, the scope of the present disclosure encompasses any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations, and/or alterations as would be appreciated by those skilled in the art based on the present disclosure. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. The examples are to be construed as non-exclusive. Furthermore, the steps of the disclosed methods may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.