

Title:
SYSTEMS AND METHODS FOR TIME-SHARING AND TIME-SHIFTING INTERACTIONS USING A SHARED ARTIFICIAL INTELLIGENCE PERSONALITY
Document Type and Number:
WIPO Patent Application WO/2021/257106
Kind Code:
A1
Abstract:
Systems and methods are described for time-sharing interactions using a shared artificial intelligence personality (AIP) incorporated within multiple human interaction entities (HIEs). An AIP is an understanding construct that may control a variety of communication experiences to support a sense of ongoing social connectedness. An AIP may be instantiated within two or more HIEs that interact with humans in a human, cartoon, or pet-like manner. HIEs may include robots, robotic pets, toys, simple-to-use devices, and graphical user interfaces. The AIP is updated based on interactions sensed by the HIEs as well as knowledge of historical and ongoing events. The systems may provide multiple users with intuitive machine companions that exhibit an expert knowledge base and a familiar, cumulative personality. HIEs may continue to operate without interruption in the presence of communication interruptions and/or the absence of one or more human participants, allowing participants to "time-share" and/or "time-shift" their sense of connectedness.

Inventors:
PUBLICOVER NELSON (US)
MARGGRAFF LEWIS JAMES (US)
MARGGRAFF MARY JO (US)
Application Number:
PCT/US2020/043781
Publication Date:
December 23, 2021
Filing Date:
July 27, 2020
Assignee:
KINOO INC (US)
International Classes:
G06F3/01; G06F3/00
Foreign References:
US20160042648A12016-02-11
US20160073059A12016-03-10
US20090055019A12009-02-26
US20100115427A12010-05-06
KR20190133328A2019-12-03
Attorney, Agent or Firm:
ENGLISH, William, A. (US)
Claims:
We claim:

1. A method to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating a first artificial intelligence personality into the first human interaction entity, comprising installing the first artificial intelligence personality with the first processor such that the first artificial intelligence personality interacts with the first human via the one or more first interaction output devices; acquiring, during an interaction between the first human interaction device and the first human, interaction data from the one or more first sensors; computing, with one or more artificial intelligence processors, a second artificial intelligence personality comprising a single cumulative personality based at least in part on the first artificial intelligence personality and the interaction data; and instantiating, with the second processor, the second artificial intelligence personality into the second human interaction entity.

2. The method of claim 1, wherein the first human interaction entity is one of a robot, a robotic pet, a toy, an avatar, a displayed image, a virtual reality object, an augmented reality object, a hologram, and a hologram-like projection.

3. The method of claim 1, wherein the one or more first sensors comprise one or more of at least one environmental sensor for measuring one or more elements within the environment of the first human interaction entity, or at least one human interaction sensor for measuring interactions between the first artificial intelligence entity and the first human.

4. The method of claim 3, wherein the at least one environmental sensor comprises one or more cameras, light sensors, thermal sensors, motion sensors, accelerometers, global positioning system (GPS) transceivers, microphones, infrared (IR) sensors, galvanometric sensors, pressure sensors, switch sensors, magnetic sensors, proximity sensors, date and time clocks, Bluetooth transceivers, and Wi-Fi transceivers.

5. The method of claim 3, wherein the at least one human interaction sensor comprises one or more cameras, thermal sensors, motion sensors, accelerometers, microphones, infrared (IR) sensors, galvanometric sensors, heart rate sensors, electrocardiogram sensors, electrooculogram sensors, electroencephalogram sensors, pulse oximeters, pressure sensors, magnetic sensors, computer mice, joysticks, keyboards, touch screens, and proximity sensors.

6. The method of claim 1, wherein the interaction data comprise one or more of: data acquired from the one or more environmental sensors, data acquired from the one or more human interaction sensors, physical states of one or more humans within the vicinity of the first human interaction entity, physiological states of one or more humans within the vicinity of the first human interaction entity, cognitive states of one or more humans within the vicinity of the first human interaction entity, emotional states of one or more humans within the vicinity of the first human interaction entity, changes in the physical, physiological, cognitive, or emotional states of one or more humans within the vicinity of the first human interaction entity, one or more spoken words within the vicinity of the first human interaction entity, one or more recognized objects within images acquired by the first human interaction entity, and one or more gestures performed by one or more humans within the vicinity of the first human interaction entity.

7. The method of claim 1, wherein the one or more first interaction output devices comprise one or more of video display devices, hologram display devices, holographic-like projectors, speakers, propulsion systems, servos, motors, magnetic field controllers, orientation controllers, haptic controllers, light pointing devices, switch controllers, and controllable tactile surfaces.

8. The method of claim 1, wherein additional artificial personalities are computed using data that include additional interaction data from one or more of the first human interaction entity, the second human interaction entity, and additional human interaction entities.

9. The method of claim 8, wherein the additional artificial personalities are computed by one or more of the first human interaction processor, the second human interaction processor, additional human interaction processors, and one or more connected artificial intelligence processors.

10. The method of claim 9, wherein, once computed, the additional artificial intelligence personalities are instantiated into one or more of the first human interaction processor, the second human interaction processor, and additional human interaction processors.

11. The method of claim 1, wherein the interaction data include an indication of a first human desire to disconnect from interacting with the second human.

12. The method of claim 11, wherein the first human desire to disconnect from interacting with the second human is not known by the second human.

13. The method of claim 11, wherein the second artificial intelligence personality indicates, with one or more of the second interaction output devices, a second artificial intelligence personality desire to maintain connectedness with the second human in the absence of the first human.

14. The method of claim 11, wherein the second artificial intelligence personality indicates, with one or more of the second interaction output devices, a first human desire to disconnect from interacting with the second human.

15. The method of claim 11, wherein the interaction data include one of a time and a range of times when the first human anticipates reconnecting with the second human.

16. The method of claim 15, wherein the second artificial intelligence personality maintains connectedness with the second human in a manner that anticipates that the first human reconnects with the second human after the time or range of times specified by the first human.

17. A method to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; instantiating an artificial intelligence personality into the first human interaction entity, comprising installing the artificial intelligence personality with the first processor such that the artificial intelligence personality interacts with the first human via the one or more first interaction output devices; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating the artificial intelligence personality into the second human interaction entity, comprising installing the artificial intelligence personality with the second processor such that the artificial intelligence personality interacts with the second human via the one or more second interaction output devices; acquiring, during an interaction between the first human interaction device and the first human, interaction data from the one or more first sensors; determining, by the first processor based on the interaction data, a first human desire to stop interacting with the second human; transmitting, from the first processor to the second processor, the first human desire; and initiating, by the artificial intelligence personality using the one or more second interaction output devices, one or more pre-determined activities with the second human.

18. The method of claim 17, wherein the one or more pre-determined activities include one or more of playing a game, participating in play with a toy, reading a story, watching a video, critiquing a movie, conversing, initiating a conversation with one or more other humans, performing a learning experience, helping to write a communication, drawing a picture, coding a program, paying a bill, planning an activity, reminding about upcoming events, building a virtual object, helping to construct a real object, and instructing to go to bed.

19. A method to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; instantiating an artificial intelligence personality into the first human interaction entity, comprising installing the artificial intelligence personality with the first processor such that the artificial intelligence personality interacts with the first human via the one or more first interaction output devices; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating the artificial intelligence personality into the second human interaction entity, comprising installing the artificial intelligence personality with the second processor such that the artificial intelligence personality interacts with the second human via the one or more second interaction output devices; acquiring, during an interaction between the first human interaction device and the first human, interaction data from the one or more first sensors; determining, by the first processor based on the interaction data, one or more first human directives to the second human; transmitting, from the first processor to the second processor, the one or more first human directives; and initiating, by the artificial intelligence personality using the one or more second interaction output devices, the one or more first human directives with the second human.

20. The method of claim 19, wherein the one or more first human directives include one or more of playing a game, participating in play with a toy, reading a story, watching a video, conversing, initiating conversation with one or more additional humans, performing a learning experience, helping to write a communication, helping to construct a drawing, coding a program, paying a bill, reminding about upcoming events, building a virtual object, helping to construct a real object, and instructing to go to bed.

21. A method to share an artificial intelligence personality among multiple human interaction entities comprising: instantiating, with a first processor operatively coupled to a first human interaction entity, a first artificial intelligence personality into the first human interaction entity; instantiating, with a second processor operatively coupled to a second human interaction entity, the first artificial intelligence personality into the second human interaction entity; acquiring artificial intelligence personality update data from one or more sensors operatively coupled to the first human interaction entity; determining a second artificial intelligence personality based at least in part on the first artificial intelligence personality and the artificial intelligence personality update data; instantiating, with the first processor, the second artificial intelligence personality into the first human interaction entity; transmitting to the second human interaction entity, the second artificial intelligence personality update data, and instantiating, with the second processor, the second artificial intelligence personality into the second human interaction entity.

22. The method of claim 21, wherein one or both of the first and second human interaction entities is one of a robot, a robotic pet, a toy, an avatar, a displayed image, a virtual reality object, an augmented reality object, a hologram, and a hologram-like projection.

23. The method of claim 21, wherein the one or more sensors comprise one or more of device sensors comprising sensors measuring one or more elements within the environment of the one or more device sensors, and human interaction sensors comprising sensors measuring interactions with the one or more humans.

24. The method of claim 23, wherein the device sensors comprise one or more cameras, light sensors, thermal sensors, motion sensors, accelerometers, global positioning system (GPS) transceivers, microphones, infrared (IR) sensors, galvanometric sensors, pressure sensors, switch sensors, magnetic sensors, proximity sensors, date and time clocks, Bluetooth transceivers, and Wi-Fi transceivers.

25. The method of claim 23, wherein the human interaction sensors comprise one or more cameras, thermal sensors, motion sensors, accelerometers, microphones, infrared (IR) sensors, galvanometric sensors, heart rate sensors, electrocardiogram sensors, electrooculogram sensors, electroencephalogram sensors, pulse oximeters, pressure sensors, magnetic sensors, computer mice, joysticks, keyboards, touch screens, and proximity sensors.

26. The method of claim 23, wherein the artificial intelligence personality update data comprise one or more of: data acquired from one or more of the device sensors, data acquired from one or more of the human interaction sensors, one or more differences between the first artificial intelligence personality and the second artificial intelligence personality, an entire data set representing an artificial intelligence personality, and derived data, comprising one or more of: physical states of one or more of the humans within the vicinity of the first human interaction entity, physiological states of one or more of the humans within the vicinity of the first human interaction entity, cognitive states of one or more of the humans within the vicinity of the first human interaction entity, emotional states of one or more of the humans within the vicinity of the first human interaction entity, changes in the physical, physiological, cognitive, or emotional states of one or more humans within the vicinity of the first human interaction entity, one or more spoken words within the vicinity of the first human interaction entity, one or more recognized objects within images, and one or more gestures performed by one or more humans within the vicinity of the first human interaction entity.

27. The method of claim 21, wherein the human interaction entities include one or more actuators comprising one or more video display devices, hologram display devices, holographic-like projectors, speakers, propulsion systems, servos, motors, magnetic field controllers, orientation controllers, haptic controllers, light pointing devices, switch controllers, and controllable tactile surfaces.

28. The method of claim 21, wherein additional artificial personalities are determined from one or more of interaction data from the first human interaction entity, second interaction data from the second human interaction entity, and additional interaction data from additional human interaction entities.

29. The method of claim 28, wherein the additional artificial personalities are computed by one or more of the first human interaction entity, the second human interaction entity, the additional human interaction entities, and one or more remote processors.

30. The method of claim 29, wherein, once computed, the additional artificial intelligence personalities are transmitted to one or more of the first human interaction entity, the second human interaction entity, and additional human interaction entities.

31. The method of claim 21, wherein instantiating the first human interaction entity and the second human interaction entity with the second artificial intelligence personality occurs substantially instantaneously, within less than one second; or following a substantial delay, greater than or equal to one second.

32. The method of claim 21, wherein the first artificial intelligence personality is maintained in human interaction entities during one or more of interrupted transmission, erroneous transmission, computational delays, and propagation delays.

33. The method of claim 21, wherein a relative difference in time between a transmission of update data and a receiving of update data takes into account one or more of: relative changes in time due to a velocity of a sending device relative to a receiving device as described by Einstein’s special theory of relativity, and relative changes in time due to an acceleration of the sending device relative to the receiving device as described by Einstein’s general theory of relativity.
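For context only (this relation is not part of the claim language), the special-relativistic correction referenced in claim 33 is the standard time-dilation formula:

\[ \Delta t' = \frac{\Delta t}{\sqrt{1 - v^{2}/c^{2}}} \]

where \( \Delta t \) is the interval measured at the sending device, \( v \) is the relative velocity of the sending and receiving devices, and \( c \) is the speed of light; an analogous correction for relative acceleration or gravitational potential follows from the general theory of relativity.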

34. A system to share an artificial intelligence personality among multiple human interaction entities comprising: a first human interaction entity instantiated with a first artificial intelligence personality; a second human interaction entity instantiated with the first artificial intelligence personality; one or more sensors operatively coupled to the first human interaction entity; a coordinating processor for determining a second artificial intelligence personality based at least in part on the first artificial intelligence personality and update data acquired from the one or more sensors operatively coupled to the first human interaction entity; a first processor instantiating the first human interaction entity with the second artificial intelligence personality; a communication interface for transmitting sufficient data to compute the second artificial intelligence personality from the first artificial intelligence personality to the second human interaction entity; and a second processor instantiating the second human interaction entity with the second artificial intelligence personality.

35. The system of claim 34, wherein additional artificial personalities are determined from one or more inputs from one or more of: the one or more sensors operatively coupled to the first human interaction entity, one or more sensors operatively coupled to the second human interaction entity interacting with a second human, and one or more sensors operatively coupled to additional human interaction entities interacting with additional humans.

36. A system to share an artificial intelligence personality among multiple human interaction entities comprising: a first human interaction entity instantiated with a first artificial intelligence personality; a second human interaction entity instantiated with the first artificial intelligence personality; a coordinating processor configured to compute artificial intelligence personalities; one or more sensors operatively coupled to at least one of the first and second human interaction entities communicating update data to the coordinating processor; wherein the coordinating processor is configured to compute a new artificial intelligence personality based at least in part on the first artificial intelligence personality and the update data; and an interface for transmitting one of the new artificial intelligence personality and sufficient data components to compute the new artificial personality, from the coordinating processor to the multiple human interaction entities, whereupon the first and second human interaction entities are instantiated with the new artificial intelligence personality.

37. The system of claim 36, wherein at least one of the first and second human interaction entities is one of a robot, a robotic pet, a toy, an avatar, a displayed image, a virtual reality object, an augmented reality object, a hologram, and a hologram-like projection.

38. The system of claim 36, wherein the one or more sensors comprise one or more of device sensors comprising sensors measuring one or more elements within the environment of the one or more device sensors, and human interaction sensors comprising sensors measuring interactions with the one or more humans.

39. The system of claim 38, wherein the device sensors comprise one or more cameras, light sensors, thermal sensors, motion sensors, accelerometers, global positioning system (GPS) transceivers, microphones, infrared (IR) sensors, galvanometric sensors, pressure sensors, switch sensors, magnetic sensors, proximity sensors, date and time clocks, Bluetooth transceivers, and Wi-Fi transceivers.

40. The system of claim 38, wherein the human interaction sensors comprise one or more cameras, thermal sensors, motion sensors, accelerometers, microphones, infrared (IR) sensors, galvanometric sensors, heart rate sensors, electrocardiogram sensors, electrooculogram sensors, electroencephalogram sensors, pulse oximeters, pressure sensors, magnetic sensors, computer mice, joysticks, keyboards, touch screens, and proximity sensors.

41. A system to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: a first human interaction entity configured to be located in proximity to a first human, comprising a first electronic device that includes a first processor, a first entity communication interface, one or more first interaction output devices operatively coupled to the first processor that actuates interactions between the first human interaction entity and the first human, and one or more first sensors operatively coupled to the first processor that sense interactions between the first human interaction entity and the first human, the first human interaction entity configured to instantiate a first artificial intelligence personality; a second human interaction entity configured to be located in proximity to a second human comprising a second electronic device that includes a second processor, a second entity communication interface, one or more second interaction output devices operatively coupled to the second processor that actuates interactions between the second human interaction entity and the second human, and one or more second sensors operatively coupled to the second processor that sense interactions between the second human interaction entity and the second human; one or more connected artificial intelligence processors that receive interaction data acquired during an interaction between the first human interaction entity and the first human transmitted via the first entity communication interface, the one or more connected artificial intelligence processors configured to determine a second artificial intelligence personality based at least in part on the first artificial intelligence personality and the interaction data; and a coordinating processor configured to send data to compute the second artificial intelligence personality from the one or more connected artificial intelligence processors to the second processor that instantiates the second human interaction entity with the second artificial intelligence personality, wherein the second artificial intelligence personality is configured to be instantiated within the second human interaction entity to indicate, via one or more second interaction output devices, a desire by the first human to connect with the second human, wherein the second artificial intelligence personality and the first artificial intelligence personality comprise a single cumulative personality that is updated by the one or more connected artificial intelligence processors based at least in part on the interaction data acquired during interactions between the first human and the first artificial intelligence personality, and wherein the first processor is configured to instantiate the first artificial intelligence personality into the first human interaction entity by installing the first artificial intelligence personality such that the first artificial intelligence personality interacts with the first human via the one or more first interaction output devices.

42. The system of claim 41, wherein the data sent by the coordinating processor to instantiate the second artificial intelligence personality comprise one or more of: data acquired from one or more of the first sensors; physical states of one or more humans within the vicinity of the first human interaction entity; physiological states of one or more humans within the vicinity of the first human interaction entity; cognitive states of one or more humans within the vicinity of the first human interaction entity; emotional states of one or more humans within the vicinity of the first human interaction entity; changes in the physical, physiological, cognitive, or emotional states of one or more humans within the vicinity of the first human interaction entity; one or more spoken words within the vicinity of the first human interaction entity; one or more recognized objects within images acquired by the first human interaction entity; one or more gestures performed by one or more humans within the vicinity of the first human interaction entity; one or more differences between the first artificial intelligence personality and the second artificial intelligence personality; and an entire data set representing an artificial intelligence personality.

43. A system to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: a first human interaction entity configured to be located in proximity to a first human, comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor, the first human interaction entity configured to install a first artificial intelligence personality with the first processor such that the first artificial intelligence personality interacts with the first human via the one or more first interaction output devices; a second human interaction entity configured to be located in proximity to a second human comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; one or more connected artificial intelligence processors that receive interaction data acquired during an interaction between the first human interaction entity, the one or more connected artificial intelligence processors configured to determine a second artificial intelligence personality based at least in part on the first artificial intelligence personality and the interaction data, the system configured to: acquire, during an interaction between the first human interaction device and the first human, interaction data from the one or more first sensors; compute, with one or more artificial intelligence processors, a second artificial intelligence personality comprising a single cumulative personality based at least in part on the first artificial intelligence personality and the interaction data; and instantiate, with the second processor, the second artificial intelligence personality into the second human interaction entity.

44. A system to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: a first human interaction entity configured to be located in proximity to a first human, comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor, the first human interaction entity configured to install a first artificial intelligence personality with the first processor such that the first artificial intelligence personality interacts with the first human via the one or more first interaction output devices; a second human interaction entity configured to be located in proximity to a second human comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; one or more connected artificial intelligence processors that receive interaction data acquired during an interaction between the first human interaction entity, the one or more connected artificial intelligence processors configured to determine a second artificial intelligence personality based at least in part on the first artificial intelligence personality and the interaction data, the system programmed to perform the methods of any of claims 1-33.

45. A method to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; instantiating an artificial intelligence personality into the first human interaction entity, comprising installing the artificial intelligence personality with the first processor such that the artificial intelligence personality interacts with the first human via the one or more first interaction output devices; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating the artificial intelligence personality into the second human interaction entity, comprising installing the artificial intelligence personality with the second processor such that the artificial intelligence personality interacts with the second human via the one or more second interaction output devices; acquiring, during an interaction between the first human interaction device and the first human, first interaction data from the one or more first sensors; identifying, by the first processor, from the first interaction data, an action to be performed by the second human interaction entity upon sensing a condition for performing the action; transmitting, from the first processor to the second processor, indicators of the action and the condition; after receiving the indicators, acquiring, during an interaction between the second human interaction device and the second human, second interaction data from the one or more second sensors; and identifying, by the second processor, from the second interaction data, the condition for performing the action.
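As a purely illustrative sketch of the condition/action hand-off described in this claim (the class, field, and data names below are hypothetical assumptions, not taken from the disclosure), the first device might package a condition together with an action, and the second device might later evaluate the condition against its own interaction data:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Trigger:
    """Hypothetical pairing of a condition with an action, identified by the first HIE."""
    condition: Callable[[Dict[str, str]], bool]   # evaluated against later interaction data
    action: Callable[[], None]                    # performed via the second interaction output devices

# Indicators of the action and condition, as "transmitted" from the first to the second processor.
pending = Trigger(
    condition=lambda data: data.get("location") == "home",
    action=lambda: print("Playing the message recorded by the first human."),
)

# The second HIE later acquires second interaction data and identifies the condition.
second_interaction_data = {"location": "home", "activity": "reading"}
if pending.condition(second_interaction_data):
    pending.action()   # an acknowledgement could then be sent back to the first processor
```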

46. The method of claim 45, wherein the action is performed with the one or more second interaction output devices.

47. The method of claim 46, wherein an acknowledgement that the action was performed is transmitted from the second processor to the first processor.

48. The method of claim 45, wherein the action comprises one or more of playing an audio clip, playing a video clip, playing an audiovisual clip, displaying one or more images, making a move within a game, showing a text message, showing the contents of an email, displaying a hologram, displaying an image on an augmented reality display, showing an image on a virtual reality display, playing a song, displaying a document, showing a spreadsheet, generating an electronic signature, providing an access code, loading a software application, providing a link to additional information, controlling a physical device, identifying a location, providing contact information, producing a calendar event, generating an alert, providing an attached file, and transmitting from the second processor to the first processor an acknowledgement that the condition was identified.

49. The method of claim 45, wherein additional actions to be performed upon sensing additional conditions for performing the additional actions are determined during additional interactions between the first human interaction device and the first human; and indicators of the additional actions and additional conditions are transmitted to the second processor.

50. The method of claim 45, wherein additional human actions to be performed upon sensing additional human conditions for performing the additional actions are determined during human interactions between an additional interaction device and an additional human; and indicators of the additional human actions and additional human conditions are transmitted to the second processor.

51. The method of claim 45, wherein the condition for performing the action comprises identifying one or more of a specific time, a recurring time, an elapsed time, the second human performing an activity, the second human arriving at a geographic location, the second human being at a location, the second human observing a location, the second human pointing toward a location, a state within a game, the second human generating a verbal statement, the second human generating a gesture, the second human generating a facial expression, the second human being in the presence of another person, the second human being in the presence of a pet, the second human being in the vicinity of a device, the second human receiving information from a third human, the occurrence of a world event, a health condition of the second human, the second human receiving a message concerning a particular topic, the second human generating text, and a contact established by the second human with a third human.

52. The method of claim 51, wherein the one or more of specific time to perform the action, recurring time to perform the action, and elapsed time to perform the action are transmitted to one or more of a calendar application, an alarm clock application and a user-notification application.

53. The method of claim 45, wherein the action is performed based on identifying the condition from one or more of additional first interaction data from the first human, additional second interaction data from the second human and additional interaction data from additional humans.

54. The method of claim 45, wherein the condition for performing the action is not known by the second human.

55. A method to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; instantiating an artificial intelligence personality into the first human interaction entity, comprising installing the artificial intelligence personality with the first processor such that the artificial intelligence personality interacts with the first human via the one or more first interaction output devices; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating the artificial intelligence personality into the second human interaction entity, comprising installing the artificial intelligence personality with the second processor such that the artificial intelligence personality interacts with the second human via the one or more second interaction output devices; classifying, during one or more interactions between the first human interaction device and the first human, a first interaction topic; transmitting, from the first processor to the second processor, the one or more interactions classified as being associated with the first interaction topic; and presenting collectively, with the one or more second interaction output devices, the one or more interactions classified as being associated with the first interaction topic.
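A minimal, hypothetical sketch of the topic classification and "chunked" presentation described in this claim might look as follows (the toy keyword classifier and example interactions are assumptions for illustration only):

```python
from collections import defaultdict

# Hypothetical interactions sensed by the first HIE.
interactions = [
    "Remember to water the tomatoes",
    "Grandma's birthday is on Saturday",
    "The tomatoes need fertilizer too",
]

def classify_topic(text: str) -> str:
    """Toy keyword classifier; a real system might use a trained model."""
    return "gardening" if "tomato" in text.lower() else "family"

# Group interactions by topic before transmitting them to the second processor.
by_topic = defaultdict(list)
for text in interactions:
    by_topic[classify_topic(text)].append(text)

# Present all interactions associated with one topic collectively ("chunked") to the second human.
for item in by_topic["gardening"]:
    print(item)
```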

56. The method of claim 55, wherein the one or more interactions classified as being associated with the first interaction topic are further sorted according to one or more of times the interactions were generated, interaction content sorted alphabetically, interaction content sorted numerically, and a pre-defined set of interaction content priorities.

57. The method of claim 55, wherein an additional interaction topic is classified, during the one or more interactions between the first human interaction device and the first human; and one or more additional topic interactions classified as being associated with the additional interaction topic are transmitted to the second processor.

58. The method of claim 55, wherein an additional human interaction topic is classified, during one or more additional human interactions between an additional human interaction device and an additional human; and one or more additional human interactions classified as being associated with the additional human interaction topic are transmitted to the second processor.

59. The method of claim 58, wherein the interactions and additional human interactions are sorted according to the first interaction topic and additional human interaction topic sorted alphabetically, the first interaction topic and additional human interaction topic sorted numerically, occurrence times of the one or more interactions and additional human interactions, identities of the first human and the additional human sorted alphabetically, interaction content sorted alphabetically, interaction content sorted numerically, and a pre-defined set of interaction content priorities.

60. A method to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; instantiating an artificial intelligence personality into the first human interaction entity, comprising installing the artificial intelligence personality with the first processor such that the artificial intelligence personality interacts with the first human via the one or more first interaction output devices; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating the artificial intelligence personality into the second human interaction entity, comprising installing the artificial intelligence personality with the second processor such that the artificial intelligence personality interacts with the second human via the one or more second interaction output devices; initiating, by the second artificial intelligence personality using the one or more second interaction output devices, a first interaction directed at the second human; transmitting, using the second electronic device, first interaction indicators to the first processor; presenting, using the one or more first interaction output devices, the first interaction indicators to the first human; sensing, using the one or more first sensors, a reaction by the first human; and transmitting, using the first electronic device, reaction indicators to the second processor.
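The interaction/reaction round trip of this claim can be sketched under the assumption of a simple message channel between the two entities (the in-process queues and all names below are illustrative stand-ins for the claimed communication interfaces, not the patented implementation):

```python
from queue import Queue

# Stand-ins for the communication channels between the two entities.
to_first_processor: Queue = Queue()
to_second_processor: Queue = Queue()

# The AIP at the second HIE initiates a first interaction directed at the second human
# and transmits first interaction indicators to the first processor.
to_first_processor.put({"indicator": "suggested starting a drawing game"})

# The first HIE presents the indicator to the first human, senses a reaction,
# and transmits reaction indicators back to the second processor.
indicator = to_first_processor.get()
sensed_reaction = "approval"   # e.g., inferred from a smile or a spoken "yes"
to_second_processor.put({"reaction": sensed_reaction, "about": indicator["indicator"]})

# The second HIE can adapt its next interaction based on the relayed reaction.
reply = to_second_processor.get()
print("First human reaction:", reply["reaction"])
```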

61. The method of claim 60, wherein the reaction by the first human is one or more of approval of the first interaction, disapproval of the first interaction, happiness, sadness, surprise, disbelief, fear, anger, excitement, anticipation, and vigilance.

62. The method of claim 60, wherein the reaction by the first human causes an additional interaction initiated by the second artificial intelligence personality directed at the second human.

63. The method of claim 62, wherein the additional interaction results in one or more steps to overturn the first interaction.

64. The method of claim 60, wherein performing the first interaction depends on one or more of an age of the second human, a skill level of the second human, a target skill level of the second human, a skill level of the first human, a pre-determined skill level, a location of the second human, a preference of the second human, a time of the first interaction, a presence of other humans in the environment of the second human, available computational resources, and available computational processing time.

Description:
SYSTEMS AND METHODS FOR TIME-SHARING AND TIME-SHIFTING INTERACTIONS USING A SHARED ARTIFICIAL INTELLIGENCE PERSONALITY

RELATED APPLICATION DATA

The present application claims priority to co-pending U.S. application Serial No. 16/902,168, filed June 15, 2020, and benefit of U.S. provisional application Serial No. 63/043,060, filed June 23, 2020, the entire disclosures of which are expressly incorporated by reference herein.

FIELD OF THE INVENTION

The present invention relates generally to systems and methods for substantially sharing an artificial intelligence personality (AIP; artificial personality, AP; artificial intelligence agent, AIA; or artificial human companion, AHC) among multiple human interaction entities (HIEs). The systems utilize techniques within the fields of computer programming, machine learning [including artificial intelligence (AI), artificial neural networks (ANNs), convolutional neural networks (CNNs), and deep learning], human-machine interfaces (HMIs), telecommunications, and cognitive sciences including psychology, linguistics, and learning. The systems may provide two or more users with intuitive machine companions that exhibit an expert knowledge base and a familiar, cumulative personality to motivate emotional and cognitive exchanges.

BACKGROUND

As the world moves toward an increasing reliance on distance communication (i.e., interpersonal communication in which the physical gap between participants is beyond the physiological limits of unaided human perception), there is a progressive need to make such interactions more efficient, effective, and socially acceptable. Currently, there is an extensive range of devices and software to facilitate distance communications. These include a wide span of telephonic devices, video conferencing, smart televisions, image exchange tools, texting, chat, instant messaging, paging devices, notification tools, remote classrooms, electronic billboards, and so on.

Notwithstanding the utility of such tools and applications, considerable time may be spent remaining connected, even when there is a desire to perform other tasks, or to at least “time-share” between remaining remotely connected (e.g., to a child or elderly individual) and performing those other activities or tasks (e.g., during work or travel). Those activities may also include “time-sharing” among two or more distance communication participants (e.g., a parent interacting with multiple children, a boss interacting with multiple employees, exchanges among multiple colleagues, etc.). Improvements to devices and processes that facilitate interacting at a distance have the potential to impact most major aspects of modern life including work, play, services support, education, and maintaining family and social connectedness.

During busy times, it is all too common to be interrupted, or to need to disconnect from an exchange with a remote participant, only to have the interruption or abrupt disconnection perceived by that participant as disinterest. Such perceptions may contribute to feelings of social isolation and/or of being less important than the source of the interruption. This is particularly important within interactions among parents and their children, grandparents and grandchildren, other relatives, exchanges that include mentally challenged individuals, neighbors, colleagues, and close friends. The situation may be further exacerbated if one participant, in particular, repeatedly terminates most social exchanges.

If such exchanges involve a large number of people, then interruptions or abrupt disconnections by an individual might go unnoticed. However, when exchanges involve a smaller number of participants, including just two, then the interruption or disconnection may appear obvious. Discernible interruptions may arise despite a desire on the part of an individual not to reveal to the one or more other participants within an exchange that they are disconnecting, especially if only briefly.

Along similar lines, a parent or guardian may, for example, wish to have a distant child perform one or more tasks that require an extended period of guidance and/or monitoring. Due to other commitments, the parent may have insufficient time to remain continuously connected to the child to instruct, track, and/or compliment the child during each phase of task performance. A sense of connectedness, even via distance communication, may have a strong influence on whether tasks are completed successfully, whether there is a shared sense of accomplishment or satisfaction, and/or whether there is cognitive and emotional learning while performing such tasks.

Similar situations, where some level of engagement (but not necessarily fully focused attention) may be required during the performance of one or more tasks, arise during portions of exchanges between teachers and students, supervisors and employees, workers and their colleagues, doctors and patients, as well as a wide range of other service providers and their clients. Such exchanges may benefit not only from the expert knowledge provided by a shared artificial intelligence personality, but also from the ability of an AIP to determine the best times to check in and reconnect as tasks are performed and/or when stumbling points are encountered.

The need for reconnecting, and its effectiveness, may also depend on the individual. Knowledge of previous successes and/or failures associated with similar tasks, the skill set of an individual, interests, certifications, age, and the anticipated degree of focus, as well as other cognitive and emotional factors (both historical and contemporaneous), are examples of considerations that may play a role in determining an optimal check-in frequency and when reconnecting may be most effective. Approaches are needed to make use of such knowledge effectively and acceptably (by all participants) to optimize distance communications.

Frequent and extended interpersonal communication generally benefits most individuals in society. However, there is a recognized particular need for fostering companionship among many lonely, isolated, and/or confined people. Individuals who generally lack social interactions, support, and regular contact from friends, family, or colleagues frequently become depressed; their health suffers; their life span may be reduced; and they sometimes even become suicidal. Groups of people with an increased tendency for suffering from these effects include the elderly, disabled, prisoners, institutionalized or hospitalized individuals, researchers in extreme conditions, and astronauts.

Among other attributes and capabilities, such tools and applications may be used to “time-shift” the exchange of blocks of information. In other words, the process of a sender generating and/or transmitting a packet of information does not need to be temporally aligned to the process of a recipient acquiring that information. For example, the transcribing and subsequent sending of an email or the recording of a voice mail, or even a simple hand-written letter, allows a recipient to review message content at a time (or at multiple times, if desired) of his or her choosing, effectively “time-shifting” the interaction.

Time-shifting the exchange of emails, texts, images and other media is a particularly valuable strategy when a recipient is not available to receive the information. For example, the recipient may be asleep, not near a telecommunications device, or at a location where telecommunications are not available. Further, a recipient may not be able to receive an exchange due to technical limitations. For example, transmissions may be noisy or interrupted, transmissions of large blocks of information may require excessive time, and/or substantial time may be required to transmit information over significant distances and/or via a large number of relay stations. A recipient may also simply choose not to receive or review, or not be aware of, an exchange at a particular time. For example, a recipient may be occupied responding to other exchanges, choose to respond only at certain times (e.g., in the evenings for personal emails), or be busy performing other activities.

The ability to time-shift information also facilitates an ability to cluster and consolidate information such that high-level decision-making may be more efficient and effective. Current theories in the fields of cognitive psychology and neuroscience suggest that so-called “working memory” is central to reasoning, and a strong guide to decision making and behavior. It is estimated that working memory (sometimes considered in the context of so-called short-term memory) has a capacity of about four to seven “chunks” of information in young adults (less in older adults and children) lasting about 2-15 seconds, where the range in both capacity and timing depends largely on the type of memory (e.g., language, spatial, number) used to measure retention. It is widely accepted that the prefrontal cortex plays a central role in working memory, particularly for cognitive processing (versus, for example, sensory perception).

Given these limiting capacities for human working memory, a valuable strategy for high-level information processing involves sorting and clustering information into categories and presenting such materials together as “chunks”. Time-shifting communications (whether from local or distant sources) so that presentation order is reorganized (e.g., from chronological order, or from an ordering based on the individuals who were the sources of information) into specific categories or topic areas enables such “chunking” or clustering strategies during information review. Time-shifting allows input from multiple sources, possibly at multiple locations and generated at multiple times, to be considered together over a single period of time (i.e., more consistent with the functioning of working memory).

Furthermore, searching for related information, or even maintaining a “feeling” that one has not considered all possible sources of information, often breaks one’s ability to focus during processes involved with synthesizing new information and/or decision-making. Knowing that topics are being presented in topic clusters (i.e., pre-sorted) avoids the need, or the perception of a need, at critical times to search information exchanges to ensure all inputs regarding a particular topic have been considered. Such strategies to categorize and present related materials together may be essential for optimal and efficient executive function, especially when considering complex problems with multiple dimensions.

“Time-shifting”, information “chunking”, and “time-sharing” (i.e., the ability to interact with two or more distance communication participants and/or perform additional activities while interacting with distance communication participants) may be valuable tools to improve the efficiency of distance communications and control, particularly involving activities requiring high-level decision making, complex information synthesis, and/or upper management. Improvements to devices and processes that facilitate effective interacting, particularly distance communications, have the potential to impact most major aspects of modern life including work, play, services support, education, and maintaining family and social connectedness.

New paradigms are required to improve the efficiency, effectiveness, productivity, and social acceptability of distance communications. Although artificial human companions have not yet fully replaced familiar, supportive social interactions among family members, loved ones, friends, counselors, or colleagues, AIPs instantiated within personal HIEs that are familiar and available to a user at any time may help bridge gaps in time and/or “time-shift” and/or “time-share” interactions when human companions are not available.

SUMMARY

In view of the foregoing, systems and methods are provided herein for substantially sharing an artificial intelligence “personality” (AIP), “character” or “companion” instantiated within two or more human interaction entities (HIEs) implementing a variety of real-time and/or non-real time communication experiences to support a sense of continuous and/or ongoing connectedness. An AIP is an understanding construct that may manage and perform a variety of communication experiences to enhance feelings of connectedness and understanding.

An AIP may be instantiated within two or more HIEs that interact with humans in a human, cartoon, or pet-like manner. In exemplary embodiments, HIEs may include robots, robotic pets, toys, simple-to-use devices, digital assistants, graphical user interfaces and avatars. HIEs may be physical (i.e., solid objects), virtual (i.e., displayed on a screen), or both (interacting simultaneously with a human, or transitioning from one form to another over time). HIE functions may be implemented in the form of a single device that comprises the majority of components necessary for processing, sensing and actuating during human interaction exchanges. Alternatively, HIE functions may be distributed among two or more physical devices that collectively comprise the elements necessary for processing, sensing, and actuating during human interaction exchanges where distributed devices may be referred to as human interaction accessories (HIAs). HIAs may generally, although not necessarily, utilize portable power sources (e.g., one or more batteries, one or more solar panels) and/or be interconnected using wireless communication interfaces and/or protocols (e.g., Wi-Fi, Bluetooth, etc.).
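One way to picture an HIE composed of distributed HIAs is the following minimal sketch; the class names, roles, and link types are hypothetical illustrations, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HumanInteractionAccessory:
    """Hypothetical HIA: one distributed device contributing sensing and/or actuation."""
    name: str
    roles: List[str] = field(default_factory=list)   # e.g., ["camera"] or ["speaker", "servo"]
    link: str = "Bluetooth"                          # wireless interconnect, per the text above

@dataclass
class HumanInteractionEntity:
    """HIE realized either as a single device or as a collection of HIAs."""
    form: str                                        # "physical", "virtual", or "hybrid"
    accessories: List[HumanInteractionAccessory] = field(default_factory=list)

# A distributed HIE: a robotic-pet body plus a tablet-based avatar, linked wirelessly.
hie = HumanInteractionEntity(
    form="hybrid",
    accessories=[
        HumanInteractionAccessory("pet-body", ["microphone", "servo"], "Bluetooth"),
        HumanInteractionAccessory("tablet-avatar", ["display", "speaker"], "Wi-Fi"),
    ],
)
```

In this sketch the HIE is simply a composition of accessories; the description above equally allows the same functions to reside in a single device.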

Systems may provide two or more users with machine companions that include an individualized familiarity with each user (enhancing acceptance and believability), intuitive interactions, a cumulatively acquired personality, an integrated knowledge base, and behaviors to motivate emotional and cognitive exchanges. The AIP may be periodically updated based on human interactions sensed by all, or a subset of, the HIEs as well as knowledge of historical and ongoing events. HIEs may continue to operate without interruption in the presence of telecommunications delays or interruptions, and/or the absence of one or more (distant) human participants. The system may improve a sense of connectedness, remove feelings of social isolation, improve learning, enhance enjoyment, and/or allow “time-shifted” exchanges among users.

In accordance with an exemplary embodiment, a method is provided to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating a first artificial intelligence personality into the first human interaction entity, comprising installing the first artificial intelligence personality with the first processor such that the first artificial intelligence personality interacts with the first human via the one or more first interaction output devices; acquiring, during an interaction between the first human interaction device and the first human, interaction data from the one or more first sensors; computing, with one or more artificial intelligence processors, a second artificial intelligence personality comprising a single cumulative personality based at least in part on the first artificial intelligence personality and the interaction data; and instantiating, with the second processor, the second artificial intelligence personality into the second human interaction entity.

In accordance with another exemplary embodiment, a method is provided to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; instantiating an artificial intelligence personality into the first human interaction entity, comprising installing the artificial intelligence personality with the first processor such that the artificial intelligence personality interacts with the first human via the one or more first interaction output devices; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating the artificial intelligence personality into the second human interaction entity, comprising installing the artificial intelligence personality with the second processor such that the artificial intelligence personality interacts with the second human via the one or more second interaction output devices; acquiring, during an interaction between the first human interaction device and the first human, interaction data from the one or more first sensors; determining, by the first processor based on the interaction data, a first human desire to stop interacting with the second human; transmitting, from the first processor to the second processor, the first human desire; and initiating, by the artificial intelligence personality using the one or more second interaction output devices, one or more pre-determined activities with the second human.

In accordance with still another exemplary embodiment, a method is provided to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; instantiating an artificial intelligence personality into the first human interaction entity, comprising installing the artificial intelligence personality with the first processor such that the artificial intelligence personality interacts with the first human via the one or more first interaction output devices; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating the artificial intelligence personality into the second human interaction entity, comprising installing the artificial intelligence personality with the second processor such that the artificial intelligence personality interacts with the second human via the one or more second interaction output devices; acquiring, during an interaction between the first human interaction device and the first human, interaction data from the one or more first sensors; determining, by the first processor based on the interaction data, one or more first human directives to the second human; transmitting, from the first processor to the second processor, the one or more first human directives; and initiating, by the artificial intelligence personality using the one or more second interaction output devices, the one or more first human directives to the second human.

In accordance with an exemplary embodiment, a method is provided to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; instantiating an artificial intelligence personality into the first human interaction entity, comprising installing the artificial intelligence personality with the first processor such that the artificial intelligence personality interacts with the first human via the one or more first interaction output devices; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating the artificial intelligence personality into the second human interaction entity, comprising installing the artificial intelligence personality with the second processor such that the artificial intelligence personality interacts with the second human via the one or more second interaction output devices; acquiring, during an interaction between the first human interaction device and the first human, first interaction data from the one or more first sensors; identifying, by the first processor, from the first interaction data, an action to be performed by the second human interaction entity upon sensing a condition for performing the action; transmitting, from the first processor to the second processor, indicators of the action and the condition; after receiving the indicators, acquiring, during an interaction between the second human interaction device and the second human, second interaction data from the one or more second sensors; and identifying, by the second processor, from the second interaction data, the condition for performing the action.

In accordance with another exemplary embodiment, a method is provided to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; instantiating an artificial intelligence personality into the first human interaction entity, comprising installing the artificial intelligence personality with the first processor such that the artificial intelligence personality interacts with the first human via the one or more first interaction output devices; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating the artificial intelligence personality into the second human interaction entity, comprising installing the artificial intelligence personality with the second processor such that the artificial intelligence personality interacts with the second human via the one or more second interaction output devices; classifying, during one or more interactions between the first human interaction device and the first human, a first interaction topic; transmitting, from the first processor to the second processor, the one or more interactions classified as being associated with the first interaction topic; and presenting collectively, with the one or more second interaction output devices, the one or more interactions classified as being associated with the first interaction topic. 

In accordance with still another exemplary embodiment, a method is provided to share an artificial intelligence personality among multiple human interaction entities to support social connectedness between two humans, comprising: providing, in proximity to a first human, a first human interaction entity comprising a first electronic device that includes a first processor, one or more first interaction output devices operatively coupled to the first processor, and one or more first sensors operatively coupled to the first processor; instantiating an artificial intelligence personality into the first human interaction entity, comprising installing the artificial intelligence personality with the first processor such that the artificial intelligence personality interacts with the first human via the one or more first interaction output devices; providing, in proximity to a second human, a second human interaction entity comprising a second electronic device that includes a second processor, one or more second interaction output devices operatively coupled to the second processor, and one or more second sensors operatively coupled to the second processor; instantiating the artificial intelligence personality into the second human interaction entity, comprising installing the artificial intelligence personality with the second processor such that the artificial intelligence personality interacts with the second human via the one or more second interaction output devices; initiating, by the artificial intelligence personality using the one or more second interaction output devices, a first interaction directed at the second human; transmitting, using the second electronic device, first interaction indicators to the first processor; presenting, using the one or more first interaction output devices, the first interaction indicators to the first human; sensing, using the one or more first sensors, a reaction by the first human; and transmitting, using the first electronic device, reaction indicators to the second processor.

Other aspects and features including the need for and use of the present invention will become apparent from consideration of the following description taken in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

A more complete understanding of the present invention may be derived by referring to the detailed description when considered in connection with the following illustrative figures. In the figures, like reference numbers refer to like elements or acts throughout the figures. The present exemplary embodiments are illustrated in the accompanying drawings, in which:

FIG. 1 is a flowchart illustrating an exemplary sequence of steps to substantially share a single AIP among two or more HIEs.

FIG. 2 illustrates a sequence of steps to re-compute an updated user state based on video input captured by one or more HIE cameras.

FIG. 3 illustrates a sequence of steps to re-compute an updated user state based on audio input acquired by one or more HIE microphones.

FIG. 4 is an example of updating the measured physiological condition of a user based on audio input acquired by a HIE.

FIG. 5 shows a process to re-compute an updated user state based on simultaneous data acquisition via multiple HIE modalities.

FIG. 6 illustrates a sequence of actuator steps for a HIE to perform a task in which actuator steps are modified by a user’s sensed state.

FIG. 7 is a simple example illustrating how the state of a user may be used to modify the actions of a HIE in which the awake versus sleep status of a user determines whether a notification action is performed.

FIG. 8 illustrates a timeline of HIE-human interactions and resultant re-computations of an AIP when telecommunications support rapid updating (e.g., less than one second).

FIG. 9 illustrates a timeline of HIE-human interactions and resultant re-computations of an AIP when telecommunications are slow or interrupted (e.g., greater than one second).

FIG. 10 is a flow chart of an exemplary sequence of steps to maintain an updated AIP when telecommunications are slow or interrupted (e.g., greater than one second).

FIG. 11 is an example of interconnections and layout of key elements supporting distance communication between two humans, augmented by human interaction entities (HIEs) instantiated with a shared artificial intelligence personality (AIP).

FIG. 12 shows some sensor and actuator components that may be included in one or more relatively simple, inexpensive human interaction accessories (HIAs) comprising an element of a human interaction entity (HIE).

FIG. 13A illustrates a young child being alerted via a human interaction accessory (HIA) to a desire by a contact to connect with her. FIG. 13B shows a scene from a distance communication session, augmented by a shared artificial intelligence personality (AIP), between the young child illustrated in FIG. 13A and her grandfather.

FIG. 14 is a flow chart of continuing activities supported by a shared artificial intelligence personality (AIP) when a participant engaged in distance communication redirects focus to another activity, causing the distance communication to be interrupted (for a defined period or indefinitely).

FIG. 15 is a flow chart of participant activities engaged in distance communication when a distant user provides one or more directives (i.e., a “to-do list”) that may be performed, aided by a shared artificial intelligence personality (AIP), while the distant user is disconnected.

FIG. 16A illustrates key elements that allow interactions between a user and a human interaction entity (HIE) and/or human interaction accessory (HIA) instantiated with an artificial intelligence personality (AIP) to be cast to a television.

FIG. 16B illustrates a scene within a home when distance communications augmented by a shared artificial intelligence personality (AIP) are cast to a large screen so that engagement and interactions may be shared with family or friends.

FIG. 17A illustrates a scene in which a grandparent initiates a card game (that may be augmented by a shared artificial intelligence personality) via distance communication.

FIG. 17B shows a scene in which the child participating in the card game illustrated in FIG. 17A continues to play the game, augmented by the shared artificial intelligence personality (AIP), with or without continued connectivity by the grandparent.

FIG. 18A shows an example of an untethered human interaction accessory (HIA) that includes a display showing a cartoon-like character to facilitate interactions with the shared artificial intelligence personality (AIP).

FIG. 18B demonstrates the use of a tablet as a human interaction entity (HIE) instantiated with a shared AIP by one participant during a distance communication session while simultaneously viewing the use of a different HIE platform, a toy-like human interaction accessory (HIA) instantiated with the shared AIP, by the other participant (depicted in FIG. 18A) during the interaction.

FIG. 19A shows a scene from a scenario in which the parents and friends of a child set up, using a shared artificial intelligence personality, conditional actions consisting of personalized audiovisual clips (from each parent or friend) to be displayed to the child whenever he/she points toward an image of an animal.

FIG. 19B is a follow-on scene from the scenario illustrated in FIG. 19A in which the previously established actions are performed upon meeting their associated conditions each time the child points toward an image of an animal.

FIG. 20 is a flowchart showing, for efficient and effective review, an example of sorting and clustering interactions with multiple humans by a shared artificial intelligence personality according to topic areas.

FIG. 21 illustrates a timeline of interactions involving a shared artificial intelligence personality in which interactions with two individuals are time-shifted, sorted and clustered for efficient review by a third individual who can then quickly formulate responses that may be sent to any or all members of the group.

FIG. 22 is a flowchart showing the accumulation of conditional actions from one or more individuals and processes to determine when those conditional actions are executed by a shared artificial intelligence personality.

FIG. 23 is a flowchart showing the accumulation of conditional actions and insertion of one or more alternative actions by an artificial intelligence personality when no conditions are met and a response is expected.

FIG. 24A is a timeline showing the beginning of a time-shifted game of simplified checkers facilitated by an artificial intelligence personality.

FIG. 24B is a timeline, continuing from FIG. 24A, showing the conclusion of a time-shifted game of simplified checkers facilitated by an artificial intelligence personality.

DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS

In exemplary embodiments, an artificial intelligence personality (AIP) is substantially shared among two or more human interaction entities (HIEs). HIEs facilitate a wide range of real time and non-real time communication experiences to support a sense of ongoing social connectedness. The system may be implemented using two or more HIEs (including HIAs) and, optionally, one or more remote and/or distributed processors to compute updated AIPs and/or sequence telecommunications. AIPs may be periodically updated based on human interactions sensed by the two or more HIEs, events external to the environment of the HIEs, or following pre-determined intervals. AIPs may also be updated based upon the availability of computational resources and/or some other form of direction provided by other parties such as technicians, psychologists, counselors, teachers, advisors, supervisors, other humans, and/or other AIPs.

The system may provide two or more users with intuitive machine companions that exhibit an integrated knowledge base and a personality cumulatively acquired from all, or a subset of, interactions, or simulated or virtual interactions with users. Experiences may reflect and motivate appropriate social interactions to evoke emotions and reinforce bonding or connectedness, and serve as regular reminders of the social pact that exists between the humans, even when separated by some distance. Since the shared AIP instantiated within HIEs “knows” (as allowed by each user) most or all activities by human members of a network, HIEs may decide what, how, when, and why to share information, experiences, emotions, or interactions with any member(s) of a group to amplify social interchange.

A vast array of social interactions sensed by one HIE may be shared with one or more other humans via a shared AIP instantiated within their HIEs (and HIAs). Some examples of “milestone” and other events that may trigger such sharing via one or more remote HIEs include: receiving a raise, a child taking a first step or losing a first tooth, acing a test, having a birthday, winning a ball game, eating something delicious, listening to a great song, watching a cool movie, capturing an awesome photograph, playing a practical joke, and so on. Such sharing, particularly when separated by long distances, may be a vital part of feeling “connected.”

Reactions and behaviors of the humans may be shared via curation and/or interpretation by a shared AIP instantiated within HIEs to engage in an educational interchange of information, demonstrations of learning, and/or advancement of thought regarding a topic area. In other words, the entities or the humans may establish, maintain, and/or swap roles as students versus teachers or mentors for remote educational experiences. An AIP may be taught by one or more humans and pass this new learning on to other humans via one or more coupled HIEs. Teaching may be formal or informal, including tricks, dances, games, songs, news, forms of affection, friendly mischievous activities, and forms of language.

In addition to interpreting, curating, and mediating interactions, a shared AIP may, at times, recognize needs and initiate actions (including facilitating the establishing of remote connections) to evoke certain behaviors, to assure engagement and shift emotions.

In other words, HIEs instantiated with a shared AIP may play the role of counselor and/or spiritual guide. Teaching of skills or knowledge to evoke emotion may be particularly well-suited to the social exchanges necessary to support an ICE (i.e., isolated, confined and extreme condition) human.

In further embodiments, a shared AIP may initiate an exchange with a human based on: 1) learning about an event during an interaction with another human sharing the AIP, 2) an assessment that a “critical mass” of multiple events worthy of being reported has transpired with another human sharing AIP exchanges, 3) an assessment that a “critical mass” of multiple events has transpired cumulatively among multiple humans sharing the AIP, 4) periodically, for example, at the same time each day or week, and/or 5) upon declaration of an important or emergency situation by one or more users, or the shared AIP.
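To illustrate how such initiation triggers might be evaluated in practice, a minimal sketch follows; the event structure, function names, and threshold values are assumptions introduced here for illustration and are not part of the disclosure above.

```python
# Illustrative sketch only: one way a shared AIP scheduler might decide whether to
# initiate an exchange with a given user, based on the trigger categories above.
# All names and thresholds are assumptions for illustration.
import time
from dataclasses import dataclass, field

@dataclass
class ReportableEvent:
    source_user: str     # user whose interaction produced the event
    importance: float    # 0.0 (trivial) .. 1.0 (declared emergency)
    timestamp: float = field(default_factory=time.time)

def should_initiate_exchange(target_user: str,
                             pending: list,
                             last_contact: float,
                             per_user_mass: float = 2.0,
                             group_mass: float = 4.0,
                             periodic_interval_s: float = 24 * 3600) -> bool:
    """Return True when any of the trigger conditions (1)-(5) above is satisfied."""
    now = time.time()
    # (5) declared important or emergency situation: always initiate
    if any(e.importance >= 1.0 for e in pending):
        return True
    # Only events learned from other humans sharing the AIP count toward (1)-(3)
    others = [e for e in pending if e.source_user != target_user]
    # (1) a single newly learned, important event from another user
    if any(e.importance >= 0.8 for e in others):
        return True
    # (2) "critical mass" of events from one other user
    by_user = {}
    for e in others:
        by_user[e.source_user] = by_user.get(e.source_user, 0.0) + e.importance
    if any(total >= per_user_mass for total in by_user.values()):
        return True
    # (3) cumulative "critical mass" across all other users
    if sum(e.importance for e in others) >= group_mass:
        return True
    # (4) periodic contact, e.g., at the same time each day
    return (now - last_contact) >= periodic_interval_s
```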

HIEs (more precisely, the shared AIP instantiated within HIEs) may curate gameplay between remote humans. A HIE might observe a human in its local environment, making a move on a board and then display the move to a human associated with a remote HIE (in essentially real time, or “time-shifted” at some later time), invoking a response from the human for another move and communicating the move. As HIEs engage in supporting social connection, either of the HIEs may add their own commentary, or play a role in the gameplay. For instance, a HIE could chuckle when its human makes a move, reward the human with a song, or collaborate and guide its human in a supportive or sneaky manner.

HIEs (human interaction entities) may be 1) physical, 2) virtual, or 3) a combination of physical and virtual, particularly at different times or within different environments (e.g., a physical device when a user is seated and a related virtual avatar displayed on a mobile device when moving about). Physical HIEs may include robots (moveable or substantially stationary), robotic pets, robotic toys (e.g., dolls, teddy bears, baby-like figures, mechanical constructions), and human-like objects. Virtual HIEs may have features found in avatars, digital assistants, cartoon characters, or synthesized persons or pets that may be displayed on computer monitors, screens, laptops, mobile devices (phones, tablets, smart watches, etc.) or other display devices including augmented reality, mixed reality and virtual reality headwear. Virtual HIEs may also be displayed as holograms, holographic-like projections, light-field projections, and other techniques that make video objects appear 3-dimensional.

In exemplary embodiments, HIEs may include environmental sensors (i.e., sensors measuring one or more elements within the environment of the HIE) and human interaction sensors (i.e., sensors measuring interactions between the HIE and a user). Environmental sensors may include cameras (i.e., directed toward the environment of a user), light sensors, thermal sensors, motion sensors, accelerometers, global positioning system (GPS) transceivers, microphones, infrared (IR) sensors, galvanometric sensors, pressure sensors, switch sensors, magnetic sensors, proximity sensors, date and time clocks, Bluetooth transceivers, and Wi-Fi transceivers. Environmental sensors may also include devices worn by a user (but directed at the environment) such as smart glasses (e.g., Google Glass), augmented reality headwear, earbuds (e.g., with a microphone), smart watches, and so on. Human interaction sensors may include cameras (i.e., directed toward the user), thermal sensors, motion sensors, accelerometers, microphones, infrared (IR) sensors, galvanometric sensors, heart rate sensors, electrocardiogram sensors, electrooculogram sensors, electroencephalogram sensors, pulse oximeters, pressure sensors, magnetic sensors, activity monitoring devices, computer mice, joysticks, keyboards, touch screens, and proximity sensors. Optionally, both environmental and human interaction cameras may include the ability to pan and zoom.

In cases when a HIE comprises a physical device (or a collection of interacting and/or coupled human interaction accessories, HIAs), the acceptability of human interactions may be enhanced by movements, gestures, information displays, pointing, sounds, and other forms of HIE/HIA output or interaction. Actuators or other output components may include one or more video display devices, hologram display devices, holographic-like projectors, speakers, propulsion systems, servos, motors, magnetic field controllers, orientation controllers, haptic controllers, light and other forms of pointing devices, switch controllers, actuators for appendage control, and controllable tactile surfaces.

In exemplary embodiments, HIE outputs may change, depending on time and circumstances. For example, a full range of HIE outputs may be utilized in a stationary environment while video-only exchanges are performed in a mobile setting and/or audio-only outputs may be produced, for example, while driving. Generally, more freedom of movement may be available using untethered HIEs and/or HIAs (i.e., not connected to any fixed wired power or wired telecommunications source). Tethered HIEs may allow for continuous interaction (with fewer concerns regarding power consumption, battery life, etc.) and/or more sophisticated interaction devices (e.g., holographic displays, projected images, etc.). In further exemplary embodiments, a combination approach may be employed, where wireless telecommunications are used during most interchanges during the daytime and the recharging of batteries and updating of shared AIP data are additionally performed when tethered (e.g., while asleep).

For the purposes of the present application, an AIP (artificial intelligence personality) is an understanding construct that interacts with one or more humans in a human, cartoon, or pet-like manner. An AIP may include a background or database of: 1) information, 2) memories and/or 3) experiences. At the core of an AIP is an artificial intelligence that may be implemented by combinations of traditional programming and forms of machine learning, including convolutional neural networks and/or other deep learning techniques. Neural networks may encompass large components of an artificial intelligence, such as control of actuators (of a HIE or HIA) to express a wide range of behaviors; or be segmented into sub-components where each subsystem performs specific tasks, such as emotion recognition, searching strategies to acquire new information, speech recognition, word translation, speech formation, facial feature recognition, gesture recognition, animation display, control of articulated movements, and so on.

In exemplary embodiments, AIP information may be: 1) embedded within machinery (e.g., software, firmware and/or neural networks), or 2) incorporated by including the capability of being able to search for information when needed using telecommunications such as searching the internet and/or so-called “cloud.” Some information may be specifically taught to an AIP, such as the birth dates of all users within a network as well as their relatives and acquaintances. Other information may be known and available more globally, accessed via internet search strategies that are known in the art. Searching and selecting information along with the ability to synthesize new information from such multiple sources greatly expands the “intelligence” component of an AIP.

AIP memories include those gathered using device sensors from the environment of one or more users. For example, the overall locations of a HIE (using, for example, GPS methods known in the art and/or forms of localization using object recognition to identify objects at known relative locations within video images) recorded over time and/or, for example, the time of day that a wake-up alarm frequently rings may become an AIP memory. An aspect of the AIP is an ability to store selected memories (e.g., as permitted by each user) from the environments of two or more users. Such a common database may allow more meaningful behavioral interactions to be enacted, enabling a shared AIP to more closely link the interactions of its users, even when one or more users are separated by geographic distance and/or availability.

In further embodiments, AIP memories and experiences include those acquired using human interaction sensors as a result of interactions with AIP users. Such interactions are often multi-modal in nature, involving inputs from a number of sensors (audio, visual, tactile, etc.), sensing over a wide range of spatial scales (e.g., camera sensors that detect small movements of the eyes, larger movements of the head, or gross gestures observed when viewing the entire body), and over a wide range of time scales (from milliseconds to months). AIPs may be updated, partially or fully, based upon the availability of new information and/or computational resources; and/or other forms of direction provided by other parties such as technicians, psychologists, counselors, teachers, advisors, supervisors, other humans, and/or other AIPs. In other words, the AIP may be able to more effectively share social interactions among users as a result of a common database of memories and experiences acquired from extensive interactions with each user individually.

More specifically, interaction data may include the following (see also the data-record sketch further below):

1. data acquired from one or more of the environmental sensors,

2. data acquired from one or more of the human interaction sensors,

3. physical states of one or more humans within the vicinity of a HIE,

4. physiological states of one or more humans within the vicinity of a HIE,

5. cognitive states of one or more humans within the vicinity of a HIE,

6. emotional states of one or more humans within the vicinity of a HIE,

7. changes in the physical, physiological, cognitive, or emotional states of one or more humans within the vicinity of a HIE,

8. one or more spoken words within the vicinity of a HIE,

9. one or more recognized objects within images acquired by a HIE, and

10. one or more gestures performed by one or more humans within the vicinity of a HIE.

In additional embodiments, AIP memories and experiences may also include those acquired using HIE/HIA sensors as a result of interactions with one or more other HIEs. As with HIE-human interactions, physical HIE-HIE interactions are typically multi-modal in nature. HIE-HIE interactions may be between or among HIEs that share an AIP, or interactions may be among HIEs that are instantiated with different AIPs. The ability to interact with AIP memories and experiences may be greatly enhanced by having HIEs project historical image, video and/or audio recordings of interactions.
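As an illustration of how the ten categories of interaction data listed above might be packaged for queuing and transmission, a minimal sketch is shown below; the record layout and field names are assumptions, not a format defined in this description.

```python
# Illustrative sketch only: a compact record type capturing the ten categories of
# interaction data enumerated above so that they can be queued for AIP updates.
# Field names are assumptions, not terminology from this description.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class InteractionRecord:
    hie_id: str                                               # which HIE/HIA captured the data
    environmental_data: dict = field(default_factory=dict)    # (1) environmental sensor readings
    human_sensor_data: dict = field(default_factory=dict)     # (2) human interaction sensor readings
    physical_state: Optional[str] = None                      # (3) e.g., "seated", "walking"
    physiological_state: Optional[str] = None                 # (4) e.g., "awake", "elevated heart rate"
    cognitive_state: Optional[str] = None                     # (5) e.g., "focused", "confused"
    emotional_state: Optional[str] = None                     # (6) e.g., "happy", "stressed"
    state_changes: list = field(default_factory=list)         # (7) deltas in states (3)-(6)
    spoken_words: list = field(default_factory=list)          # (8) recognized speech
    recognized_objects: list = field(default_factory=list)    # (9) objects in acquired images
    gestures: list = field(default_factory=list)              # (10) gestures by nearby humans
```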

In addition, HIE-HIE interactions may not necessarily involve physical interactions. In other words, HIE-HIE interactions, particularly involving different AIPs, may occur in a virtual space (and/or over an accelerated time-frame compared to physical interactions). HIEs may also interact autonomously with the humans associated with HIEs instantiated with distinct AIPs, or with humans (pets, or other entities) that are not associated with any HIE. Once such HIE interactions occur, consequences and/or results of the interactions may be conveyed by the HIEs to their human counterparts (i.e., users).

In exemplary embodiments, an AIP may be designed to substantially share a single AIP among two or more HIEs (including HIAs). Even with a substantially common AIP knowledge base, each user may be the recipient of a somewhat different AIP experience for a number of possible reasons:

1. During periods when there is a delay or interruption in communications associated with the distribution of updated AIPs, a “local” AIP may have recently acquired experiences that have not yet been transmitted to one or more “remote” AIPs.

2. An AIP for one or more individuals may be customized or tailored, for example, to accommodate special needs of an individual.

3. The sensed environment may require differing responses to the same interactions, for example to broadcast (audio) more loudly to accommodate a noisy environment.

4. While the AIP is a single, cumulative personality that is updated and adapts based upon new experiences, its interactions with users may vary based upon the specific circumstances, contexts, preferences, and variable performance of each user. For example, if an AIP is aware that a particular user is not interested in sports, then, although another user may be updated by the AIP with recent sports scores, such scores may be offered to the first user only upon request. Along similar lines, an AIP may suggest accompanying a user while playing a musical instrument only if the AIP has been made aware that the user is capable of playing.

5. As noted above, not all sensed information may be allowed to be distributed among all users. For example, acquired knowledge of financial or medical records and measurements associated with any medical conditions (including undiagnosed, potential conditions) may be restricted to each individual user.

This may include actions from which the sensitive knowledge might be inferred, such as discussions or physical interactions that would point to such a sensitive condition.

In exemplary embodiments, AIP interactions may be combined with archived data and sensed inputs to perform a substantially continuous assessment of human health and performance factors. Health assessments may include the monitoring of symptoms resulting from pre-existing conditions and/or the detection of new health issues.

Performance assessments may include: 1) behavioral health (psychological and physiological well-being), 2) cognitive performance (memory and attention), 3) psychosocial performance (psychological strategies and social exchange), 4) task performance (goal-directed activities), and/or 5) teamwork (coordination and goal achievement). Such substantially continuous assessments may be performed in a covert manner (i.e., measurements particularly of the timing and content of human responses during the normal course of AIP interactions) or overtly, for example, by periodically performing performance tests that may be in the form of games or quizzes.

Additionally, once determined, one or more AIPs may be used within efforts to help mitigate the effects of some aspects of degraded human health and/or performance. For example, mitigation may include countermeasures that reduce the effects of separation of HIE users from their family and friends. Such countermeasures may be incorporated within the instantiation of one or more AIPs and/or include guidance from human sources (e.g., support personnel, advisors, doctors and/or other medical professionals).

In further embodiments, substantially continuous assessment of human health and performance factors may be used to assess the effectiveness of different countermeasures and their timing. Assessing the effectiveness of countermeasures may include measures involving one or more AIPs as well as those initiated and maintained by human sources. In effect, the substantially continuous assessment of performance during (and following) the application of countermeasures provides a feedback loop that may be used to enhance the performance of AIPs to maintain human health and performance. Enhanced performance of AIPs to maintain health and performance levels further enhances confidence, and subsequently outcomes, by AIP users.

In further exemplary embodiments, both the maintenance of (particularly mental) health and treatments of health conditions may be augmented by tasks, problem-solving interactions, and goals that are presented or augmented by the AIP. Inspired users of HIEs will strive for mastery of tasks, problems and/or goals. Mastery is motivated by the intrinsic value of achieving, learning, and self-improvement over the course of an experience. Providing positive experiences and connections improves expectations for successful outcomes and increases positive mood, contributing to positive feelings of hope, pride and gratitude.

In further exemplary embodiments, a user interacting with one HIE may be a Young Loved One, or YLO, such as a grandchild, child, niece, nephew, godchild or other youth. A user interacting with another HIE may be an Older Loved One, or OLO, such as a grandparent, parent, aunt, uncle, godparent, or other older person. The AIP may be developed to support the goal of achieving or improving the feelings of connectedness between a single YLO and a single OLO, multiple YLOs and a single OLO, multiple OLOs and a single YLO, or multiple YLOs and OLOs. The interactions between the users and their respective HIEs may include game playing, storytelling, sharing confidences, instruction, creating things, sharing ideas, teaching and learning, bonding activities, and other exchanges motivating feelings of connectedness, personal value, significance, relevance, importance, uniqueness, mutual respect, empathy, understanding, caring, and compassion.

FIG. 1 is a flowchart illustrating an exemplary sequence of computational and/or machine learning steps to substantially share a single AIP among two or more HIEs. HIE1 interacts with one or more users in the vicinity or environment of HIE1, and HIE2 interacts with one or more users in the vicinity or environment of HIE2. If an interaction occurs that includes HIE1 10a, then the results of this interaction (e.g., memories, experiences, new knowledge, skills) may be added to an interaction database 10b.

Similarly, if an interaction occurs that includes HIE2 11a, then the results of this interaction may be added to the shared interaction database 11b. Optionally, and more generally to include any number of users and HIEs, if interactions occur that include HIEN 12a, then the results of these interactions (e.g., memories, experiences, new knowledge, skills) may be added to a shared interaction database 12b.

As described in more detail below, interactions may involve multimodal inputs (see, e.g., FIG. 5) that generate a wide range of new knowledge, skills, and inferences. For example, a sentence spoken to a HIE by an interacting human may contain new information within the content of the words, convey several different emotional components (e.g., happiness, excitement, interest) in the way the words were spoken, indicate various physiological conditions (e.g., the speaker is awake and active), and/or acknowledge a cognitive understanding of previous actions or words spoken by the HIE. All such knowledge, skills, and inferences are held in the interaction database until they are used to update an AIP and/or user state (with timing caveats as described below, particularly in FIG. 10).

In general, the updating of an AIP 15 with new interaction data 14 may be computationally expensive and involve remote (i.e., not in the vicinity of a HIE device or display) processing resources. Thus, a machine decision 13 whether to update an AIP may be based on a number of factors including: 1) the number of recent interactions, 2) interaction(s) content (e.g., importance, expressed excitement, social relevance related to shared AIP users), 3) availability of processing resources, and 4) elapsed time since the most recent AIP or user state update.
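A minimal sketch of such a decision (block 13), assuming simple numeric thresholds for the four factors listed above, might look like the following; the thresholds and the scoring scheme are illustrative assumptions.

```python
# Illustrative sketch only: the "update AIP?" decision (block 13 in FIG. 1) expressed
# as a rule over the four factors listed above. Thresholds are assumptions chosen
# purely for illustration.
def should_update_aip(num_recent_interactions: int,
                      max_interaction_importance: float,   # 0.0 .. 1.0
                      processing_available: bool,
                      seconds_since_last_update: float,
                      max_update_interval_s: float = 3600.0) -> bool:
    # (3) remote/distributed processing resources must be available
    if not processing_available:
        return False
    # (2) highly important content (e.g., expressed excitement, social relevance)
    if max_interaction_importance >= 0.9:
        return True
    # (4) too long since the most recent AIP or user state update
    if seconds_since_last_update >= max_update_interval_s:
        return True
    # (1) otherwise require enough accumulated interactions to justify the cost
    return num_recent_interactions >= 10
```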

Some AIP updates 15 may involve updating only a subset of the AIP resources (e.g., sub-divided neural networks, memories, classifiers associated with different sensors). Updates may also involve the updating of user states 15. User states include the physical, physiological, cognitive and emotional status of a user. User states are derived data and are not strictly a part of the personality of an AIP, but user state may be used to modify the responses and actions of an AIP (see FIG. 6). It is typically most convenient to update a user state 15 using the cumulative interaction database 14 at the same time as updating and distributing AIPs.

As described in more detail within descriptions associated with FIGS. 9 and 10 below, when dealing with situations in which there are significant delays in distributing AIPs, transmitted updates of AIPs must contain sufficient information to allow an updated AIP to be re-computed based on the transmitted data set and the prior (i.e., not yet updated) AIP from which the new AIP was computed. This may include all sensor data leading up to the computation of the new AIP, specific AIP (network) differences between the prior and newly computed AIP, the entire new AIP (expensive from a bandwidth perspective), or sufficient derived data (e.g., changes in physical, physiological, cognitive and/or emotional states; gestures; recognized objects; spoken words; analyzed movements) to compute an updated AIP.
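One hedged way to represent such a transmitted update is sketched below; the packet layout, field names, and versioning scheme are assumptions intended only to show that a receiver needs both the transmitted data set and the prior AIP from which the new AIP was computed.

```python
# Illustrative sketch only: a transmitted AIP update carrying enough information for
# a receiving HIE to re-compute the new AIP from the prior AIP it already holds.
# Field names and the use of a "universal" corrected time (UCT) stamp follow the
# description above but are assumptions, not a format defined by this description.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIPUpdatePacket:
    base_aip_version: str          # identifier of the prior AIP this update was computed from
    new_aip_version: str           # identifier of the resulting AIP
    uct_timestamp: float           # "universal" corrected time of computation (see FIGS. 9-10)
    raw_sensor_data: Optional[bytes] = None   # option 1: all sensor data leading to the update
    network_deltas: Optional[bytes] = None    # option 2: specific AIP (network) differences
    full_aip: Optional[bytes] = None          # option 3: the entire new AIP (bandwidth-expensive)
    derived_data: dict = field(default_factory=dict)  # option 4: state changes, gestures, words, etc.

    def is_applicable_to(self, local_aip_version: str) -> bool:
        """A receiver may only apply this update if it holds the prior AIP it was built from."""
        return local_aip_version == self.base_aip_version
```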

FIG. 2 illustrates the monitoring of a human user 20 using one or more cameras 21 that may be components of a HIE. The one or more cameras (optionally including distance sensing elements) may be fixtures within a robot or pet-like device, or cameras may, for example, be embedded within mobile devices such as mobile phones, tablets, or laptops. Images 22 acquired from the one or more cameras 21 may be processed by classifier networks 23 and/or other image processing techniques that are known in the art to extract identified objects (e.g., head, eyes, mouth, arms, neck, fingers, torso), movements of objects (particularly relative to each other), and conditions that are associated with the appearances and/or movements of objects within the sensed video field.

In exemplary embodiments, user conditions may be classified based on the appearance and movements of objects within video fields. In FIG. 2, for the purposes of describing the general processes to determine and monitor the overall state of a user, two steps (user conditions 24a, 24b and user state 25a, 25b) are illustrated as overall steps to determine or update a final user state from classifier 23 results. However, in some instances, in order to determine a state of a user, more than just the observed conditions must be taken into account. For example, the classification 23 within images 22 of a frown or squint may be an indication of uncertainty, inability to see an object, intense sunlight, frustration, intense focus, or stress. These may be determined as potential current user “conditions” 24b using image classification techniques. When taking into account a user activity, such as performing a particularly challenging calculation, a user state of “intense focus” (and perhaps “uncertainty”) may be determined from the possible user conditions. In some simpler cases (e.g., awake versus asleep), classifications may be unambiguous (e.g., not requiring context) and overall user state may be determined directly from classifiers (as illustrated in FIG. 5).

Both user conditions 24b and state 25b may be described in terms of categories that include: physical, physiological, cognitive and emotional components. Based on HIE observations and other measurements, current user conditions 24a and state 25a may be updated to new user conditions 24b and an updated user state 25b. As described more fully in descriptions of FIGS. 6 and 7, the state(s) of one or more users may modify the activities and exchanges of an AIP, particularly as a function of time, and may be a portion of an AIP’s memory; however, one or more users’ states are not a part of an AIP. The AIP has a substantially shared personality that is distinct from its users.
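A minimal sketch of this two-step flow (images to candidate conditions 24b, then a context-resolved user state 25b) is shown below; the classifier interface and the context rules are placeholders assumed for illustration rather than the trained networks 23 described above.

```python
# Illustrative sketch only: the two-step flow of FIG. 2 (image -> candidate user
# conditions -> context-resolved user state). The condition classifier and the
# context rules below are placeholder assumptions.
from typing import Callable

def update_user_state_from_video(frame,                           # one image (22) from a HIE camera (21)
                                 condition_classifier: Callable,  # returns candidate conditions (24b)
                                 current_activity: str,           # context used to disambiguate
                                 current_state: dict) -> dict:
    # Step 1: candidate conditions from appearance/movement of recognized objects
    candidate_conditions = condition_classifier(frame)   # e.g., {"frown": 0.8, "squint": 0.6}

    # Step 2: resolve ambiguous conditions using context (a frown during a hard
    # calculation suggests "intense focus" rather than frustration or bright sunlight)
    new_state = dict(current_state)
    if candidate_conditions.get("frown", 0.0) > 0.5:
        if current_activity == "challenging_task":
            new_state["cognitive"] = "intense focus"
        else:
            new_state["emotional"] = "frustration"
    # Unambiguous conditions (e.g., eyes closed for an extended period) map directly to state
    if candidate_conditions.get("eyes_closed", 0.0) > 0.9:
        new_state["physiological"] = "asleep"
    return new_state
```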

In further exemplary embodiments, FIG. 3 illustrates the monitoring of a human user 20 using one or more microphones 31 that may be components of a HIE. In the case of HIE audio input 31 from a user 20, new information may be discerned from both the content of the audio and how the audio is spoken. Thus, audio signals may be processed by multiple pathways, for example, to extract the content of speech using speech recognition techniques 32 that are known in the art; and classification techniques 33b to extract emotional content from the audio signals. The speech content produced by the speech recognition classifier 32 may be further processed 33a to extract meaning and other information that may be used to update the current condition of the user (i.e., condition 24a updated to condition 24b), and may also be used to update the memories and shared experiences of the AIP (not illustrated in FIG. 3). The new user conditions 24b may be used to update the current user state 25a to an updated user state 25b.

FIG. 4 extends concepts in FIG. 3 to demonstrate the processing and consequences of uttering a specific phrase: “I don’t feel well.” The phrase spoken by a user 20 is detected by one or more microphones 31, processed using speech recognition techniques 32, and classified for both speech content 33a and emotional content 33b. The results of these classifications are used to update both the user state 40 and the AIP state 43.

The information content of the spoken phrase in particular may be used to update a physiological state of the user 40. More specifically, as a consequence of uttering the phrase, one or more factors within the intelligence that describes the user state are adjusted to instantiate a “sick” condition 42 from the degree of wellness versus sickness 41b that was previously known about the user. Wellness versus sickness 41b is one of the physiological factors that include asleep versus awake 41a, relaxed versus stressed 41c, and a very large number of others 41d that make up the physiological state of a user. On the other hand, knowledge that the user is sick, the degree of perceived sickness, the time period over which the sickness appears to be occurring, and any other information about the sickness may be incorporated into the memories, experiences, skills, and knowledge base of the substantially shared AIP.
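The dual-path audio flow of FIGS. 3 and 4 might be sketched as follows; the speech recognizer and emotion classifier are injected placeholders, and the dictionary-based user state and AIP memory are simplifying assumptions.

```python
# Illustrative sketch only: the dual-path audio flow of FIGS. 3 and 4. The recognizer
# and emotion classifier are placeholder callables (assumptions); the wellness factor
# mirrors element 41b in FIG. 4.
from typing import Callable

def process_utterance(audio,                       # samples from a HIE microphone (31)
                      recognize_speech: Callable,  # path 1: content of the audio (32, 33a)
                      classify_emotion: Callable,  # path 2: how the audio was spoken (33b)
                      user_state: dict,
                      aip_memory: list) -> None:
    text = recognize_speech(audio)                 # e.g., "I don't feel well"
    emotion = classify_emotion(audio)              # e.g., {"distress": 0.7}

    # Update the physiological portion of the user state (40), e.g., wellness 41b
    if "don't feel well" in text.lower():
        user_state["wellness_vs_sickness"] = "sick"        # instantiate condition 42
    if emotion.get("distress", 0.0) > 0.5:
        user_state["relaxed_vs_stressed"] = "stressed"      # factor 41c

    # Separately, fold the content into the shared AIP's memories and knowledge (43)
    aip_memory.append({"spoken": text, "emotion": emotion})
```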

In further exemplary embodiments, FIG. 5 demonstrates the simultaneous acquisition of multimodal data from a number of sensors 51a, 51b, 51c, 51d, 51e. For example, expression data may be acquired from a user’s face 50a using one or more cameras directed at the face 51a. Vocalizations by the user 50b may be collected using one or more microphones 51b. Eye pointing or gaze directions 50c may be acquired based on video collected by one or more cameras directed at one or both eyes 51c. User intent may be determined based on the sensing of a user’s finger 50d on a touch-sensitive screen 51d.

More generally, data may be acquired from any number of sensors 50e including: 1) device sensors that measure one or more elements within the environment of the HIE and/or user, and 2) human interaction sensors that measure interactions with the one or more human users. Device sensors may include one or more cameras, light sensors, thermal sensors, motion sensors, accelerometers, global positioning system (GPS) transceivers, microphones, infrared (IR) sensors, galvanometric sensors, pressure sensors, switch sensors, magnetic sensors, proximity sensors, date and time clocks, Bluetooth transceivers, and Wi-Fi transceivers. Human interaction sensors may include one or more cameras, thermal sensors, motion sensors, accelerometers, microphones, infrared (IR) sensors, galvanometric sensors, heart rate sensors, electrocardiogram sensors, electrooculogram sensors, electroencephalogram sensors, pulse oximeters, pressure sensors, magnetic sensors, computer mice, joysticks, keyboards, touch screens, and proximity sensors.

Each input modality may require differing degrees of pre-processing prior to being used as an input to a user state classifier 53 (possibly also involving one or more user conditions classifiers, see, e.g., description of FIG. 2). For example, object recognition 52a (of eyes, nose, mouth, chin, etc.) may be applied to data acquired by a camera 51a pointed at a face 50a. Speech recognition and/or classification of emotional content 52b (see FIG. 3) may be applied to audio data. High spatial resolution eye tracking 52c may be applied to video images of one or both eyes. The contents of a touch-sensitive display 51d in the region being pointed to by a finger 50d may be determined 52d and passed on to the state classifier 53.

Simultaneously considering multimodal inputs from a constellation of sensors 51a, 51b, 51c, 51d, 51e enhances the robustness and accuracy of classification schemes 53.

Once new conditions and/or states are determined, they may be used to update the previous user state 54 to a new user state 54b and to contribute to the knowledge, skills, memories and experiences of the shared AIP (see FIG. 4).
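A minimal sketch of multimodal fusion (block 53), assuming a simple weighted vote over per-modality estimates rather than a trained joint classifier, is shown below; the weights and label names are illustrative assumptions.

```python
# Illustrative sketch only: combining pre-processed features from several modalities
# (51a-51e after steps 52a-52d) into one state classification (53). A real system
# would normally use a trained joint classifier; weights here are assumptions.
def fuse_modalities(face_estimate: dict,     # from expression/object recognition (52a)
                    audio_estimate: dict,    # from speech/emotion analysis (52b)
                    gaze_estimate: dict,     # from eye tracking (52c)
                    touch_estimate: dict,    # from touch-target lookup (52d)
                    weights=(0.4, 0.3, 0.2, 0.1)) -> str:
    """Return the most likely user state label given all modality estimates."""
    combined = {}
    for estimate, w in zip((face_estimate, audio_estimate, gaze_estimate, touch_estimate), weights):
        for label, score in estimate.items():
            combined[label] = combined.get(label, 0.0) + w * score
    return max(combined, key=combined.get)

# Example: agreement across modalities makes "focused" more robust than any single sensor
print(fuse_modalities({"focused": 0.7, "tired": 0.3},
                      {"focused": 0.6, "stressed": 0.4},
                      {"focused": 0.8},
                      {"focused": 1.0}))
```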

In additional exemplary embodiments, FIG. 6 illustrates general steps demonstrating how an AIP 63 and user state 62 may be employed to structure the actions of a HIE 65. HIE actions 65 may be initiated spontaneously by the HIE or be the result of a “triggering event” 60 generated within the HIE as a result of acquired sensor data or resulting from data received via telecommunications. At appropriate times, HIEs may initiate 60 human interactions based on learned memories and experiences. Additionally, actions may be triggered 60, for example, to respond to a user’s voice or movements, or result from an elapsed time since a prior event. Actions may also be triggered 60 as a result of the arrival of data from outside the environment of the HIE and user 20 (e.g., an incoming message, an instrument or device requiring attention, change in weather).

Once triggered, an event engine 61 may be used to acquire any additional necessary inputs for a HIE action 65 to be performed. Once ready, data are conveyed to the AI “personality” 63 to structure one or more actions 65. For example, a HIE response to an incoming message may be to move closer to the user 20, display the message, and/or broadcast the message (or sounds derived from the message) vocally.

HIE actions 65 may be modulated by the user’s state 62 (or multiple states if multiple users are in the HIE environment). HIE actions are implemented using one or more actuators 64a, 64b, 64c, 64d that control various HIE output modalities. For example, HIE actions may involve movements 64a, vocalizations 64b, visual display 64c, and a wide variety of additional actions 64d. Responses by the user 20 to HIE actions may be used in a feedback mode to update 66 the user state 62 and the AIP 63, as illustrated in FIGS. 2 through 5.

In further exemplary embodiments, FIG. 7 illustrates a simple example of how one or more components of a user state may influence the action(s) of a HIE 65. In this example, an assessment by the HIE as to whether the user 20 is asleep versus awake 70 influences whether HIE actions 65 are performed. The asleep versus awake state is one of many components that make up a physiological state of a user 20. As described above, the physiological state is, in turn, one component of factors that make up the overall user state.

In this example, based on a HIE assessment 72 that the user 20 is awake, an event may be passed on to the AIP engine 63 to initiate and execute one or more action(s) 65 using any of the various actuators 64. If the user 20 is not awake, then an additional screening 73 of the event 71 is performed to determine if the event is urgent (e.g., a wake-up alarm, important message). If so, then the event is passed on to the AIP engine 63 to enact HIE action(s) 65. If not, the event is placed in a buffer 74 to be enacted at a later time. In general, a vast number of components of the user state may be used to modulate the outputs of a HIE.
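The gating logic of FIG. 7 might be sketched as follows; the event and state representations, and the deferred-event buffer, are assumptions used only to show how the asleep-versus-awake component and an urgency screen decide between immediate action and buffering (74).

```python
# Illustrative sketch only: the gating logic of FIG. 7, where the asleep-versus-awake
# component of the user state decides whether an event is acted on now, acted on
# because it is urgent, or placed in a buffer (74) for later. Names are assumptions.
from collections import deque

deferred_events = deque()     # buffer 74

def handle_event(event: dict, user_state: dict, perform_action) -> None:
    awake = user_state.get("asleep_vs_awake") == "awake"       # assessment 72
    urgent = event.get("urgent", False)                         # screening 73
    if awake or urgent:
        perform_action(event)                                   # AIP engine 63 -> actuators 64
    else:
        deferred_events.append(event)                           # enact at a later time

def flush_deferred(user_state: dict, perform_action) -> None:
    """Called when the user wakes; replays buffered, non-urgent events."""
    while deferred_events and user_state.get("asleep_vs_awake") == "awake":
        perform_action(deferred_events.popleft())
```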

FIG. 8 illustrates exemplary timelines 89a, 89b of the updating of an AIP shared between two HIEs, HIE1 interacting with user1 (along timeline 89a) and HIE2 interacting with user2 (along timeline 89b). The locations of facial icons with double-ended arrows 80a, 80b, 80c, 80d, 83a, 83b, 83c, 83d along the timelines indicate the times of human-HIE interactions that result in updated AIPs. Transmissions of updated AIPs (and, if needed, user states and other data) are indicated by directed lines (e.g., 86a, 86b) between the timelines 89a, 89b where a downward-pointing line from the timeline associated with HIE1 89a toward the timeline associated with HIE2 89b (e.g., 86a) indicates a transmission from HIE1 to HIE2. Conversely, an upward-pointing line from the timeline associated with HIE2 89b toward the timeline associated with HIE1 89a (e.g., 86b) indicates a transmission from HIE2 to HIE1. The slight shifts in these lines (e.g., 86a, 86b) represent transmission delays in the indicated direction. Times of receipt of transmissions are indicated by vertical dashed lines 81a, 81b, 81c, 81d, 84a, 84b, 84c, 84d. Such transmissions may involve remote (so-called “cloud”) and/or shared processing and transmission resources.

In the exemplary case of transmissions illustrated in FIG. 8, transmissions are rapid (e.g., less than one second). Thus, it is unlikely that an update by a remote HIE occurs during the time it takes to initiate and transmit an update. This allows various (relatively simpler to code) handshaking schemes (e.g., so-called master-slave assignments or ready-to-send acknowledgements) to be implemented to avoid an update by a remote HIE during the time of transmission of an updated personality and/or user state. In cases where transmissions are consistently rapid, the temporary blocking of AIP updates by such schemes (e.g., for less than a second) may generally not be noticed by users. In FIG. 8, updates 80a, 80b, 80c, and 80d generated by HIE1 are received by HIE2 at times indicated by 84a, 84b, 84c, and 84d, respectively. Conversely, updates 83a, 83b, 83c, and 83d generated by HIE2 are received by HIE1 at times indicated by 81a, 81b, 81c, and 81d, respectively.
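A hedged sketch of a ready-to-send style handshake, in which AIP updates are briefly blocked while another update is in flight, is given below; the protocol details are assumptions and not a scheme specified in this description.

```python
# Illustrative sketch only: a ready-to-send style handshake in which an HIE briefly
# blocks local AIP updates while a remote update is being transmitted or applied.
# With sub-second transmissions this blocking is short enough to go unnoticed by
# users. Protocol details are assumptions, not a scheme specified in this description.
import threading

class UpdateHandshake:
    def __init__(self):
        self._busy = threading.Event()   # set while an update is in flight

    def try_send(self, packet, transmit) -> bool:
        """Transmit an AIP update only if no other update currently holds the channel."""
        if self._busy.is_set():
            return False                 # a remote update is in flight; retry shortly
        self._busy.set()
        try:
            transmit(packet)             # e.g., HIE1 -> HIE2 (86a in FIG. 8)
        finally:
            self._busy.clear()           # acknowledgement received; channel free again
        return True

    def on_remote_update(self, packet, apply_update) -> None:
        self._busy.set()                 # temporarily block local updates (< 1 s)
        try:
            apply_update(packet)
        finally:
            self._busy.clear()
```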

In additional exemplary embodiments, FIG. 9 illustrates the timelines of updates by HIEi 80a, 80b, 80c, 80d and HIE2 83a, 83b, 83c, 83d initiated at roughly the same times as those 80a, 80b, 80c, 80d, 83a, 83b, 83c, 83d illustrated in FIG. 8. In this example, significant transmission delays (e.g., 96a, 96b) are shown in both directions (i.e., from HIEi to HIE2 96a and from HIE2 to HIEi 96b). The delays illustrated in FIG. 9 are relatively constant over time and do not vary as a function of direction (i.e., transmitting or receiving). However, the same considerations (described below) apply when (transmission or processing) delays vary (randomly or systematically) as a function of time or as a function of direction. Such delays may be a consequence, for example, of a low-bandwidth bottleneck in transmissions, a "noisy" transmission channel (requiring numerous re-transmissions of data packets), higher-priority transmissions dominating the capacity of a limited-bandwidth transmission channel, or transmission times required to cover an extended distance between HIEs and/or processing resources.

When distances become very large, such as those encountered by astronauts, special (i.e., space-time) conditions must be considered when synchronizing the "time" of HIE timelines such as those illustrated in FIG. 9. When distances are short (and absent extremely large accelerations), timelines can be readily synchronized to a single (e.g., so-called "universal") time. In most terrestrial settings, this can be a single clock such as "coordinated universal time" (abbreviated UTC), which also corresponds (in practice) to Greenwich Mean Time (GMT).

However, when distances are large and/or accelerations are extreme, such as those that may be encountered during space travel, the synchronization of clocks (and therefore knowing the time at which a new AIP was computed) becomes more complex. In the case of large distances (absent accelerations), the effects are described by Einstein's theory of special relativity. Einstein's general theory of relativity must be invoked to predict the effects of (gravitational) accelerations, including those of a spacecraft orbiting a large body such as a planet or moon. In fact, given the orbital positions of satellites that implement modern-day global positioning systems (GPSs), localization measurements would fail within about two minutes absent calculations of the effects of the satellites' orbital paths around the earth due to special relativity (slowing the satellite clocks by about seven microseconds/day) and general relativity (speeding them up by about forty-five microseconds/day), with a net time shift relative to earth of about thirty-eight microseconds/day.

The effects of both special and general relativity may be computed if trajectories (i.e., positions as a function of time) are known and/or predictable. This is the case (for the foreseeable future, and particularly absent any substantial effects of friction in space) for astronaut travel and stations within orbiting satellites, moons and planets. Thus, a modified time stamp may be computed that takes into account where the packets of information associated with an AIP (and, if needed, user state and other data) originate (i.e., relative to the location of a receiving station). In the description below, this is referred to as a “universal” corrected time (“UCT,” see FIGS. 9 and 10, intentionally distinguishing the acronym from its terrestrial analog, UTC).

The exact solutions for the corrections in clocks (i.e., time) for general trajectories are derived from a series of (tensor) equations similar in form to (and historically motivated by analogy with) Maxwell's equations. For complex trajectories, corrected time may be computed using numerical integration techniques. For simple trajectories, relatively simple solutions have been derived and may be used under many conditions and/or as first approximations.

The shift in time due to special relativity may be estimated from

t = t₀ / √(1 − v²/c²)

where t is the corrected time in the reference frame of the observer (i.e., the receiver of an AIP data set), t₀ is the time measured by the sender (i.e., transmitter), v is the velocity of the sender, and c is the speed of light in a vacuum, 299,792.458 kilometers/second.

The shift in time due to general relativity (i.e., gravitational time dilation) may be estimated from

t₀ = t_f √(1 − 2GM/(rc²))

where t₀ is the corrected time for a (slower ticking) observer (e.g., a terrestrial receiver of an AIP data set) within a gravitational field, t_f is the (faster ticking) time at distance r from the center of the large object generating the gravitational field (e.g., earth, moon, Mars), G is the gravitational constant, 6.674×10⁻¹¹ m³ kg⁻¹ s⁻², and M is the mass of the object creating the gravitational field. The mass of the earth is approximately 5.972×10²⁴ kilograms; the mass of the moon is approximately 7.347×10²² kilograms; and the mass of Mars is approximately 6.39×10²³ kilograms.
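As a rough illustration of the magnitude of both effects, the following Python sketch estimates the daily clock shifts for a GPS-like orbit. The orbital radius and velocity used here are assumed illustrative values, not figures taken from this disclosure.

```python
import math

C = 299_792_458.0      # speed of light in a vacuum (m/s)
G = 6.674e-11          # gravitational constant (m^3 kg^-1 s^-2)
M_EARTH = 5.972e24     # mass of the earth (kg)
SECONDS_PER_DAY = 86_400.0

def special_rate(v):
    """Fractional rate at which a clock moving at velocity v runs slow."""
    return 1.0 - math.sqrt(1.0 - (v * v) / (C * C))

def gravitational_rate(r):
    """Fractional slowing of a clock at distance r from the earth's center."""
    return 1.0 - math.sqrt(1.0 - 2.0 * G * M_EARTH / (r * C * C))

# Assumed GPS-like values: orbital speed ~3.9 km/s, orbital radius ~26,600 km.
v_sat, r_sat, r_ground = 3_900.0, 26_600_000.0, 6_371_000.0

sr_us_per_day = special_rate(v_sat) * SECONDS_PER_DAY * 1e6
gr_us_per_day = (gravitational_rate(r_ground) - gravitational_rate(r_sat)) * SECONDS_PER_DAY * 1e6

print(f"special relativity: satellite clock slower by ~{sr_us_per_day:.0f} microseconds/day")
print(f"general relativity: satellite clock faster by ~{gr_us_per_day:.0f} microseconds/day")
print(f"net shift: ~{gr_us_per_day - sr_us_per_day:.0f} microseconds/day")
```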

In FIG. 9, a HIEi interaction 90a results in a local AIP update that is transmitted to HIE2, arriving at the time (i.e., UCT) indicated by 94a. Similarly, a HIE2 interaction 93a results in an AIP update that is transmitted to HIEi, arriving at the time indicated by 91a. In both of these cases, no additional AIP updates occurred during the times of the transmissions, and both HIEs were updated with substantially the same AIP (and, if needed, user states and other data). The AIP was updated in the chronological order in which HIE interactions occurred. In all cases described herein, an "interaction" may result from a single measurement by a HIE or from a cluster of measurements (spanning seconds to hours) that together result in an AIP update (dependent, at least in part, on user preferences).

In the exemplary timeline shown in FIG. 9, a HIEi interaction 90b results in a local AIP update that is transmitted to HIE2, arriving at the time indicated by 94b. During this transmission time, a HIE2 interaction occurred 93b resulting in an AIP update that is transmitted to HIEi, arriving at the time indicated by 91b. If HIEi were to simply update its AIP based on the remote AIP data received at time 91b, then interaction 90b would be “missed” or “forgotten” since those data were not known to HIE2 when the AIP resulting from interaction 93b was computed and transmitted.

A solution to this, depicted more fully in FIG. 10, is to temporarily maintain all newly generated interaction data (and/or any other forms of new data) with time stamps until all sources have acknowledged that they are up-to-date compared with the earliest UCTs transmitted from all HIEs. When the possibility of “missed” data due to transmission delays exists, all AIP updates must include sufficient data to re-construct the update (e.g., interaction measurements or derivative data that triggered the update). When an update from a remote HIE is received in which one or more interactions are missing, an updated AIP is re-computed in which all (local and remote) interaction data sets are appended to the last known valid AIP in chronological order (see FIG. 10).
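A minimal sketch of this bookkeeping, under the assumption that each update carries a UCT time stamp and enough interaction data to be replayed, might buffer un-acknowledged updates and rebuild the AIP in chronological order whenever an update arrives. All class and function names below (Update, AipState, recompute) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Update:
    uct: float          # universal corrected time stamp of the interaction
    source: str         # e.g., "HIE1" or "HIE2"
    interaction: dict   # data sufficient to re-construct the AIP update

@dataclass
class AipState:
    checkpoint: object                    # last AIP containing only acknowledged updates
    current: object = None                # AIP including all updates seen so far
    pending: list = field(default_factory=list)

    def apply(self, update, recompute):
        """Insert the update, then replay all un-acknowledged updates in UCT order."""
        self.pending.append(update)
        self.pending.sort(key=lambda u: u.uct)
        aip = self.checkpoint
        for u in self.pending:            # re-compute in chronological order
            aip = recompute(aip, u.interaction)
        self.current = aip
        return self.current

    def acknowledge_up_to(self, uct, recompute):
        """Once all sources are up-to-date through `uct`, fold those updates into
        the checkpoint and discard their stored interaction data."""
        settled = [u for u in self.pending if u.uct <= uct]
        for u in settled:                 # pending is already sorted by UCT
            self.checkpoint = recompute(self.checkpoint, u.interaction)
        self.pending = [u for u in self.pending if u.uct > uct]
```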

Optionally, the newly computed AIP may then be transmitted back to all other HIEs, keeping them up-to-date (not illustrated in FIG. 9). Whether the newly computed AIP is transmitted at this stage depends on the desire of users to remain as up-to-date as possible regarding the most recent interactions of other user(s). If not transmitted immediately, the newly computed AIP data may be transmitted during the next clocked event (i.e., "token" event, see below) or when a new local interaction occurs. In the case of data associated with HIE2 interaction 93b received by HIEi at time 91b, the next data set is transmitted following HIEi interaction 90c.

In FIG. 9, by the time data associated with HIEi interaction 90c are received by HIE2 at time 94c, two additional HIE2 interactions 93c, 93d have occurred. Thus, without a scheme to re-compute AIPs in the chronological order in which they occurred (or to compute substantially similar AIPs by at least ensuring that all interactions are included), both of these interactions 93c, 93d would be "missed" or "forgotten" within the data received by HIE2 at time 94c. Similarly, without such a scheme, data sets received by HIEi at times 91c and 91d would not contain information about HIEi interaction 90c.

Under some conditions, such as when a user leaves the vicinity of a HIE or goes to bed, no interactions may be generated for a substantial period of time and, consequently, no interaction data are transmitted. In cases where one or more other users remain active, this may result in a need for storing (e.g., for hours to days) substantial interaction data that may be required to re-compute AIPs, as just described above. In order to avoid this situation, HIEs may periodically transmit events triggered by a clock to ensure there are no periods of extended inactivity. This may be considered a token event (or so-called “token”) in which transmitted data contain, at a minimum, a time stamp. If desired, other housekeeping (i.e., non-interaction) data may also be included. When a token is received, a receiving HIE may be assured that the sending HIE has no other events to report up to the time of the token’s time stamp, allowing interaction data up to the time of the time stamp to be discarded.
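The token mechanism can be approximated by a periodic heartbeat: if nothing has been transmitted for some interval, a time-stamp-only message is sent so that receivers may prune their buffers. The interval and the transmit callback below are assumptions for illustration only.

```python
import time

TOKEN_INTERVAL = 60.0     # seconds of inactivity before a token is sent (assumed)

def maybe_send_token(last_transmit_time: float, transmit) -> float:
    """Send a time-stamp-only 'token' if the channel has been idle too long."""
    now = time.time()
    if now - last_transmit_time >= TOKEN_INTERVAL:
        transmit({"type": "token", "uct": now})   # no interaction data attached
        return now
    return last_transmit_time
```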

In FIG. 9, HIE2 transmits token events at times 95a and 95b. The event transmitted by HIE2 at time 95a is received by HIEi at time 91e. At this time, HIEi may discard the data associated with HIEi interaction 90d, since it occurred prior to the time recorded within the HIE2 token time stamp 95a.

In further exemplary embodiments, FIG. 10 is a flow chart representing the overall steps that may maintain a substantially shared AIP involving two or more users when transmission times between HIEs (and any intermediary processors) are significant (e.g., greater than one second). FIG. 10 may be best understood in conjunction with the timing diagram illustrated in FIG. 9. When an AIP update arrives 100 from a remote processor, a determination is made whether the received UCT (corrected for any effects due to special or general relativity, as described above) is more recent than the UCT of the most recent local AIP update. If so, then the received AIP data may be used to update the local AIP 103.

If the received UCT is earlier than the last local update (i.e., the "no" state in 101), then an event has occurred during the time of transmission. In this case, the local AIP is re-computed 102a based on the last valid AIP (prior to the remote event), updated by the new remote data and all more recent updates (that may include updates from remote HIEs if more than two HIEs are involved). A chronological order of updates to the AIP may be maintained. Optionally, the newly re-computed AIP data may be re-transmitted to remote HIE(s) 102b if user preferences dictate maintaining and interchanging the most recent updates possible (at the expense of transmission and computing resources).

Once re-computed, any AIP update data sets pertaining to updates prior to the received UCT (or the earliest received UCT if more than two HIEs are involved) may be discarded 104. Additionally, if a remote “token” (see description for FIG. 9) is received 105, then any AIP update data sets prior to the received token UCT (or earliest received UCT if more than two HIEs are involved) are discarded 104.

If a local HIE interaction triggers the calculation of a new AIP 107, then update data are sent to all remote devices 106. Otherwise, a determination is made whether sufficient time has elapsed since the last transmission 109, and, if so, then to transmit a “token” event (absent any AIP update data) to remote devices 108. In this way, all devices are eventually updated with all AIP interaction updates (in chronological order, if desired), even in the presence of significant transmission issues and/or significant propagation delays.
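Putting the steps of FIG. 10 together, a receive-side handler might look like the following sketch. It reuses the hypothetical Update and AipState classes from the earlier sketch (and so requires those definitions), and treats the numbered decision points (101, 102a, 102b, 103, 104, 105) as plain conditionals.

```python
def on_remote_message(msg: dict, state: AipState, recompute, retransmit=None):
    """Handle an arriving AIP update or token, following the flow of FIG. 10."""
    if msg.get("type") == "token":                              # step 105
        state.acknowledge_up_to(msg["uct"], recompute)          # step 104: discard old data
        return state.current

    update = Update(uct=msg["uct"], source=msg["source"], interaction=msg["interaction"])
    out_of_order = any(u.uct > update.uct for u in state.pending)   # step 101 (approximation)
    state.apply(update, recompute)                              # steps 102a / 103
    if out_of_order and retransmit is not None:                 # step 102b (optional)
        retransmit(state.current)
    state.acknowledge_up_to(update.uct, recompute)              # step 104
    return state.current
```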

In exemplary embodiments, FIG. 11 shows key elements using a shared AIP to conduct and augment an exchange between two humans 110a, 110b. One user 110a interacts 111a with a HIE 112a that may or may not include the display of a cartoon-like character 116a facilitating interactions with the shared AIP. Similarly, a second user 110b interacts 111b with a HIE 112b that may or may not include the display of a cartoon-like character 116b facilitating the second user’s interactions with the shared AIP.

In FIG. 11, the tablet 115a and HIA 113a in proximity to a first user 110a, collectively form the HIE 112a associated with that user 110a. The one or more devices 115a and/or HIAs 113a that comprise the HIE 112a of the first user 110a may communicate 114a wirelessly, be tethered by cabling, or a combination of both. Similarly, the tablet 115b and HIA 113b in proximity to a second user 110b, collectively form the HIE 112b associated with that second user 110b. HIE components, including one or more HIAs, of the second user may also be tethered and/or communicate wirelessly 114b. HIEs associated with each user may be adapted for the particular desires (e.g., color, size, weight) and needs (e.g., visual and/or auditory accommodations, age-appropriate design) of the user as well as their environment (e.g., stationary versus mobile, quiet versus noisy, bright versus dim lighting, confined space, etc.).

Two or more HIEs 112a, 112b may communicate 117 with each other, generally over some distance involving components of the internet (e.g., TCP/IP) and/or other communications protocols (e.g., 4G, 5G, direct to satellite). Optionally (indicated by the dashed-line enclosure 119 in FIG. 11), telecommunications may include one or more remote processors (that may include multi-core processing, dedicated AI hardware, and/or quantum computing capabilities) that may be used to train and/or update a shared AIP, as well as to sequence telecommunications and perform other computational housekeeping tasks. The computation of new and/or updated AIPs may be centralized (e.g., using dedicated processors 118), involve cloud computing using larger-scale distributed processing (not shown), and/or be performed within HIEs 112a, 112b. The process of updating a shared AIP may comprise a range of computational approaches extending from large-scale retraining of entire AI networks to, for example, relatively simple updating of an AIP memory. Computational approaches to update AIPs may use processing resources within a HIE and/or share resources among HIEs.

In further exemplary embodiments, FIG. 12 illustrates some of the components that may be included in an inexpensive, robust, light-weight HIA 120 to support human interactions with a shared AIP. The HIA 120 may, for example, be used to simply alert a local user 110a of a desire to connect with a distant user (see FIG. 13 A). The local user 110a may then continue to interact using the HIA 120 or switch to using other elements of a HIE (see, e.g., FIGS. 13A & 13B) to continue with the connection. A HIA 120 may be adapted specifically to accommodate the needs of, for example, a young child (e.g., a colorful toy where user responses may be indicated by simply shaking the HIA), a visually impaired individual (e.g., with mostly auditory indications and large buttons to sense user responses), the driver of a moving vehicle (e.g., with accommodations for vehicle attachments and adapting to different lighting conditions so as not to be distracting), the elderly (e.g., where the timing of interactions and anticipated responses may be adapted to suit the individual), and so on.

Examples of sensor and actuator components that may be incorporated in a HIA as shown in FIG. 12 include:

1. a contact switch 122a physically connected to a large push-button 121 on an accessible surface of the HIA 120,

2. an accelerometer 122c that may sense HIA motion and/or orientation relative to the earth’s gravitational field in 1, 2 or 3 dimensions,

3. a light source (or array of light sources) 122b such as light emitting diodes (LEDs) to alert a user and/or display information,

4. a camera 122d (shown including a lens assembly) to image the environment of the HIA,

5. a microphone 122f to detect speech and other sounds in the environment of the HIA, and

6. a speaker (or piezo buzzer) 122h that produces auditory stimuli within the HIA environment.

In addition to such sensor and actuator components, a HIA 120 may typically include a battery (single-use or rechargeable) 122i to provide remote power for electronic components. HIAs also generally require some level of processing capability 122g. This can range from simple (low-power) controller circuitry such as a field-programmable gate array (FPGA), application-specific integrated circuit (ASIC), or microcontroller, to more sophisticated central processing units (CPUs). Communication between the HIA and other components of the HIE may be via a wireless transceiver 122e. Communications protocols may include Bluetooth (as shown at 122e in FIG. 12), Wi-Fi (including low-power Wi-Fi), and/or other short-range telecommunications standards.

Interconnections between HIE elements 122a, 122b, 122c, 122d, 122e, 122f, 122g, 122h, 122i may be accomplished via a bus structure 123 implemented within a printed circuit board (PCB) or flexible circuit. Directional arrows to/from the bus 123 in FIG. 12 indicate the primary direction for the flow of information or power to/from some HIA elements 122a, 122b, 122c, 122d, 122f, 122h, 122i; although a lesser quantity of data may flow opposite the primary direction to/from some HIA elements (e.g., data are generally also sent to cameras to control camera characteristics including sensitivity, frame rate, etc.).

FIG. 13A demonstrates an example of an alert 132 to a local user 130, issued by the local user's HIA 131, signaling a desire by a distant user to connect (see FIG. 13B). In this exemplary embodiment, the HIA is a portable device 131, transmitting wirelessly (not shown) to one or more processors relaying distance communication data. The alert 132 may, for example, be in the form of an emitted sound, vibration, and/or blinking light.

FIG. 13B expands upon the scenario depicted in FIG. 13A. Once alerted by the small, inexpensive, toy-like HIA (131 in FIG. 13A), the local user 130 switches to communicating with the distant user via a tablet device 133, where an image of the distant user 134 is displayed on the tablet screen 133. In this scenario, the local user is a young girl 130 communicating with her grandparent 134.

Also displayed on the tablet screen 133 is a cartoon-like character 135 representing a shared AIP as a virtual image. The AIP 135 may participate directly in the distance communication, remain silent while there is human-to-human exchange, or participate intermittently, for example, during game play, to look up information not known to either user, to remind a user of upcoming commitments, and the like. The AIP represented by the virtual image 135 may also monitor the exchange between the young girl 130 and her grandparent 134 to encode and store information (including from all environmental and interaction sensors within all HIEs involved in the exchange) for future reference and retrieval. The AIP represented by the virtual image 135 may also be called upon by a user (triggered, for example, using a shared AIP name or keyword) to perform tasks and/or participate in the exchange, particularly during brief periods when a distant user may need to perform a separate activity, not associated with the exchange.

FIG. 14 is a flow chart illustrating key computational steps to manage, using a shared AIP, a desire 141b by a distant participant 110b to disconnect, during a distance communication session 142, from a local user 110a. The disconnection by the distant participant 110b may be performed 1) overtly, making it clear to the local participant 110a that the session is about to be interrupted; 2) covertly, in a manner that does not make it obvious to the local user 110a that a disconnection is about to happen; or 3) in a manner that utilizes the AIP 116a within the HIE of the local user 110a to inform the local user 110a about the disconnection and/or subsequent activities. The desire to disconnect may be for a specified and/or anticipated period of time 141c, or indefinitely (e.g., at bedtime).

In the exemplary embodiment illustrated in FIG. 14, one or more predetermined activities to be performed subsequent to disconnection may be known to the shared AIP. These one or more activities may have been pre-determined by the local user 110a, generated by the distant participant 110b, or agreed upon by both participants 110a, 110b possibly including input from other individuals (e.g., parent, guardian, friend, counselor, employer, etc.). Examples of the one or more pre-determined activities include one or more of playing a game, participating in play with a toy, reading a story, watching a video, critiquing a movie, conversing, initiating a conversation with one or more other humans, performing a learning experience, helping to write a communication, drawing a picture, exercising, coding a program, paying a bill, planning an activity, reminding about upcoming events, building a virtual object, helping to construct a real object, and instructing to go to bed.

As illustrated in FIG. 14, activities 140a by the local user 110a during distance communications 142 with the distant participant 110b may (i.e., optionally) include interacting with a shared AIP via the HIE 116a associated with the local user 110a. Similarly, activities 141a by the distant participant 110b may include interactions with the shared AIP via the HIE 116b associated with the distant participant 110b.

Using his/her HIE, the distant participant may indicate a desire to interrupt distance communication 141b that is made known to the shared AIP and subsequently transmitted 140b to the HIE associated with the local user 110a. Upon interrupted distance communication, the local user 110a may continue to interact with the shared AIP 140c via his/her HIE 116a (i.e., absent the distant participant).

During this time for separate activities, the HIE (as described previously, more precisely, the AIP instantiated within the HIE) associated with the local user 110a may check to see if a request to reconnect 140d has been made by the distant participant 141g (i.e., via knowledge available to the shared AIP). If so, the two participants re-establish a distance communication 142. If not, then the HIE associated with the local user 110a may determine if an optional (indicated by dashed lines in FIG. 14) time for interruption/disconnection was specified by the distant participant 141c and, if so, whether that time has elapsed 140f. If an interruption time was provided and that time has elapsed 140f, then a request 140g is made to reconnect with the distant participant 141e. Otherwise, a further determination is made whether local user activities are finished 140h and, if not, those activities are resumed 140c. Otherwise (i.e., activities are finished), the local user is free to move on to other ventures (with or without involving his/her HIE), such as determining whether it is bedtime 140i.

Once a distant participant has indicated a desire to interrupt 141b a distance communication 142, he/she may optionally (indicated by dashed lines in FIG. 14) specify an estimated time to be incommunicado or whether disconnection is to be for an indefinite period 141c. Once specified 141c, the distant participant 110b may move on to desired activities 141d (i.e., absent participation by the local user) that may, or may not, involve his/her HIE 116b. During these activities, the shared AIP may check whether a request to reconnect has been made 141e by the local user. If so (and if desired by the distant participant), distance communication 142 may be re-established. If not, then the distant participant may determine if his/her remote activities have been completed 141f. If so (and if desired by the distant participant), a request may be made to reconnect 141g and (via knowledge available to the shared AIP) transmitted to the local user 140d.
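The local user's side of this flow can be summarized as a simple polling loop. The sketch below is a hypothetical rendering of decision points 140c-140i; all of the methods on the aip object (distant_requested_reconnect, request_reconnect, and so on) are assumed interfaces, not part of the disclosure.

```python
import time

def local_user_loop(aip, interruption_time_s=None):
    """Local user's side of FIG. 14 after the distant participant disconnects."""
    start = time.time()
    while True:
        if aip.distant_requested_reconnect():             # 140d
            aip.reestablish_distance_communication()      # 142
            return
        if interruption_time_s is not None and time.time() - start >= interruption_time_s:
            aip.request_reconnect()                       # 140f -> 140g
        if aip.local_activities_finished():               # 140h
            break                                         # move on to other ventures (e.g., 140i)
        aip.continue_local_activities()                   # 140c
```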

In further exemplary embodiments, FIG. 15 is a flow chart illustrating key components of steps to manage, using a shared AIP, a request 141b by a distant participant 110b to disconnect from a distance communication 142 while providing one or more tasks or directives (i.e., a "to-do list" 151b) to be performed by another user 110a while disconnected. The disconnection between users may be for the time required or expected to complete the tasks, allowing the user who performed the tasks 110a to initiate reconnection once completed. Alternatively, the user performing the tasks 110a may not be allowed (e.g., by the distant user 110b, due to a lack of availability of transmission facilities at the time, or for some other reason) to reconnect, even after the one or more tasks have been completed.

In further embodiments, reconnecting may be permitted and/or expected by the user performing the tasks 110a when some portion of tasks has been completed, whenever issues arise while performing the tasks, and/or periodically. Similarly, reconnecting may be permitted and/or expected by the distant user 110b at one or more pre-determined times, periodically, when the expired time appears excessive for completing tasks, and/or as his/her 110b schedule permits.

These one or more directives may have been pre-arranged, produced by the distant participant 110b, agreed upon by both participants 110a, 110b, and/or include input from other individuals (e.g., parent, guardian, friend, counselor, employer, lawyer, etc.). The one or more tasks or directives may include one or more of playing a game, participating in play with a toy, reading a story, watching a video, conversing, initiating conversation with one or more additional humans, participating in a learning experience, writing a communication, constructing a drawing, exercising, coding a program, paying a bill, reminding about upcoming events, building a virtual object, constructing a real object, and instructing to go to bed.

As illustrated in FIG. 15, activities 150a by the local user 110a engaged in distance communications 152 with the distant participant 110b may (i.e., optionally) include interacting with the shared AIP via the HIE 116a associated with the local user 110a. Similarly, activities 151a by the distant participant 110b may include interactions with the shared AIP via the HIE 116b associated with the distant participant 110b.

Using his/her HIE (or HIA) instantiated with the shared AIP, the distant participant may produce directives (i.e., a "to-do" list 151b) that are transmitted to the local user's 110a HIE. Once sent, the distant user disconnects 151c from the interchange and, upon receipt of the to-do list 150b, the local user also disconnects 150c from distance communications. At this time, the distant user is free to conduct other activities 151d that may or may not include exchanges with the shared AIP. Similarly, the local user may begin work on assigned tasks (and/or other activities) 150d with or without exchanges with the shared AIP.

While performing tasks and/or other activities, the local user 110a may query whether the distant user 110b has made a request to reconnect 150e. If so, then distance communication may be re-established 150a. Otherwise, a determination may be made by the local user 110a as to whether the "to-do list" is complete 150f. If not, then activities to complete assigned tasks continue 150d. Otherwise, a determination may be made by the local user 110a as to whether he/she is allowed to reconnect 150h with the distant user 110b. If so, a request to reconnect 150i is sent to the distant user's HIE and activities are resumed 150d until the request to reconnect is acknowledged 150e by the distant user. If not allowed, or if there is no need to reconnect, then the local user may proceed to other activities, such as playtime 150g.

Along similar lines, while performing other activities 151d, the distant user 110b may query whether the local user 110a has made a request to reconnect 151e. If so, then distance communication may be re-established 151a. Otherwise, a determination may be made whether to "check in" 151f on progress by the local user 110a in completing the "to-do list". If no check-in is desired, then the distant user 110b may return to his/her activities 151d that do not involve distance communication. If a check-in or wish to reconnect is desired, then a request to reconnect 151g is sent to the local user's HIE and activities are resumed 151d until the request to reconnect is acknowledged 151e by the local user.

Overall, this scheme allows a distant user 110b to assign tasks to another user 110a while efficiently and effectively reconnecting with the other user 110a only at times when needed or as desired. Directives or tasks assigned by a distant user may optionally include instructions regarding how tasks are to be completed and/or other information (e.g., rewards to be provided upon completion, who to contact if issues are encountered, deadlines, etc.). Thus, the scheme further allows one to “time-shift” directive and/or instructional exchanges, particularly those that involve specialized approaches, an extensive number of steps, difficult concepts, input from multiple sources, and the like.

FIG. 16A shows an example of a scene in which game play is initiated by a human (in this case, a grandparent) 134 and facilitated by an AIP represented within a video stream by a cartoon-like character 162a. The game consists of a child (130 in FIG. 16B, not shown in FIG. 16A) selecting a favorite card from a selection of virtual cards 160a, 160b, 160c, 160d, 160e. The virtual cards 160a, 160b, 160c, 160d, 160e are superimposed onto the video stream directed to the child that also shows the grandparent and his environment 163.

FIG. 16B shows a scene that occurs later in the scenario depicted in FIG. 16A. At this time, the child 130 continues to play the card game (where the backsides of the virtual cards 161a, 161b, 161c, 161d, 161e can be seen in FIG. 16B). The cards 161a, 161b, 161c, 161d, 161e, as well as a cartoon-like character 162b that facilitates interactions with a shared AIP, are shown on a display directed at the child 130. In this example, the child 130 may continue game play while still connected with the grandparent, or game play performed by the shared AIP, in which interactions are facilitated by the displayed cartoon-like character 162b, may continue after the grandparent disconnects.

FIG. 17A is a schematic showing key components and connections of a user 110a interacting 111a with an AIP represented by a cartoon-like character 171a displayed on a tablet device 170 that comprises a HIE. In this example, images projected on the HIE 170 are also cast 172 to a nearby television screen 173a. The process of casting may be performed wirelessly between the HIE and television, and/or involve direct communications by the television (or other display device) with image sources (i.e., via connected and/or remote streaming processors). The cartoon-like character 171a representing the AIP within the HIE may be viewed by the television screen audience as 171b.

FIG. 17B illustrates a scene consistent with the setup shown in FIG. 17A in which interactions by one user 110a may be shared with family 174a, 174b, 174c, friends, colleagues, service providers, or other individuals and/or groups by casting those interactions to a large-format display device 173b. The display includes the cartoon-like character 171c that is a projection of the shared AIP (shown as 171a and 171b in FIG. 17A), including both audio and video content. More generally, FIGS. 17A and 17B illustrate that local interactions with an AIP via a HIE may simultaneously include exchanges with multiple humans.

In further embodiments, FIG. 18A illustrates a scene in which a girl 130 interacts with an AIP represented by an interactive cartoon-like character 181 via a HIA that includes a display 180a. The HIA may transmit wirelessly to other HIAs and/or nearby processors where the one or more HIAs and/or nearby processing devices collectively comprise a HIE.

FIG. 18B illustrates a scene in which a video stream of the girl 183 shown in FIG. 18A is displayed on the screen of a tablet device 182 being viewed by a parent or grandparent 134. Video images of the girl 183 show her continuing to interact with an AIP via a HIA 180b during the distance communication session. More generally, FIGS. 18A and 18B illustrate that, during distance communications augmented by a shared AIP, different users may (depending on personal preferences and/or device availability) simultaneously use different combinations of HIE and HIA devices during such exchanges.

In further embodiments, a HIE (again, more precisely, a shared AIP instantiated within a HIE) may be used to consolidate time-shifted exchanges with multiple individuals who share interactions with the AIP. For example, a distantly connected parent with more than one child may interact with an AIP during a single session, reviewing the daily events of all children. Such updates may result from a single AIP interaction with a child, multiple interactions between the AIP and a child that occurred at different times, or a combination of single and multiple interactions by different children. A single interaction by the distant parent with the AIP monitoring the activities of all children in a single sitting effectively and efficiently "consolidates" (in addition to "time-shifting") shared AIP exchanges involving more than two individuals. Any collective feedback or advice directed and distributed to each child individually, a subset of children, or all children may also be generated by the parent or guardian within such a single AIP interchange session. In addition to temporally consolidating the review of information based on interactions with multiple individuals, the AIP may sort and/or cluster the reporting of events based on classifications within one or more topic areas. For example, an AIP may organize reporting on sports activities by all children even though activities may have been reported to the AIP at different times and by different children. A subsequent topic may involve clustering or consolidating school activities of each child, where the child-AIP interactions that were the basis for reporting may have occurred at different times.

In further embodiments, organizing information with an AIP based on topic (versus organization, for example, by chronological order of when information was generated, or the identity of information sources) may save substantial time and effort (i.e., maintain one’s train of thought) during an AIP interaction. For example, responding to communications that are sorted by a shared AIP may alleviate the need to search all exchanges for information to examine and/or respond to a particular topic area. As described above, within the Background section, having all sources (or potential sources) of information on a topic simultaneously available greatly increases the ability within so-called “working memory” (with limited retention capacity) to consider and synthesize new information and/or formulate conclusions, particularly regarding complex issues.

Along similar lines, when reviewing and responding to an AIP-curated topic area, a response may be classified as being within a single topic, or as touching upon multiple topic areas. The shared AIP may, for example, incorporate multiple topic classifications within an exchange so that the exchange may be brought up by the AIP the first time any of the topic areas arises. Additionally, individual exchanges may be directed at multiple (or a selected subset of) individuals. For example, instructional advice provided by a parent on how to perform an activity associated with a particular hobby may subsequently be brought up by a shared AIP in the context of either an "educational" experience or a "fun" activity (i.e., whichever topic arises first during a child's review). In addition, the parent may direct the instructional advice to those children who are interested in the particular hobby.

In further exemplary embodiments, the primary sorting of multiple time-shifted exchanges according to topic area may additionally use secondary sorting strategies similar to those found in most spreadsheet-based applications. In other words, a primary sort may be based on any data structure or type, such as topic area; however, those interactions classified to be within a particular topic (or other initial sorting classification) may subsequently be additionally sorted in chronological order, according to the participant(s) generating the initiating interaction, in accordance with a pre-defined set of interaction content priorities, with interaction content sorted alphabetically, with interaction content sorted numerically, and so on.

Primary and secondary (and tertiary, etc.) sorting may be based on any classifiable data or data subsets, involve logical relations (e.g., “and”, “or”, “not”) and be generated and/or sorted in any order (e.g., ascending, descending). For example, a grandparent may wish to review all interactions that occurred on a particular day, secondarily sorted by topic area, presented in reverse chronological order, and only from teenaged children.
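In code, such primary and secondary sorting is straightforward to express. The sketch below is a hypothetical Python example using stable sorts; the field names (topic, uct, author, author_age) and the particular filters are invented for illustration, loosely following the grandparent example above.

```python
from dataclasses import dataclass
from operator import attrgetter

@dataclass
class Interaction:
    topic: str
    uct: float        # time stamp of the interaction
    author: str
    author_age: int

def curate(interactions, day_start, day_end):
    """Example: interactions from one day, from teenaged authors only,
    sorted primarily by topic and secondarily in reverse chronological order."""
    selected = [i for i in interactions
                if day_start <= i.uct < day_end and 13 <= i.author_age <= 19]
    # Stable sorts: apply the secondary key first, then the primary key.
    selected.sort(key=attrgetter("uct"), reverse=True)
    selected.sort(key=attrgetter("topic"))
    return selected
```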

Such curation and/or organization by topic area and/or any other attribute(s) by the shared AIP may further enhance the efficiency and effectiveness of "time-shifted" exchanges. Examples where AIP consolidation and clustering of topics may lead to increased efficiency include interactions between an employer and a group of employees, a teacher and students, a political leader and members of his/her organization, an instructor and a group of hobbyists, a coach and a sports team, a lead strategist and members of a task force, and so on.

In further embodiments, a shared AIP may be utilized by an initiating user to convey one or more conditional responses that define "conditional actions" to be performed by one or more target users, contingent on the meeting of one or more criteria (i.e., conditions) established by one or more initiating users. Conditions that trigger responses by the shared AIP may arise based on the state of a target user (e.g., whether perceived as happy or sad, whether the user is in a particular place), conditions in the environment of the user (e.g., music is playing, it is raining outdoors), and/or combinations of a wide range of overall conditions (e.g., time of day, the occurrence of a world event). Conditional responses may set up relatively simple scenarios where an action is triggered upon the meeting of a single condition, or one or more initiating users may employ combinations of conditional responses to establish any number of alternative scenarios (e.g., involving alternative audio or video clips) to be deployed as different conditions arise during AIP interactions by the one or more target users. Conditions may depend on environmental and user states during interactions with the initiating user, the one or more target users, and/or one or more other shared AIP users.

Conditional responses are particularly useful during time-shifted interactions, for example, due to communications delays or a lack of availability of a user who is a target of a conditional response (e.g., a target user is asleep or in transit). Conditional responses may be set up and sent by one or more initiating users to a single target user or broadcast to multiple target members of a group of shared AIP users. The criteria for conditional responses may include logic relationships (e.g., "and", "or", "not") where an associated action may be enacted only if two or more criteria are met (i.e., logic "and"), an action may be enacted if any one of a number of criteria is met (logic "or"), an action may be enacted when a condition ceases to be present (logic "not"), or combinations of such logic operations.

As further examples, one or more conditions may be established that rely on identifying a verbal response, gesture (e.g., pointing, waving, etc.), or facial expression (e.g., determined to be happy, sad, confused, etc.) during any interaction by any shared AIP user. Similarly, conditions may depend on one or more users being at specified locations. Conditions may also depend on the specific contents of words or text generated during an interaction or, more broadly, on one or more topics covered during an interaction. Along similar lines, conditions may depend on establishing a contact or connection with another individual, group, or organization. An initiating user may even set up conditions for him/herself, effectively generating "reminders" and/or actions that are performed "automatically" (i.e., without further thought).
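One minimal way to represent such criteria is as predicate functions combined with logical operators. The sketch below is illustrative only; the primitive conditions (location, expression, keyword) stand in for whatever sensing a HIE actually provides, and the context dictionary keys are assumptions.

```python
def all_of(*conds):   # logic "and"
    return lambda ctx: all(c(ctx) for c in conds)

def any_of(*conds):   # logic "or"
    return lambda ctx: any(c(ctx) for c in conds)

def not_(cond):       # logic "not"
    return lambda ctx: not cond(ctx)

# Hypothetical primitive conditions built from the target user's state/environment.
at_location   = lambda place: (lambda ctx: ctx.get("location") == place)
expression_is = lambda mood:  (lambda ctx: ctx.get("expression") == mood)
said_keyword  = lambda word:  (lambda ctx: word in ctx.get("utterance", ""))

# A conditional response: enact an action when the combined criteria are met.
condition = all_of(at_location("home"),
                   any_of(expression_is("sad"), said_keyword("lonely")))

def evaluate(ctx, action):
    if condition(ctx):
        action(ctx)   # e.g., play a pre-recorded audiovisual snippet
```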

When a condition (set by any user) is met, an acknowledgement that the condition was satisfied, including the time it was satisfied, may be fed back to the initiating shared-AIP user. Further, if the action associated with the condition was performed, or was unable to be performed, an acknowledgement of performance, or an indication of the reason(s) it was not performed, may be fed back to the initiating user. This scheme allows for tracking conditional actions that have been set up.

Conditional responses may be used within simple (e.g., so-called "if this, then that") scenarios or within schemes that may involve a large number of potential conditions and/or outcomes. As an example of the latter, consider the ability of a distant parent or grandparent to play a time-shifted game of "hide 'n seek" with a child. Within a virtual scene (e.g., a farm setting), a parent may record a large number of audiovisual responses to being discovered (e.g., by looking at or pointing) by the child during the game in any number of different hiding locations. For example, within one audiovisual response, the parent may imitate the sound of a cow while laughing when, during the subsequent playing of the game, the parent is discovered by the child at a location near a cow. Similarly, the parent may make a recording of him/her imitating the clucking sound of chickens to be broadcast when he/she is discovered near a chicken coop.

In additional embodiments, conditional responses may be sourced from more than one initiating user and/or directed at more than one target user. For example, in the time-shifted game of "hide 'n seek" just described, multiple parents and friends may provide additional audiovisual responses to being discovered by the child in different hiding locations. Additionally, parents may anticipate more complex scenarios; for example, when no one is found during "hide 'n seek" for a period of time, one or more hints may be provided as clues to any number of game participants. Parents may set up their conditional reactions (e.g., audiovisual clips) upon finding one or more children (e.g., participating in the same game simultaneously via the shared AIP) hidden or searching in different locations. The use of an array of more complex conditional responses may allow children to repeatedly play "hide 'n seek" in a "time-shifted" fashion with their friends, parents, siblings, and/or other members of their extended family without repeating a game sequence or outcome.

Aspects of distant schooling are another example where the use of conditional responses and resultant conditional actions may be effective both educationally and in promoting a sense of connectedness. A distant guardian or parent may record conditional responses directed toward a target child upon the child receiving various scores on a test. For example, a score of "A" may result in a "Hooray" action; a "B" may result in "That's good"; a "C" may result in "Let's see what happened"; and a "D" may prompt "Let's see how we can do better". In this case, a so-called "1-of-N" selection can be set up as a series of conditional responses and associated actions where the conditions collectively cover the full range of possible test scores.
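Such a 1-of-N selection reduces to a lookup that covers the full range of possible scores. The mapping below simply restates the example above in code; the default branch is an added assumption for scores outside the listed range.

```python
score_responses = {
    "A": "Hooray!",
    "B": "That's good",
    "C": "Let's see what happened",
    "D": "Let's see how we can do better",
}

def respond_to_score(score: str) -> str:
    # Default response is an assumption for grades not covered above.
    return score_responses.get(score.upper(), "Let's talk about this one together")
```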

Distant schooling may augment traditional schooling or home schooling in personal, age-appropriate, and meaningful ways. For example, when a child reaches a certain skill level in mathematics, a parent may set up a scenario in which the distance between the parent and child is determined. If the distance is expressed in miles, then the parent may ask the child to convert that distance into kilometers. If the child is unable to perform the conversion, then a conditional action may be enacted (e.g., a recorded audiovisual clip) in which the parent provides the factor to convert from miles to kilometers. Using previously encoded conditional responses, a parent may then ask how many seconds it takes for light to travel that distance (assuming a vacuum medium). A guardian or parent may set up any number of conditional responses and associated actions for any number of topics to "time-shift" learning, conversations, and interactions.

Additional examples where conditional actions may be used in time-shifted, shared fashion using a shared AIP to perform connected activities include: reading a children’s story (e.g., before bedtime), engaging with interactive books, building (together) an actual or virtual machine or toy (e.g., Lego, erector set, etc.), performing an actual or virtual chemistry experiment, instructing how to operate a real or virtual device, setting up for someone’s special event (e.g., birthday, holiday), discussing past or upcoming purchases, planning a trip, anticipating a delivery, performing routine activities (e.g., brushing teeth, washing, etc.), commenting on a (previously viewed or while viewing) show or movie, describing the general activities of family or friends, and so on.

Such strategies, using conditional actions to time-shift interactions using a shared AIP, may be enacted by teachers and their students, employers and their employees, sports coaches and their players, government leaders and members of their organizations, bloggers and their followers, and so on. If responses are anticipated and generated for a sufficient number of conditions, a time-shifted and/or widely distributed set of responses may appear to be enacted in real time and personally connected.

A potential added benefit of setting up conditional actions in the form of audiovisual snippets (and other forms of exchange) and/or the clustering of information associated with time-shifted interactions is a reduction in overall bandwidth requirements compared, for example, to the data rates required for continuous video chat (even when video compression techniques are employed). As an example, during time-shifted game play, the total accumulated time of a series of conditional actions consisting of audiovisual snippets that are core to time-shifted play may be in the range of several minutes, yet the actual time of game play by the recipient user may be in the range of hours. Compared with continuous online video exchange, the use (and even re-use) of audiovisual snippets may greatly reduce the total data transferred to perform the game or other time-shifted activities (e.g., an instructional sequence, monitoring children's activities, etc.). Time-shifting audiovisual snippets and other forms of conditional actions and/or the insertion of AIP-initiated actions may also permit the transmission of information related to distance communications to be deferred to non-peak times for data transmission or, for example, to times while a user is asleep or engaged in separate activities not occupying significant telecommunications resources.

In further exemplary embodiments, a particularly useful application of a conditional response involves initiating a conditional action involving the shared AIP to be performed at a specified time (i.e., the condition is met when the specified time equates to real time). The specified time may involve a single date and time, an elapsed time (i.e., a difference in time relative to a reference time such as the current time), or a so-called recurring time event, such as once every day, week, month, year, etc. This form of conditional response may be used to set up an event or action for a recipient user that may or may not further involve the initiating (e.g., distant) participant. Exemplary actions may involve setting up a telephone conversation or video conference at a specified time, a reminder for a child to go to bed, or alerting (based on transit schedules) that departure must be imminent to catch public transit, and so on.
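For time-based conditions, the check reduces to comparing the current time against a single date/time, an elapsed interval, or a recurrence. The sketch below is a hypothetical illustration of all three forms; the parameter names and the bedtime example are assumptions.

```python
from datetime import datetime, timedelta

def time_condition_met(now, at=None, elapsed_since=None, elapsed=None, daily_at=None):
    """True when a single date/time, an elapsed interval, or a recurring daily time is reached."""
    if at is not None and now >= at:                       # single date and time
        return True
    if elapsed_since is not None and elapsed is not None \
            and now - elapsed_since >= elapsed:            # elapsed time, e.g. timedelta(hours=1)
        return True
    if daily_at is not None and (now.hour, now.minute) == daily_at:   # recurring daily event
        return True
    return False

# Example: a recurring condition that triggers a bedtime reminder at 20:30 each day.
if time_condition_met(datetime.now(), daily_at=(20, 30)):
    pass  # enact the associated action, e.g., an AIP-delivered "time for bed" snippet
```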

The shared AIP may also interface with so-called calendar applications, alarm clock applications, and/or other forms of notification or alerting software by transmitting the action (regarding, or at, the specified time) to one or more of these platforms. This allows a user to be alerted not just by a HIE (or HIA) associated with the shared AIP, but also by other commonly used devices (e.g., cell phones, alarm clocks, etc.).

Utilizing a shared AIP, the process of time-shifting information exchange does not necessarily involve the formal generation of a message (e.g., in the form of text, audio, video, etc.). For example, while interacting generally (e.g., playing) with a shared AIP, a child may express a liking for a particular type of stuffed toy. Such interactions may not be directly coupled to the production of a message (formal or otherwise). Further, knowledge of the liking of the particular toy may not be directed toward any recipient (individually or collectively). However, at some later time, during one or more exchanges with a friend, parent, or grandparent, the shared AIP may reveal that there was an expression of the liking of the particular stuffed toy by the child. Knowledge of this liking (again, without the formal generation of any messaging) may result in ideas for a gift purchase for the child.

In exemplary embodiments, FIG. 19A is an illustration of a scene 218a from a scenario in which the parents, grandparents, and/or friends 210a, 214a, 215a of a young child (210b shown in FIG. 19B) set up, using a shared artificial intelligence personality via a cartoon-like HIE 211a on a display device, personalized conditional actions that are enacted whenever the child points toward, or looks at, particular objects. In this example, the personalized actions consist of audiovisual displays (e.g., 213, 214, 215) from each parent, grandparent, or friend 210a, 214a, 215a whenever the child points or looks toward particular animals.

Using the shared AIP (i.e., where exchanges are facilitated by a HIE 211a), each parent or friend associates themselves with one or more images of animals 212a, 212b, 212c, 212d, 212e. In FIG. 19A, a grandfather 210a sets up a personalized audiovisual snippet 213 to be displayed whenever a penguin 212e is viewed or pointed at by the young child. The audiovisual snippet 213 may, for example, consist of additional images and/or videos of penguins 213b, text about penguins 213c, and/or sounds made by different penguins 213d, all while displaying images and/or audio and video of the grandfather 213a who is the source of the audiovisual snippet.

Relations and friends may each set up additional audiovisual snippets or other actions (e.g., haptic outputs, holographic displays, confirmation messages that the action has been performed, etc.) associated with any number of other animals 212a, 212b, 212c, 212d, 212e (including, for example, viewing real animals) or other related objects. In FIG. 19A, a grandmother 214a has set up (at a time convenient to her) an audiovisual display 214 to be shown whenever a turtle is viewed, and a mother 215a has set up (at yet another time convenient to her) an audiovisual display 215 to be shown whenever a chicken is viewed.

FIG. 19B shows a follow-on scene 218b from the scenario illustrated in FIG. 19A in which previously established actions are performed by a shared AIP upon meeting their associated conditions. In this case, shared AIP exchanges are facilitated by a cartoon-like character displayed on the screen 216a of a tablet device 216. Each time the child 210b views or points toward a particular animal on the tablet screen 216a, a personalized audiovisual snippet 217 may be displayed as generated by the relative or friend. At the time shown in FIG. 19B, the young child 210b is pointing toward a unicorn 219. The audiovisual snippet 217 set up by the grandfather (shown describing unicorns in the snippet 217a) includes images and/or videos 213b, text 213c, and/or sounds 213d, all about unicorns and related topics.

More generally, FIGS. 19A and 19B illustrate how conditional actions set up by any number of individuals at one or more times may be used to "time-shift" their interactions with other individuals. In some cases, it may not be apparent that actions are actually pre-established, making interactions appear "live" and further increasing feelings of connectedness. The conditions for associated actions may be personalized and/or sophisticated (e.g., conditions within conditions) to further such feelings. For example, attempted viewing of animals as described above may produce alternate, instructional snippets (e.g., directions to go to bed) from a parent if the time is past the child's bedtime.

In further embodiments, FIG. 20 illustrates steps whereby “time-shifted” exchanges among any number of users (e.g., 220a, 220c) may be sorted and clustered into topic areas during review by a recipient user 220b. In this exemplary case, a user 220a (labelled “human 1” in FIG. 20) may interact during ongoing activities that include a first topic 222a utilizing his/her HIE (and/or HIA) 221a instantiated with a shared AIP. A second interaction with the shared AIP by the same user 222b may involve a second topic. A third interaction by the same user 222c may revert back to the first topic. Along similar lines, another user 220c (labelled “human N” in FIG. 20) may interact with the shared AIP (i.e., a fourth interaction) 223a concerning the first topic. A fifth interaction 223b with the shared AIP by the same user 220c may involve a third topic. A sixth interaction 223c by the same user 220c may revert back to the first topic.

All six interactions 222a, 222b, 222c, 223a, 223b, 223c are transmitted to all HIEs (including 221a, 221b, 221c) associated with all users (including 220a, 220b, 220c) that are instantiated with the shared AIP. During time-shifted review by another user 220b (labelled "human 2" in FIG. 20), the AIP may present interactions 222a, 222b, 222c, 223a, 223b, 223c sorted according to topic. The four interactions 222a, 222c, 223a, 223c involving topic 1 are clustered (represented as encompassed with a dashed line 224) and presented first 224a, 224b, 224c, 224d. In this case, as a secondary sort, the four interactions 224a, 224b, 224c, 224d associated with the first topic are presented in the chronological sequence in which they occurred. As a result of the sorting and clustering, the recipient or reviewing user 220b (i.e., "human 2" in FIG. 20) does not need to search for additional interactions regarding the first topic (i.e., knowing that the AIP identifies and clusters all interactions on a topic).

Once review of the first topic is complete, the reviewing user 220b may then progress to one or more interaction(s) involving a second topic 225 and subsequently to interaction(s) involving the third topic 226. Time and effort may be saved by sorting and clustering time-shifted exchanges based on topic and/or other criteria (e.g., information source, confidence score of the validity of interaction content, identity of others who viewed the content, number of others who viewed content and indicated a "like", etc.).

In further exemplary embodiments, FIG. 21 shows simultaneous timelines 237a, 237b, 237c of shared AIP exchanges that demonstrate several additional benefits of time-shifted and/or curated AIP interactions. In this example, each individual 230a, 230b, 230c is able to interact at any time with corresponding HIEs and/or HIAs (231a, 231b, 231c, respectively) instantiated with a shared AIP (i.e., installed with one or more processors such that the AIP interacts with the individual via one or more output devices and one or more sensors). In FIG. 21, arrows represent AIP-human interactions where the direction of the arrows represents the primary flow of key information (e.g., available for retrieval and presentation at some later time). Arrows filled with different patterns represent exchanges that involve different topics. Progression in time is shown from left-to-right for all three individuals 230a, 230b, 230c simultaneously.

In FIG. 21, the shared AIP is attuned particularly to optimize the time and effort spent by the individual 230b associated with interchanges represented within the middle timeline 237b. This individual 230b may, for example, be a busy parent or guardian interacting with two children 230a, 230c. Additionally, the parent 230b may be located distant from the children 230a, 230c, making real-time exchanges difficult. Interactions may result, for example, in one or more: text messages, emails, audiovisual clips, images, holograms, augmented or virtual reality displays, sound bites, songs, documents, electronic signatures, access codes, software controls and/or applications, spreadsheets, links to additional information, links to control physical devices (e.g., via the so-called internet-of-things), identified locations, contact information, calendar events, alerts, attached files, and other forms of information exchange that may be classified and curated by the shared AIP.

In the timelines 237a, 237b, 237c shown in FIG. 21, the first overall interactions 232a, 232b occur between the individual 230a shown in the upper timeline 237a and his/her HIE 231a, covering two topics (represented by a clear arrow 232a and a solid-fill arrow 232b). Two additional interactions 235a, 235b covering the same two topics (represented by a clear arrow 235a and a solid-fill arrow 235b) occur between the individual 230c shown in the lower timeline 237c and his/her HIE 231c. This is followed by two additional interactions 232c, 232d between the individual 230a shown in the upper timeline 237a and his/her HIE 231a, covering the topic represented by clear arrows.

Next, the individual 230b represented within the middle timeline 237b uses his/her shared AIP 231b to rapidly review 234a all available interactions 232a, 232b, 235a, 235b, 232c, 232d, in this example, in the order in which they were generated (i.e., regardless of topic or who they came from). The interactions 232a, 232b, 235a, 235b, 232c, 232d covered two topics, represented by clear arrows 232a, 235a, 232c, 232d and solid-fill arrows 232b, 235b. The individual 230b represented within the middle timeline 237b responds 234b to each topic separately using his/her HIE 231b instantiated with the shared AIP. Responses 233a, 233b, 236a, 236b to both exchange participants 230a, 230c are simultaneously made available via the shared AIP. Responses cover both topics raised, including the topics represented by clear arrows (233a directed at recipient 230a, and 236a directed at recipient 230c) and by solid-fill arrows (233b directed at recipient 230a, and 236b directed at recipient 230c). Either or both responses 234b may be aided in their generation, in whole or in part, by the shared AIP.

In the timelines of shared AIP exchanges depicted in FIG. 21, the individual 230c represented in the lower timeline 237c is the first to review the two responses to the topics represented by clear arrows 236a and by filled arrows 236b. A bit later in time, the individual 230a represented in the upper timeline 237a reviews the same two responses (i.e., topics represented by clear arrows 233a and by filled arrows 233b). This same individual 230a then generates an exchange with his/her HIA 231a instantiated with the shared AIP involving a new topic represented by a dot-filled arrow 232e before generating two additional interactions regarding the topic represented by solid-filled arrows 232f, 232g. Sometime later, the individual 230c represented in the lower timeline 237c generates two more interactions 235c, 235d covering yet another topic area, represented by line-filled arrows.

Returning to the efficient and timely review of shared AIP interactions by the individual 230b represented within the middle timeline 237b, he/she quickly considers all interactions associated with the topic represented by solid-fill arrows 234c (secondarily sorted according to the time the interactions were generated or reviewed). This includes interactions about the topic that were previously reviewed 232b, 235b, responses that had been sent out 236b, 233b to recipients 230a, 230c, and interactions 232f, 232g newly generated since the topic area was previously considered (within 234a). Quickly considering all interchanges on the topic area 234c and knowing that no other exchanges on the topic area are available, the shared AIP user 230b may form fully informed decisions or conclusions about the topic area. The individual 230b represented within the middle timeline 237b then quickly moves on to consider, this time in chronological order, interchanges classified within additional topic areas 234d represented by dot-filled arrows 232e and line-filled arrows 235c, 235d.

Summarizing the benefits of time-shifted interactions using a shared AIP illustrated within FIG. 21, an individual may receive and review time-shifted exchanges: 1) within a condensed time scale, 2) sorted for review in any order, including chronologically, by sender, or by topic, 3) resulting in responses that may be sent to multiple recipients simultaneously, 4) including all on-topic exchanges when considering a given topic (including those that may have been previously reviewed and reactions to previous considerations), 5) aided (in whole or in part) by responses generated by the shared AIP, and 6) with assurance that all available interactions by a set of individuals are reviewed and, if desired, responded to.

In additional exemplary embodiments, FIG. 22 is a flowchart of steps for one or more shared AIP users 240a, 240c (i.e., initiating users) to formulate one or more conditional responses and associated actions to be performed by another shared AIP user 240b (i.e., a target user). In this example, one user 240a (labelled “human 1” in FIG. 22) interacts during ongoing activities 242a utilizing his/her HIE (and/or HIA) 241a instantiated with a shared AIP. During such interactions, this user 240a (i.e., an initiating user) may formulate a conditional response comprising a condition and an associated action 243a to be performed by the target user 240b (labelled “human 2” in FIG. 22).

In FIG. 22, the target user 240b may be performing ongoing activities 242b utilizing his/her HIE (and/or HIA) 241b. The shared AIP associated with the target user 240b is positioned to receive any conditions and associated actions 244 from all initiating users (e.g., 240a, 240c). Periodically, the shared AIP associated with the target user 240b checks to determine if any of the conditions (or any combinations of conditions) are met 245. If so, the action(s) associated with the condition(s) are performed 246 before returning to ongoing activities 242b.
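
One minimal, hypothetical sketch (in Python) of this receive-and-check cycle is shown below: conditions and associated actions contributed by any number of initiating users are held in a registry associated with the target user's AIP and polled periodically against the current context. The class and method names are illustrative assumptions only, not elements of FIG. 22.

```python
# Hypothetical registry of conditional actions held by the target user's AIP.
class ConditionalActionRegistry:
    def __init__(self):
        self._entries = []          # list of (initiator, condition, action)

    def receive(self, initiator, condition, action):
        """Accept a (condition, action) pair from any initiating user (cf. step 244)."""
        self._entries.append((initiator, condition, action))

    def check_and_perform(self, context):
        """Run every action whose condition is met (cf. steps 245/246), then the
        AIP returns to ongoing activities."""
        performed = []
        for initiator, condition, action in self._entries:
            if condition(context):
                performed.append(action(context))
        return performed

# Example: an initiating user asks that a greeting be shown after 7 am.
registry = ConditionalActionRegistry()
registry.receive("human 1",
                 condition=lambda ctx: ctx["hour"] >= 7,
                 action=lambda ctx: "display: good morning!")
print(registry.check_and_perform({"hour": 8}))   # ['display: good morning!']
```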

More generally, as illustrated in FIG. 22, any initiating user 240c (labelled “human N” in FIG. 22) may interact during ongoing activities 242c with the shared AIP instantiated within his/her HIE (and/or HIA) 241c. During such activities, the user may formulate a conditional response comprising a condition and an associated action 243c to be performed by the target user (labelled “human 2” in FIG. 22). The condition(s) and associated action(s) are incorporated within the shared AIP associated with the target user 240b, who may then benefit from the collective of conditional actions 244 initiated from any number of users.

In further embodiments, FIG. 23 expands upon concepts presented in FIG. 22 by adding the notion that, absent any conditions 255a of conditional actions 255b being met, a shared AIP may nonetheless be expected to provide a response during an interchange with a user. Shared AIP actions 255b may be generated as a result of meeting the conditions 255a of a conditional action that may have been established by any other shared AIP user, or pre-established by the interacting user. However, if a situation arises in which a response is expected during an AIP interchange and no such condition has been met, then the AIP may, based on its cumulatively acquired personality and knowledge base, initiate an action 256b.

In FIG. 23, the upper portion 250 of the flowchart is a simple loop structure showing the establishment of one or more conditions and their associated actions 251c. An index (“i”) is initialized 251a and incremented 251b as a portion of the loop structure. During each iteration, a condition (“conditioni”) and associated action (“actioni”) are registered 251c within the memory of the shared AIP. Once all conditions are registered 251d, this contingent of “time-shifted” actions may occur (including more than once) at any later time, where the shift in timeline is represented by a vertical ellipsis 252 in FIG. 23.

At some later time, during ongoing activities 253, a user may interact with the shared AIP 254 when, during such interactions 254, software within the AIP may determine (e.g., using a loop structure similar to the upper portion 250 in FIG. 23) whether any pre-defined condition has been met 255a during the interaction. If so, then the associated action is performed 255b. Otherwise, the AIP assesses whether any sort of response or action might be expected 256a. If so, the AIP initiates and performs an action 256b based on its cumulatively acquired personality and knowledge base.
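
The overall flow of FIG. 23 might be sketched, again purely for illustration, roughly as follows; `response_expected` and `aip_default_action` are hypothetical stand-ins for the AIP's own assessment and for actions drawn from its cumulatively acquired personality and knowledge base.

```python
def register_conditions(pairs):
    """Upper loop 250: register condition_i / action_i pairs one at a time."""
    registry = []
    for condition_i, action_i in pairs:      # index i advances implicitly
        registry.append((condition_i, action_i))
    return registry

def handle_interaction(registry, context, response_expected, aip_default_action):
    """Lower portion of FIG. 23: steps 255a/255b, else 256a/256b."""
    for condition, action in registry:
        if condition(context):               # 255a: a pre-defined condition met?
            return action(context)           # 255b: perform the associated action
    if response_expected(context):           # 256a: is some response expected?
        return aip_default_action(context)   # 256b: AIP-initiated action
    return None

# Example with hypothetical stand-ins for the AIP's own behavior.
registry = register_conditions([(lambda c: c.get("says") == "hello",
                                 lambda c: "wave back")])
print(handle_interaction(registry, {"says": "how are you?"},
                         response_expected=lambda c: True,
                         aip_default_action=lambda c: "answer from knowledge base"))
```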

In additional embodiments, a shared AIP-initiated action may be an element of a conditional action (i.e., one set up by a human). In FIG. 23, the additional AIP element may be an essential component of, or an adjunct to, the performance of the associated action in step 255b. For example, a conditional action by a distant user might include reporting on the most recently available (e.g., after time-shift) weather forecast at a particular location. The conditional action may be established such that the weather forecast is inserted by the shared AIP as an element (i.e., dependent on both time and geographic location) of the overall conditional action.

Conditional actions may be further modified by the AIP based on a wide range of additional factors including, for example, milestones of a user (e.g., receipt of an award, birthday), time of day, the presence of other individuals in the environment of the user (e.g., when the privacy of information may be taken into consideration), personal preferences of the user, and so on. As a further example, following on from the weather forecast AIP action just described, temperatures may be converted by the AIP from Celsius to Fahrenheit based on personal preferences of the target user (i.e., known by the shared AIP).
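
A minimal sketch of these two ideas appears below: a time- and location-dependent element (the latest forecast) is filled in by the shared AIP at delivery time, and units are then adapted to the target user's known preference. The `latest_forecast_celsius` function is a placeholder for whatever forecast source an implementation might consult.

```python
# Hypothetical sketch: the shared AIP inserts a forecast element at delivery
# time and converts units per the target user's preference.

def latest_forecast_celsius(location: str) -> float:
    return 21.0                                   # placeholder value only

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9.0 / 5.0 + 32.0                   # 21 degC -> 69.8 degF

def deliver_weather_action(location: str, prefers_fahrenheit: bool) -> str:
    temp_c = latest_forecast_celsius(location)
    if prefers_fahrenheit:
        return f"Forecast for {location}: {celsius_to_fahrenheit(temp_c):.1f} F"
    return f"Forecast for {location}: {temp_c:.1f} C"

print(deliver_weather_action("Reno, NV", prefers_fahrenheit=True))
```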

Any action that is either initiated or modified by the shared AIP may be reviewed by one or more (human) shared AIP users. Reactions by the one or more users may include approval, disapproval, happiness, sadness, surprise, disbelief, fear, anger, excitement, anticipation, and/or vigilance. In some cases, when there is disapproval of an AIP-initiated or AIP-modified action, steps may be taken to reverse the consequences of the disapproved action. For example, when playing a game, a move during the game may be “taken back” by re-establishing the game to its status just prior to the move. Along similar lines, if a shared AIP provided any piece of information that was later determined to be untrue or misleading, then the AIP may be “corrected” by distributing (if desired, using the shared AIP itself) statements to rectify the false or misleading action(s).
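
As a rough illustration of reversing a disapproved action, the sketch below snapshots the game state before each AIP-initiated move so that a “take back” simply restores the prior status; the class name and data layout are hypothetical.

```python
import copy

# Hypothetical sketch: each AIP-initiated move stores a snapshot of the game
# state so that a disapproving user may "take back" the move afterwards.
class ReversibleGame:
    def __init__(self, state):
        self.state = state
        self._history = []

    def apply_move(self, move):
        self._history.append(copy.deepcopy(self.state))   # snapshot before move
        move(self.state)

    def take_back(self):
        if self._history:
            self.state = self._history.pop()               # restore prior status

game = ReversibleGame({"board": ["w", ".", "b"]})
game.apply_move(lambda s: s["board"].__setitem__(1, "b"))  # a disapproved AIP move
game.take_back()
print(game.state)                                          # {'board': ['w', '.', 'b']}
```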

In further embodiments, one variation of setting up conditional actions is the setting up of such AIP actions by a user directed at himself or herself. For example, conditions involving the time of day may be used to set up reminder, calendar, and/or alarm clock notifications. Reminders or other actions directed back to a user may be initiated by the AIP based on a wide range of pre-established conditions such as arriving at a specified geographic location, being in the presence of another person or a pet, being in the vicinity of a particular device, receiving a particular (i.e., specifically defined) interaction or any form of prompting from a distant user, the occurrence of a world event, a particular health condition (e.g., resting heart rate above 100 beats/minute), performing a specified activity, receiving a message concerning a particular topic, and so on.
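
Self-directed conditional actions can reuse the same registry pattern sketched above for FIG. 22; the example below mirrors the resting-heart-rate and location conditions mentioned in the text, with all thresholds and field names assumed purely for illustration.

```python
# Hypothetical self-directed conditional actions: the user is both the
# initiating user and the target user.
registry = []   # list of (condition, action) pairs

registry.append((lambda ctx: ctx.get("resting_heart_rate", 0) > 100,
                 lambda ctx: "remind: consider resting"))
registry.append((lambda ctx: ctx.get("location") == "pharmacy",
                 lambda ctx: "remind: pick up prescription"))

def check(registry, ctx):
    return [action(ctx) for condition, action in registry if condition(ctx)]

print(check(registry, {"resting_heart_rate": 104, "location": "home"}))
```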

Conditional actions may include revealing (by the shared AIP to the target user via a HIE) the source of the condition and associated action. Alternatively, conditional actions set up by one or more users, as well as AIP-initiated actions during a shared AIP-user interaction, may be designed to help establish (“covert”) scenarios in which it is not evident to a target user that some portions, or even all or most, of a time-shifted exchange involving a distant user are actually being initiated by the shared AIP.

Controlling knowledge of the source(s) of conditional actions may greatly enhance a sense of human connectedness. For example, when playing a game, many (or even all) of the moves in the game may arise from the expert knowledge (including actions derived using deep learning and other AI approaches) within the shared AIP. Commentary, encouragement and/or reactions to different playing situations (e.g., via audiovisual clips, messaging, etc.) may be the only actual (time-shifted) interchange with a distant user. The degree of difficulty and/or sophistication of AIP-initiated gaming actions may be predetermined, for example, by a distant user, parent, guardian, mentor, technician, or the shared AIP user him/herself. The AIP level of expertise may be adjusted to match that of the distant participant or the shared AIP user, a target play level to encourage learning, a level designed to allow a particular participant to win, a level that consumes up to a threshold in computational resources, a level that occupies up to a threshold in computational time, and so on.
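
One way an implementation might bound the AIP's playing strength, sketched below with entirely hypothetical names and numbers, is to map a requested expertise level to a search depth (a proxy for computational resources and time) and occasionally choose a weaker move so that a particular participant can win.

```python
import random

# Hypothetical sketch: the AIP's gaming expertise is capped by mapping a
# requested skill level to a search depth, then occasionally choosing a
# weaker move so that the game remains winnable for the other participant.
SKILL_TO_DEPTH = {"beginner": 1, "child": 2, "adult": 4, "expert": 8}

def choose_move(legal_moves, evaluate, skill="child", blunder_rate=0.2):
    depth = SKILL_TO_DEPTH.get(skill, 2)
    scored = sorted(legal_moves, key=lambda m: evaluate(m, depth), reverse=True)
    if random.random() < blunder_rate and len(scored) > 1:
        return scored[1]          # deliberately pick the second-best move
    return scored[0]

# Toy usage with a stand-in evaluation function.
moves = ["a3-b4", "c3-d4", "e3-f4"]
print(choose_move(moves, evaluate=lambda m, d: len(m) * d, skill="child"))
```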

Similar shared AIP scenarios in which the distinction between conditional actions established by a distant user and those originating from the AIP may be blurred or non-evident include those between teachers and students, repair technicians and their customers, lawyers and their clients, children and their extended family members, and so on. While teaching, AIP responses may be at a grade level of a student user. When playing a game involving answers to trivia, questions may be made age-appropriate. Discussions of world events or social media topics may be a blurred mix of shared AIP updates and remote human commentary.

An AIP level of sophistication may also be based on a fee-for-use structure where, for example, simple responses and/or those confined to a specific topic are provided at a reduced (or no) cost. The AIP level of sophistication (and, for example, computational resources required) may be elevated for the fee.

In another embodiment that illustrates several advantages of time-shifting activity using a shared AIP, FIGS. 24A and 24B show a time sequence in which a distant friend (e.g., parent, guardian, grandparent, confined person, hospitalized individual, astronaut, etc.) 260a plays a “time-shifted” board game with a child 260b. In this example, the board game is a simplified version of the game “checkers” where, in both FIGS. 24A and 24B, activities that initiate from the remote friend 260a are represented across top rows, from the shared AIP 261 across middle rows, and from the child 260b across bottom rows. Sequential steps (i.e., the progression of time) are represented from left-to-right in FIGS. 24A and 24B. In this game, the child is playing white tokens and the distant friend is playing dark tokens where, according to the traditional rules of checkers, the player with dark pieces moves first.

The distant friend 260a organizes his/her participation in the time-shifted game by setting up, with the shared AIP (via one or more HIEs and/or HIAs 261), one or more responses to anticipated board situations 262a, audiovisual clips to be played under various conditions 262b, messages that may, for example, include embedded images to be displayed upon encountering pre-defined conditions 262c, and music clips that may include songs to be sung, for example, when it is anticipated that the end of the game is near 262d. Interactions 262a, 262b, 262c, 262d between the distant friend 260a and the shared AIP 261 are shown symbolically in FIG. 24A using arrows, representing the passing of information from the distant friend 260a to the shared AIP 261. Interactions between the friend 260a and the AIP 261, absent direct communication with the child, may occur over one or more time periods (of any duration) that are convenient to the remote friend 260a.

At some later time (continuing the temporal sequence from left-to-right where the time-shift is represented by an ellipsis 263a), absent any direct communication with the distant friend, the child 260b begins time-shifted game play by conveying a response 265a to the AIP 261 to the initial (dark token) move already set up by the friend 260a. Child 260b interactions 265a, 265b, 265c, 265d, 265e, 265f, 265g with the shared AIP 261 are symbolically illustrated by arrows in FIGS. 24A and 24B where arrow direction represents the overall flow of information between the child 260b and the AIP 261.

A welcoming audiovisual clip 264a (i.e., a conditional action) from the distant (and now absent from direct communication) friend is displayed by the shared AIP 261 to the child 260b. Such conditional actions 264a, 264b, 264c, 264d, 264e, 264f, 264g, 264h that may be enacted by the shared AIP 261 are shown symbolically in FIGS. 24A and 24B using double arrows, indicating the prior transfer of information from the friend 260a to the shared AIP 261 and, at some later time when a condition associated with the action is met, from the shared AIP 261 to the child 260b (i.e., during the “time-shifted” game play).

The initial move by the child 260b was anticipated by the (again, absent from direct communication) distant friend 260a. Thus, this move 264b (in the form of a conditional action) is conveyed via the AIP 261 to the child 260b. The child 260b then responds with his/her move 265b and the friend conveys a pre-established message including an embedded image 264c that is responded to by the child 260b with an audiovisual snippet 265c.

Another move 264d, resulting from the game board situation that was anticipated by the distant friend 260a, is subsequently conveyed to the child 260b.

The next game move 265d by the child 260b was not anticipated by any conditional setup. This forces the shared AIP 261 to interject a response move 266a. A pre-determined skill level (e.g., appropriate for the child 260b), a target skill level for the child, matching the skill level of the distant friend, processing time, and/or availability of computing resources may individually or collectively play a role in determining the competitiveness of such game moves 266a, 266b by the shared AIP 261. The child 260b may not be aware that the source of the move 266a was the shared AIP 261 (versus the distant friend 260a). The next move 265e by the child 260b also results in a game situation in which no pre-established conditions for actions are met, forcing the shared AIP 261 to be the source of another gaming move 266b. At this time, the child takes an extended break from game play, indicated by another ellipsis 263b.
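
The interplay just described, between the friend's pre-arranged moves and the moves the shared AIP is forced to interject, might be sketched as a lookup keyed on the anticipated board position, with the AIP's own move generator as a fallback; the keys and move strings below are hypothetical.

```python
# Hypothetical sketch of the FIG. 24A flow: moves such as 264b/264d come from
# the distant friend's table of anticipated positions; when a position was not
# anticipated (moves 266a/266b), the shared AIP interjects its own move at a
# predetermined skill level.
def respond_to_board(board_key, anticipated_moves, aip_engine_move):
    if board_key in anticipated_moves:
        return anticipated_moves[board_key], "distant friend (pre-arranged)"
    return aip_engine_move(board_key), "shared AIP (interjected)"

anticipated = {"opening-1": "b6-a5", "opening-2": "d6-c5"}
print(respond_to_board("opening-1", anticipated, lambda b: "f6-e5"))
print(respond_to_board("unforeseen", anticipated, lambda b: "f6-e5"))
```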

The break allows sufficient time for updated game play to be conveyed to the distant friend (via the shared AIP 261) as illustrated in the follow-on timeline in FIG. 24B. At a convenient time, the distant friend 260a re-engages with the shared AIP 261 (i.e., absent any direct communication with the child 260b). The distant friend is updated with the audiovisual snippet 267a, previously created 265c by the child 260b and transferred via the shared AIP 261. The shared AIP 261 also updates the distant friend on the status of game play by selecting and/or sorting key (i.e., including most recent) game moves. The cluster of moves is displayed collectively (indicated as enclosed by a dashed rectangle) 267b by the shared AIP 261 for efficient (i.e., summarizing and time-saving) review by the distant friend 260a.

Continuing the left-to-right progressive timeline in FIG. 24B, the distant friend 260a decides to override an AIP-sourced game move by re-setting the game board 262e and recording an audiovisual clip explaining the override 262f. Seeing the state of the game, the friend also sets up a conditional starburst display 262g using the shared AIP in anticipation of losing the game. The distant friend then moves on to other activities, disengaging from interactions with the shared AIP, at least on topics related to game play with the child 260b. The break in time 263c allows for shared AIP interactions to be updated within all HIEs and/or HIAs, including the specifics of the time-shifted game play.

At some later time 263c, absent direct communication with the distant friend 260a, the child 260b resumes play by viewing the game move that was overridden 264e by the distant friend 260a and the accompanying audiovisual clip 264f that was previously generated 262f by the friend 260a. The child 260b responds with his/her new move 265f whereupon, seeing the end of the game as near, the musical snippet 264g previously recorded 262d by the friend 260a, along with one last move by the AIP 266c (on behalf of the distant friend), is conveyed to the child 260b. The more recently recorded (by the friend) starburst display is shown 264h to the child 260b while he/she makes a final move 265g. Recognizing that the child has won (i.e., dark playing pieces are no longer able to move forward), the AIP 261 congratulates 266d the child 260b and conveys results to the friend 260a (to be received at a later date, represented by a dashed arrow 268).

FIGS. 24A and 24B exemplify some key features of time-shifted interactions facilitated by a shared AIP. At no time during the game were the child and distant friend in direct communication with each other. Both were free to interact in time-shifted game play whenever and wherever (and using whatever devices) they wished. The AIP facilitated time-shifted exchange by: 1) performing actions upon encountering pre-arranged conditions, 2) sorting and clustering responses for efficient review, 3) stepping in to perform independent actions when needed, and 4) allowing (hopefully rarely) its actions to be overridden upon subsequent human review.

The foregoing disclosure of the exemplary embodiments has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many variations and modifications of the embodiments described herein will be apparent to one of ordinary skill in the art in light of the above disclosure. It will be appreciated that the various components and features described with the particular embodiments may be added, deleted, and/or substituted with the other embodiments, depending upon the intended use of the embodiments.

Further, in describing representative embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. As one of ordinary skill in the art would appreciate, other sequences of steps may be possible. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims.

While the invention is susceptible to various modifications and alternative forms, specific examples thereof have been shown in the drawings and are herein described in detail. It should be understood that the invention is not to be limited to the particular forms or methods disclosed, but, to the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the appended claims.