Title:
SYSTEM AND METHOD FOR GENERATING WRAP UP INFORMATION
Document Type and Number:
WIPO Patent Application WO/2023/137175
Kind Code:
A1
Abstract:
A system for generating wrap-up information is capable of learning how interactions are transformed into contact notes and outcome codes using natural language processing and can generate the contact notes and outcome codes for new incoming interactions by applying prediction models trained on interaction data, contact notes and outcome codes. The system for generating wrap-up information receives interaction data, including interaction audio data, interaction transcripts, associated contact notes and associated outcome codes. The interaction transcripts are generated from the previous interactions between agents and customers. The contact notes and outcome codes are generated by agents during the associated previous interactions. The system processes and uses the interaction data to train prediction models to analyze interaction audio data and interaction transcripts and predict appropriate contact notes and outcome codes for the interaction. Once trained the prediction model(s) can generate appropriate contact notes and outcome codes for new interactions.

Inventors:
ANDERSON GRANT (GB)
MACKIE SCOTT (GB)
ROBERTSON SEAN (US)
EADES NEIL (GB)
Application Number:
PCT/US2023/010795
Publication Date:
July 20, 2023
Filing Date:
January 13, 2023
Assignee:
VERINT AMERICAS INC (US)
International Classes:
G10L15/26; G06F16/34; G06F40/30; G06F40/35; G06F40/56; G06Q30/016
Domestic Patent References:
WO2021074798A1 (2021-04-22)
Other References:
ANKIT PATIL ET AL: "Using Natural Language Processing to Understand Reasons and Motivators Behind Customer Calls in Financial Domain", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 18 October 2021 (2021-10-18), XP091077686
CHARTE FRANCISCO ET AL: "Resampling Multilabel Datasets by Decoupling Highly Imbalanced Labels", 29 May 2015, ADVANCES IN VISUAL COMPUTING : 16TH INTERNATIONAL SYMPOSIUM, ISVC 2021, VIRTUAL EVENT, OCTOBER 4-6, 2021, PROCEEDINGS, PART II; [LECTURE NOTES IN COMPUTER SCIENCE], SPRINGER INTERNATIONAL PUBLISHING, CHAM, PAGE(S) 489 - 501, ISBN: 978-3-030-90436-4, ISSN: 0302-9743, XP047654608
MICHAŁ KOZIARSKI: "CSMOUTE: Combined Synthetic Oversampling and Undersampling Technique for Imbalanced Data Classification", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 7 April 2020 (2020-04-07), XP081639317
Attorney, Agent or Firm:
SCHERER, Christopher (US)
Claims:
CLAIMS

What is claimed is:

1. A method for generating wrap up information, comprising:
receiving historical interaction data from an interaction database;
generating training data from the historical interaction data including generating training transcripts from historical recordings in the historical interaction data;
building a vocabulary of words in the training transcripts and converting the vocabulary to integer IDs;
generating vector embeddings for the vocabulary;
converting the training transcripts and training contact notes associated with each of the training data to integer sequences based on the integer IDs for the vocabulary;
converting training outcome codes associated with each of the training transcripts to a binarized label;
training a predictive note model with the vector embeddings, the integer sequences for the training transcripts, and the integer sequences for the training contact notes;
training a predictive code model with the integer sequences for the training transcripts, the vector embeddings, and the binarized labels for the training outcome codes;
passing an integer sequence for a communication transcript to the predictive code model and the predictive note model;
generating one or more outcome codes for the communication transcript using the predictive code model; and
generating a contact note for the communication transcript using the predictive note model.

2. The method of claim 1, wherein generating vector embeddings for the vocabulary is performed by a language preprocessor, wherein the language preprocessor is a transformer deep learning model.

3. The method of claim 1, further comprising determining if the training outcome codes are imbalanced by comparing the distribution of the training outcome codes to a predetermined balance threshold.

4. The method of claim 3, further comprising resampling the training outcome codes from the historical interaction data with at least one resampling algorithm when the training outcome codes are imbalanced.

5. The method of claim 4, wherein the at least one resampling algorithm is one of a Label Powerset Random Undersampling (LP-RUS) algorithm, a Label Powerset Random Oversampling (LP-ROS) algorithm, a Multilabel Random Undersampling (ML-RUS) algorithm, or a Multilabel Random Oversampling (ML-ROS) algorithm.

6. The method of claim 1, wherein the historical interaction data is a data structure including fields for the historical recording, a historical contact note, and one or more historical outcome codes, wherein the historical outcome codes are converted to training outcome codes and the historical contact notes are converted to training contact notes.

7. The method of claim 1, wherein the predictive code model is a deep learning model or a statistical model.

8. A system for generating wrap up information, comprising:
a memory comprising computer readable instructions;
a processor configured to read the computer readable instructions that when executed causes the system to:
receive historical interaction data from an interaction database;
generate training data from the historical interaction data including generating training transcripts from historical recordings in the historical interaction data;
build a vocabulary of words in the training transcripts and converting the vocabulary to integer IDs;
generate vector embeddings for the vocabulary;
convert the training transcripts and training contact notes associated with each of the training data to integer sequences based on the integer IDs for the vocabulary;
convert training outcome codes associated with each of the training transcripts to a binarized label;
train a predictive note model with the vector embeddings, the integer sequences for the training transcripts, and the integer sequences for the training contact notes;
train a predictive code model with the integer sequences for the training transcripts, the vector embeddings, and the binarized labels for the training outcome codes;
pass an integer sequence for a communication transcript to the predictive code model and the predictive note model;
generate one or more outcome codes for the communication transcript using the predictive code model; and
generate a contact note for the communication transcript using the predictive note model.

9. The system of claim 8, wherein generating vector embeddings for the vocabulary is performed by a language preprocessor, wherein the language preprocessor is a transformer deep learning model.

10. The system of claim 8, further including computer readable instructions that further cause the system to determine if the training outcome codes are imbalanced by comparing the distribution of the training outcome codes to a predetermined balance threshold.

11. The system of claim 10, further including computer readable instructions that further cause the system to resample the training outcome codes from the historical interaction data with at least one resampling algorithm when the training outcome codes are imbalanced.

12. The system of claim 11, wherein the at least one resampling algorithm is one of a Label Powerset Random Undersampling (LP-RUS) algorithm, a Label Powerset Random Oversampling (LP-ROS) algorithm, a Multilabel Random Undersampling (ML-RUS) algorithm, or a Multilabel Random Oversampling (ML-ROS) algorithm.

13. The system of claim 8, wherein the historical interaction data is a data structure including fields for the historical recording, a historical contact note, and one or more historical outcome codes, wherein the historical outcome codes are converted to training outcome codes and the historical contact notes are converted to training contact notes.

14. The system of claim 8, wherein the predictive code model is a deep learning model or a statistical model.

15. A non-transitory computer readable medium comprising computer readable code to generate wrap up information on a system that when executed by a processor, causes the system to:
receive historical interaction data from an interaction database;
generate training data from the historical interaction data including generating training transcripts from historical recordings in the historical interaction data;
build a vocabulary of words in the training transcripts and converting the vocabulary to integer IDs;
generate vector embeddings for the vocabulary;
convert the training transcripts and training contact notes associated with each of the training data to integer sequences based on the integer IDs for the vocabulary;
convert training outcome codes associated with each of the training transcripts to a binarized label;
train a predictive note model with the vector embeddings, the integer sequences for the training transcripts, and the integer sequences for the training contact notes;
train a predictive code model with the integer sequences for the training transcripts, the vector embeddings, and the binarized labels for the training outcome codes;
pass an integer sequence for a communication transcript to the predictive code model and the predictive note model;
generate one or more outcome codes for the communication transcript using the predictive code model; and
generate a contact note for the communication transcript using the predictive note model.

16. The non-transitory computer readable medium of claim 15, wherein generating vector embeddings for the vocabulary is performed by a language preprocessor, wherein the language preprocessor is a transformer deep learning model.

17. The non-transitory computer readable medium of claim 15, further causing the system to determine if the training outcome codes are imbalanced by comparing the distribution of the training outcome codes to a predetermined balance threshold.

18. The non-transitory computer readable medium of claim 17, further causing the system to resample the training outcome codes from the historical interaction data with at least one resampling algorithm when the training outcome codes are imbalanced.

19. The non-transitory computer readable medium of claim 18, wherein the at least one resampling algorithm is one of a Label Powerset Random Undersampling (LP-RUS) algorithm, a Label Powerset Random Oversampling (LP-ROS) algorithm, a Multilabel Random Undersampling (ML-RUS) algorithm, or a Multilabel Random Oversampling (ML-ROS) algorithm.

20. The non-transitory computer readable medium of claim 15, wherein the predictive code model is a deep learning model or a statistical model.

Description:
SYSTEM AND METHOD FOR GENERATING WRAP UP INFORMATION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The present application claims priority of U.S. Provisional Patent Application No. 63/299,583, filed January 14, 2022, the content of which is incorporated herein by reference in its entirety.

FIELD

[0002] The present disclosure is directed to generating wrap up information for a customer interaction session in a customer engagement system.

BACKGROUND

[0003] In a customer engagement center (CEC), interactions take place between agents and customers looking for help with various issues. The agents may take some form of action on the customer’s behalf to assist with resolving their issue. Many CECs find it beneficial to keep a record of the outcomes of the interactions and generate notes and data pertaining to the interaction to ensure a clear understanding of the interaction and the outcomes at a later time.

[0004] In conventional approaches, CEC systems rely on agents to manually develop the notes and data pertaining to the interaction (for example, the agent manually assigns outcome codes and manually writes wrap-up notes for the interaction). The data generated for the system may include categorization details for the interaction pertaining to the type of interaction, the actions taken by the agent as a result of the interaction, etc. Further, the agent will typically be required to compose notes for the interaction, which is often time consuming and detracts from the agent’s ability to assist with other customers. Errors and inconsistencies in the contact notes and categorization due to agent entry may limit their later utility.

SUMMARY

[0005] A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by a data processing apparatus, cause the apparatus to perform the actions.

[0006] In one general aspect, according to certain embodiments a method is disclosed that includes receiving historical interaction data from an interaction database. The method includes generating training data from the historical interaction data including generating training transcripts from historical recordings in the historical interaction data. The method includes building a vocabulary of words in the training transcripts and converting the vocabulary to integer IDs. The method includes generating vector embeddings for the vocabulary. The method includes converting the training transcripts and training contact notes associated with each of the training data to integer sequences based on the integer IDs for the vocabulary. The method includes converting training outcome codes associated with each of the training transcripts to a binarized label. The method includes training a predictive note model with the vector embeddings, the integer sequences for the training transcripts, and the integer sequences for the training contact notes. The method includes training a predictive code model with the integer sequences for the training transcripts, the vector embeddings, and the binarized labels for the training outcome codes. The method includes passing an integer sequence for a communication transcript to the predictive code model and the predictive note model. The method includes generating one or more outcome codes for the communication transcript using the predictive code model. The method includes generating a contact note for the communication transcript using the predictive note model.

[0007] Other embodiments provide processing systems configured to perform the aforementioned methods as well as those described herein; non-transitory computer- readable media comprising instructions that, when executed by one or more processors of a processing system, cause the processing system to perform the aforementioned methods as well as those described herein; a computer program product embodied on a computer readable storage medium comprising code for performing the aforementioned methods as well as those further described herein; and a processing system comprising means for performing the aforementioned methods as well as those further described herein.

[0008] The objects and advantages will appear more fully from the following detailed description made in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWING(S)

[0009] Figure 1A depicts an example of a system for auto-generating wrap up information for an interaction, according to certain embodiments.

[0010] Figure 1B depicts an example of a preprocessing component used in the wrap-up system, according to certain embodiments.

[0011] Figures 2A and 2B depict a flowchart of an example of a method for auto-generating wrap-up notes for an interaction, according to certain embodiments.

[0012] Figures 3A and 3B depict a flowchart of an example of a method for auto-generating outcome codes for an interaction, according to certain embodiments.

[0013] Figures 4A and 4B depict a flowchart of an example of a method for auto-generating wrap up information for an interaction, according to certain embodiments.

[0014] Figure 5 depicts an example diagram of a computer system that may be utilized to implement the auto-generation of wrap-up information for an interaction in accordance with the disclosure, according to certain embodiments.

[0015] It should be understood that for clarity, not every part is necessarily labeled in every drawing. Lack of labeling should not be interpreted as a lack of disclosure.

DETAILED DESCRIPTION

[0016] In the present description, certain terms have been used for brevity, clearness, and understanding. No unnecessary limitations are to be applied therefrom beyond the requirement of the prior art because such terms are used for descriptive purposes only and are intended to be broadly construed. The different systems and methods described herein may be used alone or in combination with other systems and methods. Dimensions and materials identified in the drawings and applications are by way of example only and are not intended to limit the scope of the claimed invention. Any other dimensions and materials not consistent with the purpose of the present application can also be used. Various equivalents, alternatives, and modifications are possible within the scope of the appended claims. Each limitation in the appended claims is intended to invoke interpretation under 35 U.S.C. §112, sixth paragraph, only if the terms “means for” or “step for” are explicitly recited in the respective limitation.

[0017] There is an unmet need in the art for a system capable of automatedly predicting outcome codes (also referred to as labels) for labeling customer service interactions between an agent and a customer using machine learning. There is a further unmet need in the art for a system capable of analyzing the interactions between the agent and the customer, determining contact notes, training a prediction model on the determined contact notes to predict contact notes for future interactions, and continuously or intermittently improving the predicted contact notes for incoming interactions.

[0018] A system for generating wrap-up information is capable of learning how interactions are transformed into contact notes and outcome codes using natural language processing and can generate the contact notes and outcome codes for new incoming interactions by applying prediction models trained on interaction data, contact notes and outcome codes. Accordingly, the present disclosure overcomes the problems with conventional approaches of manually generating contact notes and outcome codes by teaching the system with the method to learn the link between transcribed interactions, outcome codes and wrap-up notes, and subsequently suggest appropriate outcome codes and wrap-up notes for new interactions.

[0019] The system for generating wrap-up information receives interaction data, including interaction audio data, interaction transcripts, associated contact notes and associated outcome codes. The interaction transcripts are generated from the previous interactions between agents and customers. The contact notes and outcome codes are generated by agents during the associated previous interactions. The system processes and uses the interaction data to train prediction models to analyze interaction audio data and interaction transcripts and predict appropriate contact notes and outcome codes for the interaction. Once trained the prediction model(s) can generate appropriate contact notes and outcome codes for new interactions. Contact notes may include data such as, but not limited to, a summarization of the interaction, sentiment data for the parties in the interaction (e.g., voice tone and attitude), or a description of subsequent actions to be taken. Outcome codes include predetermined tags or labels that categorize the interaction. By way of non-limiting example, an interaction in which a contact utters obscenities and threats may have the wrap-up note “Call ended due to obscene language and threats of physical harm” and the outcome codes “Call ended by agent,” “Call included threatening language,” and “Call included obscene language.”

[0020] Figure 1A depicts an example of a wrap-up system 100 for auto-generating wrap up information for a customer interaction, according to certain embodiments. Figure 1B depicts an example of an interaction processing component 120 used in the wrap-up system 100, according to certain embodiments.

[0021] In an embodiment, the wrap-up system 100 may be part of a customer service center system (not shown) or may be a separate component integrated with the customer service center system or any other company system that stores interaction data from interactions between customers and customer service agents. The wrap-up system 100 interacts with an interaction database 102 to receive historical interaction data 104 to train predictive note model(s) and predictive code model(s). The wrap-up system 100 includes an interaction processing component 120 to generate training interaction data which includes training transcripts 110, training contact notes 111, and training outcome codes 112, a predictive note model component 170 to train and update predictive note model(s) for generating contact notes for new interactions, a predictive code model component 180 to train and update predictive code model(s) for generating outcome codes for new interactions, and a wrap-up assignment component 195 for presenting generated outcome codes and contact notes to users for approval. Each of these components will be described in greater detail below. Users may interact with the wrap-up system 100. The wrap-up system 100 optionally includes one or more user devices 160 useable by users for interaction with the wrap-up system 100 and for viewing generated contact notes and outcome codes for new interactions. In an embodiment, the wrap-up system 100 may be a processor(s) or a combination of a processing system and a storage system with a software component and optional storage.

[0022] In an example as described in further detail herein, historical interaction data 104 may comprise a plurality of historical recordings 105 of customer service interactions, such as the text of web chats or messaging, or audio data from telephone calls. The historical interaction data 104 may also include a plurality of historical contact notes 106 and a plurality of historical outcome codes 107 generated by agents and associated with the historical recordings 105. The customer service interactions, whether text- or audio-based, may occur and be recorded in parallel such that multiple customer service interactions are ongoing at any one time, such as at a CEC or multiple independent and decentralized agents acting at the same time. In an embodiment, each historical interaction data 104 includes a historical recording 105 of the interaction, one or more historical contact note 106 for the interaction, and one or more historical outcome code 107 for the interaction. In an embodiment, each historical interaction data 104 may be represented as a data structure including fields for all associated information and data associated with the historical interaction data. In an embodiment, each historical interaction data 104 may be represented as an object, including attributes for all associated information and data associated with the historical interaction data. It should be understood that these are merely examples and that any appropriate structure for associating the data for the historical interaction data 104 may be used.
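
For illustration only, one way such an associated data structure could be represented in code; the class and field names below are hypothetical and are not taken from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class HistoricalInteraction:
    """One item of historical interaction data 104 and its associated wrap-up data (illustrative)."""
    interaction_id: str
    recording: Union[bytes, str]          # audio bytes, or raw chat/messaging text for text channels
    contact_notes: List[str] = field(default_factory=list)   # agent-written historical contact notes
    outcome_codes: List[str] = field(default_factory=list)   # agent-assigned historical outcome codes
    transcript: Optional[str] = None      # filled in later by the transcription component
```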

[0023] In an embodiment, the historical recordings 105 are the recorded interactions between customers and customer service agents. The historical recordings 105 may be in the form of audio data or textual data depending on the format of the interaction. In an embodiment, the historical contact notes 106 are textual data associated with the historical recordings 105 that summarize or describe the interaction and/or summarize or describe actions to be taken based on the interaction. In an embodiment, the historical outcome codes 107 are tags or labels categorizing the interaction and/or categorizing actions to be taken based on the interaction. It should be understood that, while historical interaction data 104 is described as including several types of data (e.g., historical recordings 105, historical contact notes 106, and historical outcome codes 107), each historical interaction data and its data parts are associated with a specific interaction and remain associated with the specific interaction and each other throughout processing. It should be further understood that this applies to later described training data including training transcripts 110, training contact notes 111, and training outcome codes 112.

[0024] The wrap-up system 100 includes an interaction processing component 120 to receive historical interaction data 104 from the interaction database 102. The interaction processing component 120 performs multiple preprocessing procedures (described in further detail herein) on the historical interaction data 104 using various components to create training interaction data. The training interaction data includes a plurality of training transcripts 110, a plurality of training contact notes 111, and a plurality of training outcome codes 112. In an embodiment, the interaction processing component 120 receives all historical interaction data 104 available. In an embodiment, the wrap-up system 100 and/or a user may request and/or query the interaction database 102 for the desired historical interaction data 104. The request and/or query may be based on a number of factors, including, but not limited to, time periods, system outcomes, previous outcomes, deleted outcomes, etc.

[0025] The interaction processing component 120 may also process communication interaction data, specifically communication recordings 113 and communication transcripts 114, to allow the system to generate contact notes and outcome codes for the communication interaction data. In an embodiment, communication interaction data is data from interactions between customers and customer service agents for which the wrap-up system will generate predicted contact notes and outcome codes. In a non-limiting example, the communication interaction data may include new interactions that do not have associated outcome codes or contact notes. In another non-limiting example, the communication interaction data may include historical interactions or other interactions that already have associated contact notes and outcome codes, and the predicted contact notes and outcome codes may be used to test the accuracy of the training of the predictive note model(s) and the predictive code model(s).

[0026] The interaction processing component 120 extracts training interaction data from the historical interaction data 104 using a transcription component 121. The transcription component 121 may perform segmentation of the historical recordings 105, followed by a large vocabulary continuous speech recognition (LVCSR) transcription to create the training transcripts 110. In certain embodiments, the transcription component 121 may segment the historical recordings 105 into a series of utterances, which are segments of audio data that are likely to be speech separated by segments of audio data that are likely to be non-speech segments such as silence or non-speech noise that exceed a threshold amount of time, for example, 3 seconds. In certain embodiments, the LVCSR may be specialized for words, phrases, and terms in a specific industry, technical, or scientific field, specific to a language or a dialect, or expected in the historical interaction data 104 to ensure that the proper terms and interpretations are prioritized. The transcription component 121 may also extract communication transcripts 114 from communication recordings 113 using the above process.
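
A minimal illustration of silence-based utterance segmentation as described above; the patent does not prescribe a particular algorithm, so the frame-energy heuristic, 16 kHz sample rate, and threshold values below are assumptions for illustration only:

```python
import numpy as np

def segment_utterances(samples: np.ndarray, sample_rate: int = 16000,
                       frame_ms: int = 30, energy_threshold: float = 1e-4,
                       min_gap_s: float = 3.0):
    """Split audio into utterances separated by non-speech gaps longer than min_gap_s seconds."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    is_speech = (frames.astype(np.float64) ** 2).mean(axis=1) > energy_threshold

    min_gap_frames = int(min_gap_s * 1000 / frame_ms)
    utterances, start, silent_run = [], None, 0
    for i, speech in enumerate(is_speech):
        if speech:
            if start is None:
                start = i
            silent_run = 0
        elif start is not None:
            silent_run += 1
            if silent_run >= min_gap_frames:
                # Close the utterance at the last speech frame before the long gap.
                utterances.append(samples[start * frame_len:(i - silent_run + 1) * frame_len])
                start, silent_run = None, 0
    if start is not None:
        utterances.append(samples[start * frame_len:])
    return utterances  # each utterance is then passed to the LVCSR engine
```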

[0027] The interaction processing component 120 may normalize the historical recordings 105, historical contact notes 106, and training transcripts 110 using a normalizing component 122. The normalizing component 122 removes punctuation and markups to render the training transcripts 110 and training contact notes 111 in a standard format. In one embodiment, the normalization includes: removing markups, removing system messages, converting to lowercase, removing participant names, expanding contractions, removing punctuation and whitespace, removing stop words (from historical recordings 105 and training transcripts 110), setting an individual maximum length for both training transcripts 110 and training contact notes 111 based on the maximum length of existing training transcripts 110 and training contact notes 111, and appending special indicators to training contact notes 111 indicating the start and end of each training contact note 111. The normalizing component 122 may also normalize communication transcripts 114 using the above process.
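
A minimal sketch of the normalization steps listed above; the contraction map, stop-word list, length caps, and start/end tokens are illustrative placeholders, not values taken from the disclosure:

```python
import re

# Illustrative resources; a production system would use fuller lists.
CONTRACTIONS = {"can't": "cannot", "won't": "will not", "i'm": "i am", "it's": "it is"}
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of"}
START_TOKEN, END_TOKEN = "<start>", "<end>"   # special indicators for contact notes

def normalize_text(text: str, max_len: int, remove_stops: bool) -> str:
    text = re.sub(r"<[^>]+>", " ", text)             # remove markups / system messages
    text = text.lower()                              # convert to lowercase
    for contraction, expansion in CONTRACTIONS.items():
        text = text.replace(contraction, expansion)  # expand contractions
    text = re.sub(r"[^\w\s]", " ", text)             # remove punctuation
    tokens = text.split()                            # also collapses extra whitespace
    if remove_stops:                                 # stop words removed from transcripts only
        tokens = [t for t in tokens if t not in STOP_WORDS]
    return " ".join(tokens[:max_len])                # cap at the configured maximum length

def normalize_transcript(text: str, max_len: int = 400) -> str:
    return normalize_text(text, max_len, remove_stops=True)

def normalize_contact_note(text: str, max_len: int = 100) -> str:
    # Contact notes keep stop words and get start/end indicators appended.
    return f"{START_TOKEN} {normalize_text(text, max_len, remove_stops=False)} {END_TOKEN}"
```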

[0028] The interaction processing component 120 may utilize a division component 123 to split off testing interaction data 117 from the training data, including a subset of the training transcripts 110, training contact notes 111, and training outcome codes 112 from the training interaction data. The division component 123 may select the testing interaction data 117 randomly, or from a predetermined percentage or amount of training data including the training transcripts 110, training contact notes 111, and training outcome codes 112. The testing interaction data 117 may be used to test trained models with “new” interaction data that has contact notes and outcome codes already associated with the “new” interaction data. In embodiments involving splitting off outcome codes where each sample may have more than one label, keeping the same outcome code distribution for the testing data and the training data is desired. In those embodiments, the division component 123 may use a splitting algorithm intended to maintain the distribution throughout the data sets. One such appropriate algorithm is the iterative_train_test_split function from scikit-multilearn (a sketch follows paragraph [0029]). This is merely one example algorithm and should not be considered limiting. In embodiments without multi-label samples, traditional split algorithms may be used.

[0029] The interaction processing component 120 may utilize a clustering component 124 to assign new training outcome codes 112 to the historical recordings 105. Clustering the historical data will identify historical outcome codes 107 and examine performance across the historical outcome codes 107. In certain embodiments, the clustering component 124 utilizes a k-means clustering algorithm that may be an unsupervised, semi-supervised, or supervised k-means technique. Other embodiments may use other unsupervised, semi-supervised, or supervised algorithms such as, but not limited to, density-based spatial clustering of applications with noise (DBSCAN) or hierarchical clustering. In embodiments that do not use clustering, historical outcome codes 107 will simply be used as training outcome codes 112.
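
As referenced in paragraph [0028], a minimal sketch of a label-distribution-preserving split using scikit-multilearn's iterative_train_test_split; the toy arrays and the 20% test size are assumptions for illustration:

```python
import numpy as np
from skmultilearn.model_selection import iterative_train_test_split

# X: one row per interaction (e.g., integer sequences padded to equal length).
# y: binarized outcome-code labels, one column per outcome code.
X = np.array([[12, 7, 99, 0], [4, 55, 3, 8], [31, 2, 0, 0], [7, 7, 21, 5]])
y = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1]])

# The iteratively stratified split keeps the outcome-code distribution similar
# between the training portion and the held-out testing interaction data.
X_train, y_train, X_test, y_test = iterative_train_test_split(X, y, test_size=0.2)
```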

[0030] The interaction processing component 120 may utilize a vocabulary component 125 to build a vocabulary 126 from both the training transcripts 110 and the training contact notes 111. The vocabulary component 125 builds the vocabulary 126 to cover the words used in the training transcripts 110 and training contact notes 111. In certain embodiments, the vocabulary component 125 is a tokenizer. One such applicable tokenizer uses TensorFlow and Keras. It should be understood that this is merely an example of a tokenizer and should not be considered limiting. In an embodiment, the vocabulary 126 has a maximum size which may be determined by the tokenizer used or may be user determined.

[0031] The interaction processing component 120 may utilize an integer conversion component 127 to convert the training transcripts 110 and the training contact notes 111 to integer sequences 128. Each word in the vocabulary 126 is assigned an integer ID 129 using the vocabulary component 125. The integer conversion component 127 uses the integer IDs 129 to convert the text of both the training transcripts 110 and the training contact notes 111 to integer sequences 128, sequences which can be processed by a model. The integer conversion component 127 may also convert communication transcripts 114 using the above process. In an embodiment, the integer conversion component 127 may use the tokenizer described above for the integer conversion. Further, it should be understood that the integer conversion may occur in conjunction with the vocabulary 126 being built.
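
A minimal sketch covering vocabulary building (paragraph [0030]) and integer conversion (paragraph [0031]) with the Keras Tokenizer; the vocabulary size, out-of-vocabulary token, padding length, and example strings are illustrative choices, not values from the disclosure:

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

VOCAB_SIZE = 20000          # assumed maximum vocabulary size
MAX_TRANSCRIPT_LEN = 400    # assumed maximum transcript length

transcripts = ["customer called about a billing error on the last invoice"]
contact_notes = ["<start> billing error corrected refund issued <end>"]

# Build one shared vocabulary over transcripts and contact notes and assign integer IDs.
tokenizer = Tokenizer(num_words=VOCAB_SIZE, oov_token="<unk>", filters="")
tokenizer.fit_on_texts(transcripts + contact_notes)
integer_ids = tokenizer.word_index          # word -> integer ID mapping

# Convert text to integer sequences; pad transcripts to a fixed length for modeling.
transcript_seqs = pad_sequences(tokenizer.texts_to_sequences(transcripts),
                                maxlen=MAX_TRANSCRIPT_LEN, padding="post")
note_seqs = tokenizer.texts_to_sequences(contact_notes)
```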

[0032] The interaction processing component 120 may utilize a binarizing component 130 to convert the training outcome codes 112 (for model training) to binarized labels 131, binary representations which can be processed by a model. In certain embodiments, the binarizing component 130 is a multilabel binarizer. One such applicable multilabel binarizer is MultiLabelBinarizer from scikit-learn. It should be understood that this is merely an example of a multilabel binarizer and should not be considered limiting.
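
A minimal sketch of the binarizing step using scikit-learn's MultiLabelBinarizer; the outcome-code strings are made up for illustration:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Each training interaction may carry one or more outcome codes (multi-label).
training_outcome_codes = [
    ["call_ended_by_agent", "obscene_language"],
    ["billing_issue"],
    ["billing_issue", "refund_issued"],
]

binarizer = MultiLabelBinarizer()
binarized_labels = binarizer.fit_transform(training_outcome_codes)
# binarizer.classes_ gives the column order; each row is a 0/1 vector per interaction,
# which is the form the predictive code model is trained on.
```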

[0033] The interaction processing component 120 may utilize a parameter tuning component 132 to find the best set of parameters from the training interaction data. A different model may have different parameters, which should be tuned to provide optimum model training. The parameter tuning component 132 runs a parameter tuning algorithm to select a target set of parameters 133 and data from the training transcripts 110, training contact notes 111, and training outcome codes 112. As indicated, different models may be optimized based on different parameters. In embodiments, the parameter tuning algorithm for the predictive code model may be the Hyperband parameter tuning algorithm provided in the kerastuner package. The parameters for the Hyperband parameter tuning algorithm may be set by the user.
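
A minimal sketch of Hyperband tuning with the keras-tuner package (imported as keras_tuner in current releases); the model-building function, search space, objective, and assumed vocabulary/label sizes are illustrative, not prescribed by the disclosure:

```python
import keras_tuner as kt
import tensorflow as tf

VOCAB_SIZE = 20000   # assumed tokenizer vocabulary size
NUM_CODES = 32       # assumed number of distinct outcome codes

def build_code_model(hp):
    """Build one candidate predictive code model; the search space is an illustrative assumption."""
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, hp.Choice("embedding_dim", [128, 256])),
        tf.keras.layers.Bidirectional(
            tf.keras.layers.LSTM(hp.Int("lstm_units", min_value=64, max_value=256, step=64))),
        tf.keras.layers.Dense(NUM_CODES, activation="sigmoid"),  # one sigmoid per outcome code
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(hp.Float("lr", 1e-4, 1e-2, sampling="log")),
        loss="binary_crossentropy", metrics=["binary_accuracy"])
    return model

tuner = kt.Hyperband(build_code_model, objective="val_binary_accuracy",
                     max_epochs=20, factor=3, project_name="outcome_code_tuning")
# tuner.search(transcript_seqs, binarized_labels, validation_split=0.2)
# target_parameters = tuner.get_best_hyperparameters(num_trials=1)[0]
```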

[0034] The interaction processing component 120 may utilize an imbalance component 134 to determine if the training data is imbalanced. Based on the distribution of the historical outcome codes 107, the imbalance component 134 calculates a mean imbalance ratio, an imbalance ratio per outcome code, and a coefficient of variation of the imbalance ratio per outcome code. If these values are above a specific threshold, the interaction processing component 120 subjects the historical outcome codes 107 to resampling to balance the distribution of outcome codes.
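
A minimal sketch of the imbalance measures described above, computed on the binarized label matrix from the MultiLabelBinarizer sketch. The formulas follow the common multilabel imbalance definitions (IRLbl, MeanIR, CVIR), which is an assumption about the intended calculation; the example thresholds come from paragraph [0078]:

```python
import numpy as np

def imbalance_metrics(binarized_labels: np.ndarray):
    """Return the per-label imbalance ratios, their mean, and their coefficient of variation."""
    counts = binarized_labels.sum(axis=0).astype(float)      # occurrences of each outcome code
    ir_per_label = counts.max() / np.clip(counts, 1, None)   # IRLbl: rarer codes get larger ratios
    mean_ir = ir_per_label.mean()                            # mean imbalance ratio
    cv_ir = ir_per_label.std() / mean_ir                     # coefficient of variation of IRLbl
    return ir_per_label, mean_ir, cv_ir

ir_per_label, mean_ir, cv_ir = imbalance_metrics(binarized_labels)
# Example thresholds from paragraph [0078]: resample when MeanIR > 1.5 and CVIR > 0.2.
needs_resampling = mean_ir > 1.5 and cv_ir > 0.2
```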

[0035] The interaction processing component 120 may utilize a resampling component 135 to resample imbalanced data. The resampling component 135 utilizes at least one resampling algorithm to create equality across various sets of historical outcome codes 107. In various embodiments, the resampling algorithm may be a Label Powerset Random Undersampling (LP-RUS) algorithm, a Label Powerset Random Oversampling (LP-ROS) algorithm, a Multilabel Random Undersampling (ML-RUS) algorithm, a Multilabel Random Oversampling (ML-ROS) algorithm, or any combination thereof (a simplified ML-ROS sketch follows paragraph [0036]). The RUS and ROS algorithms resample by considering the historical outcome codes 107 of the dataset, with RUS algorithms eliminating instances that appear more frequently in the outcome code sets and ROS algorithms replicating instances that appear less frequently in the outcome code sets. The resampled historical outcome codes 107 generate resampled data 137, which may then be passed for further processing as training outcome codes 112.

[0036] The interaction processing component 120 may utilize a message component 138 to generate a message list 139. The message component 138 creates the message list 139 from multiple portions of the communication transcript 114. The message list 139 is utilized by the predictive code model to generate the outcome codes.
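
As referenced in paragraph [0035], a simplified sketch of Multilabel Random Oversampling (ML-ROS): it clones random instances carrying minority outcome codes (those whose imbalance ratio exceeds the mean) up to a fixed clone budget. The budget percentage and the reuse of the imbalance_metrics helper above are assumptions; full implementations recompute the ratios as clones are added:

```python
import numpy as np

def ml_ros(X: np.ndarray, y: np.ndarray, clone_pct: float = 0.10, seed: int = 0):
    """Clone random samples that carry minority outcome codes (simplified ML-ROS)."""
    rng = np.random.default_rng(seed)
    ir_per_label, mean_ir, _ = imbalance_metrics(y)   # helper from the sketch after paragraph [0034]
    minority_labels = np.where(ir_per_label > mean_ir)[0]
    if minority_labels.size == 0:
        return X, y

    budget = int(len(X) * clone_pct)                  # total number of cloned samples to add
    clones_X, clones_y = [], []
    for _ in range(budget):
        label = rng.choice(minority_labels)           # pick a minority outcome code
        candidates = np.where(y[:, label] == 1)[0]    # samples carrying that code
        if candidates.size:
            idx = rng.choice(candidates)              # replicate one of them
            clones_X.append(X[idx])
            clones_y.append(y[idx])

    if not clones_X:
        return X, y
    return np.vstack([X, clones_X]), np.vstack([y, clones_y])
```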

[0037] The interaction processing component 120 may utilize a language preprocessor 150 to encode the vocabulary 126. The language preprocessor 150 transforms the vocabulary 126 into a plurality of vector embeddings 151. In an embodiment, the language preprocessor 150 is a transformer deep learning model including, but not limited to, autoencoding language models, autoregressive language models, and encoder-decoder language models. In an embodiment, the language preprocessor 150 is one of Generative Pre-trained Transformer 2 (GPT-2), Generative Pre-trained Transformer 3 (GPT-3), Bidirectional Encoder Representations from Transformers (BERT), XLNet, ELECTRA, ALBERT, DistilBERT, and RoBERTa. In another embodiment, the language preprocessor 150 may be BERT-as-service, run either locally or remotely. In another embodiment, the language preprocessor 150 may be BERT-as-service run on a graphics processing unit (GPU) capable of running a TensorFlow open-source software library.
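
A minimal sketch of generating vector embeddings for the vocabulary with a pre-trained BERT model; it uses the Hugging Face transformers library rather than BERT-as-service (a substitution for illustration) and averages subword states to get one vector per vocabulary word:

```python
import torch
from transformers import AutoModel, AutoTokenizer

bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert_model = AutoModel.from_pretrained("bert-base-uncased")
bert_model.eval()

def embed_vocabulary(words):
    """Return a {word: 768-dim vector} map by encoding each vocabulary word with BERT."""
    embeddings = {}
    with torch.no_grad():
        for word in words:
            inputs = bert_tokenizer(word, return_tensors="pt")
            hidden = bert_model(**inputs).last_hidden_state[0]   # (tokens, 768)
            embeddings[word] = hidden[1:-1].mean(dim=0)          # average subwords, drop [CLS]/[SEP]
    return embeddings

vector_embeddings = embed_vocabulary(["billing", "refund", "invoice"])
```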

[0038] It should be understood that the encoding of the domain language of the CEC system is optional and that a pre-trained language preprocessor 150 may be used to generate the vector embeddings. However, the fine-tuning of a pre-trained language preprocessor 150 on the specific domain will result in a more accurate prediction of contact notes and applicable outcome codes. In embodiments using a pre-trained language preprocessor 150, the vocabulary 126 does not need to be encoded prior to predicting outcome codes and contact notes.

[0039] The wrap-up system 100 further includes a predictive note model component 170, which receives the integer sequences 128 and vector embeddings 151. The predictive note model component 170 receives integer sequences 128 for training interaction data including training transcripts 110 and training contact notes 111 to train a predictive note model to predict appropriate contact notes for interactions. Once trained with training interaction data, the predictive note model component receives integer sequences 128 for transcripts from interactions to generate contact notes for the interactions. In an embodiment, the predictive note model component 170 receives integer sequences 128 for communication transcripts 114 for predicting contact notes. The trained predictive note model can generate contact notes for a communication transcript 114. In one embodiment, the predictive note model component 170 receives integer sequences 128 from transcripts for testing interaction data 117 to predict contact notes. The trained predictive note models can generate contact notes for the transcripts of the testing interaction data 117.

[0040] In one embodiment, the predictive note model is a sequence-to-sequence model which may include encoder-decoder models with attention. In various embodiments, the sequence-to-sequence model may include unsupervised models (e.g., n-gram embeddings, skip-thought vectors, Word Mover’s Embedding, SBERT) and supervised models (e.g., Generative Pre-trained Transformer, Deep Semantic Similarity Model, Universal Sentence Encoder). In an embodiment, the sequence-to-sequence model may be a BERT text summarization with pre-trained encoder (also known as a pre-trained BERTSUM or PreSumm). It should be understood that, dependent on the model used, some training parameters may be set by the user, such as, but not limited to, the maximum number of epochs, the performance metrics to monitor, and the number of epochs to wait before stopping where no performance improvements are made.
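
A minimal sketch of one of the options above, an LSTM encoder-decoder with attention built in Keras and trained with teacher forcing on the transcript and contact-note integer sequences; the vocabulary size, embedding dimension, and unit counts are assumptions, and this is not the BERTSUM variant:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers

VOCAB_SIZE = 20000   # assumed tokenizer vocabulary size
EMB_DIM = 768        # assumed embedding dimension (e.g., BERT base)
UNITS = 256          # assumed LSTM width
embedding_matrix = np.zeros((VOCAB_SIZE, EMB_DIM))  # filled from the vector embeddings 151

def pretrained_embedding():
    return layers.Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True, trainable=False,
                            embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix))

# Encoder over the transcript integer sequence.
enc_inputs = layers.Input(shape=(None,), name="transcript_ids")
enc_seq, enc_h, enc_c = layers.LSTM(UNITS, return_sequences=True,
                                    return_state=True)(pretrained_embedding()(enc_inputs))

# Decoder over the contact-note integer sequence (teacher forcing: note shifted right at train time).
dec_inputs = layers.Input(shape=(None,), name="note_ids_in")
dec_seq, _, _ = layers.LSTM(UNITS, return_sequences=True, return_state=True)(
    pretrained_embedding()(dec_inputs), initial_state=[enc_h, enc_c])

# Dot-product attention of decoder states over encoder states, then a softmax over the vocabulary.
context = layers.Attention()([dec_seq, enc_seq])
note_probs = layers.Dense(VOCAB_SIZE, activation="softmax")(layers.Concatenate()([dec_seq, context]))

note_model = Model([enc_inputs, dec_inputs], note_probs)
note_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# note_model.fit([transcript_seqs, note_in_seqs], note_out_seqs, epochs=10)
```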

[0041] The wrap-up system 100 further includes a predictive code model component 180, which receives the integer sequences 128, vector embeddings 151, and binarized labels 131. The predictive code model component 180 receives integer sequences 128 for training interaction data including training transcripts 110 and training outcome codes 112 to train a predictive code model to predict outcome codes for interactions. Once trained, the predictive code model component 180 receives integer sequences 128 for transcripts from interactions to generate outcome codes for the interactions. In an embodiment, the predictive code model component 180 receives integer sequences 128 for communication transcripts 114 to predict outcome codes. The trained predictive code model can generate outcome codes based on the message list 139 for a communication transcript 114. In an embodiment, the predictive code model component 180 receives integer sequences 128 for transcripts from testing interaction data 117 to predict outcome codes. The trained predictive code model(s) can generate outcome codes based on the message list 139 for the transcripts of the testing interaction data 117.

[0042] In one embodiment, the predictive code model is a deep learning model or a statistical model. In various embodiments, the deep learning model is a Recurrent Neural Network (RNN) based model or a Transformer based model. In an embodiment, the statistical model utilizes a Multi-Label K-Nearest Neighbor (ML-KNN) algorithm. It should be understood that, dependent on the model used, some training parameters may be set by the user, such as, but not limited to, the maximum number of epochs, the performance metrics to monitor, and the number of epochs to wait before stopping where no performance improvements are made.
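
A minimal sketch of the statistical ML-KNN option using scikit-multilearn; feeding the padded integer sequences directly as features and k=10 are simplifying assumptions (a real system would more likely use embedding-based features):

```python
from skmultilearn.adapt import MLkNN

# X_train: numeric features per interaction (e.g., padded integer sequences or pooled embeddings).
# y_train: binarized outcome-code labels from the MultiLabelBinarizer sketch above.
code_model = MLkNN(k=10)
code_model.fit(X_train, y_train)

# Predict outcome codes for held-out or new transcripts; the result is a sparse 0/1 matrix that
# can be mapped back to code names with binarizer.inverse_transform(predicted_codes.toarray()).
predicted_codes = code_model.predict(X_test)
```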

[0043] The wrap-up system includes a wrap-up assignment component 195 for assigning and/or presenting generated contact notes and generated outcome codes to communication data including communication recordings 113. The wrap-up assignment component 195 receives communication transcripts 114 from the predictive note model component 170 along with the generated contact notes and corresponding communication recordings 113 from the interaction processing component 120. The wrap-up assignment component 195 also receives the same communication transcripts 114 from the predictive code model component 180 along with the generated outcome codes for the communication transcripts. In an embodiment, the wrap-up assignment component 195 automatically assigns the generated contact notes and generated outcome codes to the communication recording 113 corresponding to the communication transcript 114 for which the outcome codes and contact notes were generated. The wrap-up assignment component may then display the contact data including the communication transcript 114, assigned contact notes, and assigned outcome codes on a user device 160 for review and/or store the contact data including the communication recording 113, the communication transcript 114, assigned contact notes, and assigned outcome codes in a wrap-up database 162 for later review and analysis.

[0044] In an embodiment, rather than automatedly assigning the generated contact notes and outcome codes to the communication recording 113, the wrap-up assignment component 195 displays the suggested communication data including the communication transcript 114, the generated contact notes, and generated outcome codes on a user device 160 for approval and/or editing by a customer service agent. In this embodiment, the wrap-up assignment component 195 provides options to the customer service agent to accept the generated outcome codes and/or wrap-up notes or reject the generated outcome codes and/or wrap-up notes. If the customer service agent accepts the outcome codes and/or wrap-up notes, the wrap-up assignment component 195 assigns them to the communication recording as described above. If the customer service agent rejects the contact notes and/or outcome codes, the wrap-up assignment component 195 allows the customer service agent to edit, change, and modify the generated contact notes and/or outcome codes prior to assigning them to the communication recording 113 as described above.

[0045] In an embodiment, the wrap-up assignment component 195 can trigger retraining of the predictive code model(s) and/or retraining of the predictive note model(s). In an embodiment, if the customer service agents reject generated outcome codes above a threshold amount (e.g., 50% rejection, 60% rejection, etc.), a retraining of the predictive code model(s) is triggered using updated historical interaction data 104 for training. In an embodiment, if the customer service agents reject generated contact notes above a threshold amount (e.g., 50% rejection, 60% rejection, etc.), a retraining of the predictive note model(s) is triggered using updated historical interaction data 104 for training.

[0046] In an embodiment, the wrap-up assignment component 195 receives testing interaction data 117 from the interaction processing component 120 and contact notes and outcome codes generated by the predictive note model component 170 and the predictive code model component 180 corresponding to the testing interaction data. The wrap-up assignment component 195 compares the generated contact notes for each testing interaction data 117 to the historical contact notes 106 already associated with the testing interaction data 117 to score the accuracy of the predictive note model. If the accuracy of the predictive note model is under an accuracy threshold (e.g., 75% accurate, 85% accurate, etc.), a retraining of the predictive note model is triggered using updated historical interaction data 104 for training. The wrap-up assignment component 195 also compares the generated outcome codes for each testing interaction data 117 to the historical outcome codes 107 already associated with the testing interaction data 117 to score the accuracy of the predictive code model. If the accuracy of the predictive code model is under an accuracy threshold (e.g., 75% accurate, 85% accurate, etc.), a retraining of the predictive code model is triggered using updated historical interaction data 104 for training.
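
A minimal sketch of the accuracy check against the held-out testing interaction data; exact-match scoring and the 75% threshold are illustrative simplifications (a real system would likely use per-label or summary-quality metrics):

```python
def accuracy(predicted, historical) -> float:
    """Fraction of testing interactions whose prediction matches the agent-entered value."""
    matches = sum(1 for p, h in zip(predicted, historical) if p == h)
    return matches / max(len(historical), 1)

ACCURACY_THRESHOLD = 0.75   # e.g., 75% as described above

def needs_retraining(predicted, historical) -> bool:
    # Trigger retraining on updated historical interaction data when accuracy falls short.
    return accuracy(predicted, historical) < ACCURACY_THRESHOLD
```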

[0047] It should further be understood that the predictive code model(s) and predictive note model(s) may be periodically or continuously retrained or updated based on the receipt of new historical interaction data 104 regardless of accuracy threshold scores or threshold amount scores determined by the wrap-up assignment component 195.

[0048] The historical interaction data 104, historical recordings 105, historical contact notes 106, historical outcome codes 107, training transcripts 110, training contact notes 111, training outcome codes 112, contact data, communication recordings 113, communication transcripts 114, testing interaction data 117, vocabulary 126, integer sequences 128, integer IDs 129, binarized labels 131, target set of parameters 133, baseline models 136, resampled data 137, message list 139, vector embeddings 151, predictive note model(s), predictive code model(s), generated contact notes, generated outcome codes, threshold scores, and/or threshold accuracy scores may be stored in a storage component 190 for later use.

[0049] Figures 2A and 2B depict an example of a method 200 for auto-generating contact notes for an interaction. Blocks 202 through 216 form the data preprocessing blocks. Blocks 218 through 222 form the training blocks. Blocks 224 through 234 form the model application blocks. The numbering and sequencing of the blocks are for reference only; blocks or sequences of blocks may be performed out of order or repeated.

[0050] In block 202, the wrap-up system 100 launches the interaction processing component 120. The interaction processing component 120 is launched before or simultaneously with when historical interaction data 104 is received.

[0051] In block 204, the wrap-up system 100 receives historical interaction data 104 from an interaction database 102 or in real-time from the customer engagement center system at the interaction processing component 120 and generates training interaction data for the historical interaction data 104 including training transcripts 110 and training contact notes 111.

[0052] In block 206, the interaction processing component uses a transcription component 121 and generates training transcripts 110 from the historical recordings 105 associated with the historical interaction data 104. In certain embodiments, the transcription component 121 may segment the historical recordings 105 into a series of utterances, which are segments of audio data that are likely to be speech separated by segments of audio data that are likely to be non-speech segments such as silence or non-speech noise that exceed a threshold amount of time, for example, 3 seconds. In certain embodiments, the LVCSR may be specialized for words, phrases, and terms in a specific industry, technical, or scientific field, specific to a language or a dialect, or expected in the historical interaction data 104 to ensure that the proper terms and interpretations are prioritized. In an embodiment, the generated training transcripts 110 may be generated from historical interaction data 104 received in real-time or from historical interaction data 104 from a certain timeframe, locale, or other desired parameter or parameters.

[0053] In block 208, the interaction processing component 120 normalizes the training transcripts 110 and training contact notes 111 associated with the historical contact notes 106 of the historical interaction data 104 using a normalizing component 122. In an embodiment, the normalizing component 122 removes punctuation and markups in the training transcripts 110 and training contact notes 111 to create a standard format.

[0054] In optional block 210, the interaction processing component 120, using a division component 123, splits off testing interaction data 117 from the training interaction data. In an embodiment, to ensure the trained model(s) appropriately generalizes to unseen cases, the interaction processing component splits off testing interaction data 117, including testing transcripts and testing contact notes from the training interaction data, including training transcripts 110 and training contact notes 111. The model will not be given the testing interaction data 117 during training. Instead, the testing interaction data 117 can be used later to evaluate the performance of the trained model.

[0055] In block 212, the interaction processing component 120, using a vocabulary component 125, builds a vocabulary 126 from both the training transcripts 110 and the training contact notes 111. In certain embodiments, the vocabulary component 125 uses a tokenizer to build the vocabulary to cover the most used words in the training transcripts 110 and the training contact notes 111.

[0056] In block 214, the interaction processing component 120 uses a language preprocessor 150 to encode and obtain vector embeddings 151 for the vocabulary 126. The language preprocessor transforms the vocabulary 126 into vector embeddings 151. In an embodiment, the language preprocessor may be a Bidirectional Encoder Representations from Transformers (BERT). In another embodiment, the language preprocessor may be BERT-as-service, run either locally or remotely. In another embodiment, the BERT-as-service may be run on a graphics processing unit (GPU) capable of running the TensorFlow open-source software library.

[0057] In block 216, the interaction processing component 120, using an integer conversion component 127, converts the training transcripts 110 and the training contact notes 111 to integer sequences 128. Each word in the vocabulary 126 is assigned an integer ID 129 and the integer conversion component 127 converts the text of both the training transcripts 110 and the training contact notes 111 to integer sequences 128 using these integer IDs 129. The integer sequences 128 are used in subsequent modeling.

[0058] In block 218, the wrap-up system 100 builds a predictive note model using the predictive note model component 170. In one embodiment, the predictive note model is a sequence-to-sequence model. In various embodiments, the sequence-to-sequence model may include an encoder-decoder with attention model or a BERT text summarization with a pre-trained encoder (also known as a pre-trained BERTSUM or PreSumm).

[0059] In optional block 220, the wrap-up system 100 uses the interaction processing component 120 to run a parameter tuning algorithm with parameter tuning component 132. Each predictive note model type disclosed has different parameters, which should be tuned to provide improved model training results. The system runs a parameter tuning algorithm to find the target set of parameters 133 and data to use in training the predictive note model(s).

[0060] In block 222, the predictive note model component 170 receives vector embeddings 151 and integer sequences 128 for the training data, including training transcripts 110 and training contact notes 111 and trains the predictive note model(s) optionally based on the target set of parameters 133.

[0061] In block 224, the interaction processing component 120 receives communication data from the interaction database 102 or in real-time from the customer engagement center system and generates and cleans the communication transcript 114 from the communication recording 113 associated with the communication data. The communication data is an interaction between an agent and a customer. The communication data may be from a completed or ongoing interaction. The method encompassed by blocks 204, 206, 208, and 216 is used to generate and process the communication transcript 114.

[0062] In block 226, predictive note model component 170 receives the integer sequences for the communication transcript 114 and runs the trained predictive note model to generate a contact note for the communication transcript 114.

[0063] In block 228, a wrap-up assignment component 195 receives the contact data including the communication recording 113 and the communication transcript 114 and receives the contact note associated with the communication transcript 114 generated by the predictive note model.

[0064] In optional block 230, the wrap-up assignment component 195 displays the communication transcript 114 and the generated contact note on a user device 160 to a customer service representative or other user for approval of the contact note or editing of the contact note.

[0065] In block 232, the wrap-up assignment component 195 assigns the generated contact note to the communication data associated with the communication transcript 114. If optional block 230 is used, the assignment of the contact note to the communication data is based on the user’s approval and/or edits or corrections. If optional block 230 is not used, the assignment of the contact note to the communication data is automatic and a direct assignment of the contact note generated by the predictive note model component.

[0066] In block 234, the wrap-up system stores the contact notes with the associated communication data in a wrap-up database 162 and/or the storage component 190.

[0067] Figures 3A and 3B show a flowchart of an example of a method 300 for auto-generating outcome codes for an interaction. Blocks 302 through 324 form the data preprocessing blocks. Blocks 326 through 330 form the training blocks. Blocks 332 through 344 form the model application blocks. The numbering and sequencing of the blocks are for reference only; blocks or sequences of blocks may be performed out of order or repeated.

[0068] In block 302, the wrap-up system 100 launches the interaction processing component 120. The interaction processing component 120 is launched before or simultaneously with when the historical interaction data 104 is received.

[0069] In block 304, the interaction processing component 120 receives the historical interaction data 104 from an interaction database 102 or in real-time from the customer engagement center system and generates training interaction data for the historical interaction data 104 including training transcripts 110 and training outcome codes 112.

[0070] In block 306, the interaction processing component 120 uses a transcription component 121 to generate training transcripts 110 from the historical recordings 105 associated with the historical interaction data 104. In certain embodiments, the transcription component 121 may segment the historical recordings 105 into a series of utterances, which are segments of audio data that are likely to be speech, separated by segments of audio data that are likely to be non-speech, such as silence or non-speech noise, that exceed a threshold amount of time, for example, 3 seconds. In certain embodiments, the LVCSR may be specialized for words, phrases, and terms in a specific industry, technical, or scientific field, specific to a language or a dialect, or expected in the historical interaction data 104 to ensure that the proper terms and interpretations are prioritized. In an embodiment, the training transcripts 110 may be generated from historical interaction data 104 received in real-time or from historical interaction data 104 from a certain timeframe, locale, or other desired parameter or parameters.

[0071] In block 308, the interaction processing component 120 normalizes the training transcripts 110 using a normalizing component 122. In an embodiment, the normalizing component 122 removes punctuation and markups in the training transcripts 110 to create a standard transcript format.
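As an illustration of the kind of cleanup block 308 describes, the sketch below strips markup and punctuation from a transcript. The exact rules applied by the normalizing component 122 are not specified here, so the normalize_transcript helper and its regular expressions are assumptions.

```python
import re

def normalize_transcript(text: str) -> str:
    """Hypothetical normalization: strip markup and punctuation, lower-case,
    and collapse whitespace into a standard transcript format."""
    text = re.sub(r"<[^>]+>", " ", text)    # drop markup such as speaker tags
    text = re.sub(r"[^\w\s']", " ", text)   # drop punctuation, keep apostrophes
    text = re.sub(r"\s+", " ", text)        # collapse runs of whitespace
    return text.lower().strip()

print(normalize_transcript("<agent> Hello, thanks for calling! How can I help?"))
# -> "hello thanks for calling how can i help"
```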

[0072] In optional block 310, the interaction processing component 120 uses a clustering component 124 to cluster the training data to address inaccuracies and inconsistencies in the training outcome codes 112 (derived from the historical outcome codes 107 associated with the historical recordings 105) by assigning new training outcome codes 112 to the otherwise unaltered training data. Clustering the training data identifies the outcome codes in use and allows performance to be examined across those outcome codes. The determination to cluster the data and assign new training outcome codes may be made manually by a user or automatically by the system if a threshold is exceeded.

[0073] In block 312, the interaction processing component 120, using a vocabulary component 125, builds a vocabulary 126 from the training transcripts 110. In certain embodiments, the vocabulary component 125 uses a tokenizer to build the vocabulary 126 to cover the most frequently used words in the training transcripts 110.
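One plausible way to build such a vocabulary, sketched in plain Python. The sample transcripts and the 20,000-word limit are assumptions, not values from this disclosure.

```python
from collections import Counter

# Hypothetical training transcripts, already normalized per block 308.
training_transcripts = [
    "hello thanks for calling how can i help",
    "i would like to cancel my subscription",
    "the customer asked about a refund for the last invoice",
]

# Count word frequencies and keep the most frequently used words.
VOCAB_LIMIT = 20000                              # assumed vocabulary size
word_counts = Counter(w for t in training_transcripts for w in t.split())
most_common = [w for w, _ in word_counts.most_common(VOCAB_LIMIT)]

# Reserve ID 0 for padding and ID 1 for out-of-vocabulary words.
integer_ids = {w: i + 2 for i, w in enumerate(most_common)}
print(list(integer_ids.items())[:5])
```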

[0074] In block 314, the interaction processing component 120 uses a language preprocessor 150 to encode the vocabulary 126 and obtain vector embeddings 151. The language preprocessor 150 transforms the vocabulary 126 into vector embeddings 151. In an embodiment, the language preprocessor may be a Bidirectional Encoder Representations from Transformers (BERT) model. In another embodiment, the language preprocessor may be BERT-as-service, run either locally or remotely. In another embodiment, the BERT-as-service may be run on a graphics processing unit (GPU) capable of running the TensorFlow open-source software library.
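As one possible realization, a BERT encoder can be used to embed each vocabulary word. The sketch below uses the Hugging Face transformers library as a stand-in for BERT or BERT-as-service; the model name, the mean pooling, and the sample words are assumptions.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

# Stand-in for the language preprocessor 150 (BERT on TensorFlow).
bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert_model = TFAutoModel.from_pretrained("bert-base-uncased")

def embed_word(word: str) -> tf.Tensor:
    """Return a fixed-length vector embedding for one vocabulary word."""
    inputs = bert_tokenizer(word, return_tensors="tf")
    outputs = bert_model(**inputs)
    # Mean-pool the token-level hidden states into a single 768-dim vector.
    return tf.reduce_mean(outputs.last_hidden_state, axis=1)[0]

vector_embeddings = {w: embed_word(w) for w in ["refund", "invoice", "cancel"]}
print(vector_embeddings["refund"].shape)    # (768,) for bert-base
```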

[0075] In block 316, the interaction processing component 120, using an integer conversion component 127, converts the training transcripts 110 to integer sequences 128. Each word in the vocabulary 126 is assigned an integer ID 129 and the integer conversion component 127 converts the text of the training transcripts 110 to integer sequences 128 using these integer IDs 129. The integer sequences 128 are used in subsequent modeling.
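Continuing the vocabulary sketch above, the conversion of block 316 might look like the following; the maximum sequence length of 512 and the padding scheme are assumptions.

```python
import numpy as np

MAX_LEN = 512                                    # assumed maximum transcript length
PAD_ID, OOV_ID = 0, 1

def to_integer_sequence(text: str) -> list[int]:
    """Map a normalized transcript to the integer IDs built in block 312."""
    return [integer_ids.get(w, OOV_ID) for w in text.split()]

def pad(seq: list[int], max_len: int = MAX_LEN) -> list[int]:
    """Pad or truncate a sequence so batches have a uniform length."""
    return (seq + [PAD_ID] * max_len)[:max_len]

padded_sequences = np.array([pad(to_integer_sequence(t)) for t in training_transcripts])
print(padded_sequences.shape)                    # (num_transcripts, MAX_LEN)
```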

[0076] In block 318, the interaction processing component 120 converts the training outcome codes 112 to binarized labels 131 using the binarizing component 130. The binarizing component 130 converts the training outcome codes 112 associated with a training transcript 110 to a binary representation the model can process. In certain embodiments, the binarizing component 130 is a multilabel binarizer.
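A multilabel binarizer of this kind is available, for instance, in scikit-learn; the outcome code names below are hypothetical.

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Each training transcript may carry one or more outcome codes.
training_outcome_codes = [
    {"refund_issued"},
    {"cancellation", "retention_offer"},
    {"billing_question"},
]

binarizer = MultiLabelBinarizer()
binarized_labels = binarizer.fit_transform(training_outcome_codes)
print(binarizer.classes_)    # the distinct outcome codes
print(binarized_labels)      # one 0/1 column per outcome code
```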

[0077] In block 320, the interaction processing component 120, using a division component 123, splits off testing interaction data 117 from the training interaction data. In an embodiment, to ensure the final trained model(s) appropriately generalize to unseen cases, the interaction processing component 120 splits off testing interaction data 117, including training transcripts 110 and training outcome codes 112. The model will not be given the testing interaction data 117 during training. Instead, the testing interaction data 117 can be used later to evaluate the performance of the model(s).
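A simple split of this kind, assuming the padded sequences and binarized labels from the earlier sketches and an arbitrary 20% hold-out fraction:

```python
from sklearn.model_selection import train_test_split

# Hold out a portion of the data so the trained models can later be evaluated
# on interactions they never saw during training.
train_X, test_X, train_y, test_y = train_test_split(
    padded_sequences, binarized_labels, test_size=0.2, random_state=42)
```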

[0078] In optional block 322, the interaction processing component 120 uses the imbalance component 134 to determine if the training outcome codes 112 are imbalanced. If some outcome codes (also known as labels) are assigned in the set of training outcome codes 112 significantly more often than others, then the model may become biased towards those codes. To prevent this, the system will compare the distribution of training outcome codes 112 to a predetermined balance threshold to determine if the data is imbalanced. In one embodiment, the training outcome codes 112 are imbalanced if they have a mean imbalance ratio greater than 1.5 and a coefficient of variation of the imbalance ratio per outcome code greater than 0.2. In other embodiments, the predetermined balance threshold is specific to the system, the model, or the intended industry. If the data is imbalanced, the interaction processing component 120 will perform resampling as described in block 324 to balance the distribution of outcome codes.
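A minimal sketch of these two measures, following the common multilabel definitions (per-label imbalance ratio, its mean, and its coefficient of variation); the helper function and the reuse of the binarized labels from the earlier sketch are assumptions.

```python
import numpy as np

def imbalance_metrics(binarized_labels: np.ndarray):
    """Per-label imbalance ratio (IRLbl), its mean (MeanIR), and its
    coefficient of variation (CVIR) for a 0/1 label matrix."""
    counts = binarized_labels.sum(axis=0).astype(float)
    ir_per_label = counts.max() / counts       # rarer outcome codes get larger ratios
    mean_ir = ir_per_label.mean()
    cvir = ir_per_label.std(ddof=1) / mean_ir  # variation of IRLbl across codes
    return mean_ir, cvir

mean_ir, cvir = imbalance_metrics(np.asarray(binarized_labels))
is_imbalanced = mean_ir > 1.5 and cvir > 0.2   # thresholds named in block 322
```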

[0079] In optional block 324, if the training outcome codes 112 are imbalanced, the interaction processing component 120 uses a resampling component 135 to resample the training data using a resampling algorithm to create equality across the various sets of training outcome codes 112. In an embodiment, the resampling algorithm is one or more of the Label Powerset Random Undersampling (LP-RUS), Label Powerset Random Oversampling (LP-ROS), Multilabel Random Undersampling (ML-RUS), and/or Multilabel Random Oversampling (ML-ROS) algorithms. The RUS and ROS algorithms determine how to resample by considering the outcome code sets of the dataset, with RUS algorithms eliminating instances that appear more frequently in the training outcome code sets and ROS algorithms replicating instances that appear less frequently in the training outcome code sets to create equality across training outcome code sets. Such algorithms can be used in various combinations to create multiple outcome code sets, which may train multiple models. Blocks 322 and 324 may be repeated until the training outcome codes 112 are balanced. This resampled training data may then be used in subsequent blocks of the method 300.
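As one example of the oversampling direction, a simplified multilabel random oversampling pass might replicate instances that carry under-represented outcome codes. The growth fraction and the helper itself are assumptions and only approximate the published ML-ROS algorithm; the inputs continue the train/test split sketched above.

```python
import numpy as np

def ml_ros(X: np.ndarray, Y: np.ndarray, target_fraction: float = 0.25, seed: int = 0):
    """Simplified ML-ROS: clone instances carrying minority outcome codes
    until the dataset has grown by roughly `target_fraction`."""
    rng = np.random.default_rng(seed)
    counts = np.maximum(Y.sum(axis=0).astype(float), 1.0)   # guard against absent codes
    ir_per_label = counts.max() / counts
    minority = np.where(ir_per_label > ir_per_label.mean())[0]   # under-represented codes
    if len(minority) == 0:
        return X, Y                                   # already balanced; nothing to do

    new_X, new_Y = [X], [Y]
    for _ in range(int(len(X) * target_fraction)):
        code = rng.choice(minority)                   # pick a minority outcome code
        candidates = np.where(Y[:, code] == 1)[0]     # instances that carry it
        idx = rng.choice(candidates)
        new_X.append(X[idx:idx + 1])
        new_Y.append(Y[idx:idx + 1])
    return np.vstack(new_X), np.vstack(new_Y)

train_X_bal, train_y_bal = ml_ros(train_X, train_y)
```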

[0080] In block 326, the wrap-up system 100 builds a predictive code model using the predictive code model component 180. In one embodiment, the predictive code model used is a deep learning model or a statistical model. In various embodiments, the deep learning model is a Recurrent Neural Network (RNN) based model or a Transformer based model. In an embodiment, the statistical model utilizes a Multi-Label K-Nearest Neighbor (ML-KNN) algorithm.
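As a sketch of the RNN-based option only (the disclosure does not fix an architecture), a multilabel classifier with one sigmoid output per outcome code could look like the following. The layer sizes, the embedding matrix built from the vector embeddings 151, and the builder function are all assumptions; for illustration the matrix is filled with random values.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = len(integer_ids) + 2   # vocabulary plus padding (0) and OOV (1) IDs
EMBED_DIM = 768                     # matches the BERT embedding size
NUM_CODES = train_y.shape[1]        # one output per distinct outcome code

# embedding_matrix: rows indexed by integer ID; in practice the rows for known
# words would be filled from the vector embeddings 151 obtained in block 314.
embedding_matrix = np.random.default_rng(0).normal(size=(VOCAB_SIZE, EMBED_DIM))

def build_code_model(embedding_matrix: np.ndarray,
                     lstm_units: int = 128, dropout: float = 0.0) -> tf.keras.Model:
    """One plausible RNN-based predictive code model: an embedding layer
    initialised from the vector embeddings, a bidirectional LSTM encoder,
    and a sigmoid output per outcome code (multilabel classification)."""
    model = tf.keras.Sequential([
        layers.Embedding(
            VOCAB_SIZE, EMBED_DIM, mask_zero=True,
            embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix)),
        layers.Bidirectional(layers.LSTM(lstm_units)),
        layers.Dropout(dropout),
        layers.Dense(NUM_CODES, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(multi_label=True, name="auc")])
    return model
```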

[0081 ] In optional block 328, the wrap-up system 100 uses the interaction processing component 120 to run a parameter tuning algorithm with a parameter tuning component 132. Each predictive code model type disclosed has different parameters, which should be tuned to provide improved model training results. The system runs a parameter tuning algorithm to find the target set of parameters 133 and data to use in training the predictive code model(s). The data for the target set of parameters 133 can be selected from unaltered, resampled, relabeled, or resampled and relabeled data sets generated by the previous blocks.
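The tuning algorithm itself is not specified; one minimal possibility is an exhaustive grid search over a small hypothetical parameter space, reusing the build_code_model sketch and the resampled data from above.

```python
from itertools import product

param_grid = {"lstm_units": [64, 128], "dropout": [0.0, 0.3], "batch_size": [16, 32]}

best_score, target_params = -1.0, None
for values in product(*param_grid.values()):
    params = dict(zip(param_grid.keys(), values))
    model = build_code_model(embedding_matrix,
                             lstm_units=params["lstm_units"],
                             dropout=params["dropout"])
    history = model.fit(train_X_bal, train_y_bal, validation_split=0.1,
                        epochs=3, batch_size=params["batch_size"], verbose=0)
    score = max(history.history["val_auc"])
    if score > best_score:
        best_score = score
        target_params = params           # stands in for the target set of parameters 133
```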

[0082] In block 330, the predictive code model component 180 receives the vector embeddings 151, the integer sequences 128 for the training data, and the training outcome codes 112 for the training data (the training data may be unaltered, resampled, relabeled, or resampled and relabeled) and trains the predictive code model(s), optionally based on the target set of parameters 133.
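Continuing the same sketch, training with the tuned parameters and then checking generalization on the held-out testing interaction data might look like this; the epoch count is arbitrary.

```python
code_model = build_code_model(embedding_matrix,
                              lstm_units=target_params["lstm_units"],
                              dropout=target_params["dropout"])
code_model.fit(train_X_bal, train_y_bal,
               epochs=10, batch_size=target_params["batch_size"],
               validation_split=0.1)

# Evaluate on the testing interaction data split off in block 320.
print(code_model.evaluate(test_X, test_y, return_dict=True))
```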

[0083] In block 332, the interaction processing component 120 receives communication data from the interaction database 102 or in real-time from the customer engagement center system and generates and cleans the communication transcript 114 from the communication recording 113 associated with the communication data from an interaction. The communication transcript 114 may be from a completed or ongoing interaction. The method encompassed by blocks 306, 308, and 316 is used to process the communication transcript 114.

[0084] In optional block 334, the interaction processing component 120 utilizes a message component 138 to add the cleaned communication transcripts 114 to the message list 139. This block builds the message list 139 to be passed to the predictive code model.

[0085] In block 336, the predictive code model component 180 receives the integer sequences 128 for the communication transcript 114 and runs the trained predictive code model to generate outcome codes for the communication transcript 114.
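Following the same sketches, inference on a new transcript might threshold the per-code probabilities and map them back to outcome code names; the 0.5 threshold and the new_transcript_text variable are assumptions.

```python
import numpy as np

# new_transcript_text: cleaned transcript string for the new interaction (assumed).
sequence = np.array([pad(to_integer_sequence(normalize_transcript(new_transcript_text)))])

probabilities = code_model.predict(sequence)[0]
predicted = (probabilities >= 0.5).astype(int)            # assumed decision threshold
predicted_codes = binarizer.inverse_transform(np.array([predicted]))[0]
print(predicted_codes)    # e.g. ("billing_question",)
```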

[0086] In block 338, a wrap-up assignment component 195 receives the communication data including the communication recording 113 and the communication transcript 114 and receives the outcome codes associated with the communication transcript 114 generated by the predictive code model.

[0087] In optional block 340, the wrap-up assignment component 195 displays the communication transcript 114 and the generated outcome codes on a user device 160 to a customer service representative or other user for approval of the outcome codes or editing of the outcome codes. Editing of the outcome codes may include deletion of a suggested outcome code.

[0088] It should be understood that for blocks 332-340 the wrap-up system 100 does not necessarily wait for the interaction to be completed prior to receiving and processing the communication data and generating and presenting predicted outcome codes. The wrap-up system 100 may receive communication data in real time and periodically engage the predictive code model throughout the interaction. The wrap-up system 100 may further present the generated outcome codes on the user device periodically and in real-time as they are generated.

[0089] In block 342, the wrap-up assignment component 195 assigns the generated outcome codes to the communication data associated with the communication transcript 114. If optional block 340 is used, the assignment of the outcome codes to the communication data is based on the user’s approval and/or edits or corrections. If optional block 340 is not used, the assignment of the outcome codes to the communication data is automatic and a direct assignment of the outcome codes generated by the predictive code model component 180.

[0090] In block 344, the wrap-up system 100 stores the assigned outcome codes with the associated communication data in a wrap-up database 162 and/or the storage component 190.

[0091] Figures 4A and 4B depict a flowchart of an example of a method for auto-generating wrap-up information for an interaction, including training a predictive note model to generate contact notes for an interaction and training a predictive code model to generate outcome codes for an interaction, according to certain embodiments. The numbering and sequencing of the blocks are for reference only; blocks or sequences of blocks may be performed out of order or repeated.

[0092] In block 402, the wrap-up system 100 at an interaction processing component 120 receives historical interaction data 104 from an interaction database 102. In an embodiment, the historical interaction data 104 may be received in real-time from the customer engagement center system and/or from the interaction database 102. In an embodiment, the historical interaction data 104 is a set of interactions between customers and customer service agents, with each historical interaction including a historical recording 105, historical contact notes 106, and one or more historical outcome codes 107.

[0093] In block 404, the interaction processing component 120 generates training data from the historical interaction data 104, including generating training transcripts 110 from historical recordings 105 in the historical interaction data 104. The interaction processing component 120 may use a transcription component 121 to generate the training transcripts 110. In addition to generating training transcripts 110, the interaction processing component 120 may rename the received historical recordings 105 as training recordings, the historical contact notes 106 as training contact notes 111, and the historical outcome codes 107 as training outcome codes 112.

[0094] In block 406, the interaction processing component 120 builds a vocabulary 126 of the words in the training transcripts 110 and converts the vocabulary 126 to integer IDs 129. In an embodiment, the interaction processing component 120 also uses the words of the training contact notes 111 to build the vocabulary 126. The interaction processing component 120 may use a vocabulary component 125 and a language preprocessor 150 to construct the vocabulary 126 and integer IDs 129. The interaction processing component 120 may use a normalizing component 122 to first clean and normalize the training transcripts 110 and the training contact notes 111 prior to building the vocabulary 126.

[0095] In block 408, the interaction processing component 120 uses the language preprocessor 150 to generate vector embeddings 151 for the vocabulary 126.

[0096] In block 410, the interaction processing component 120 uses an integer conversion component 127 to convert the training transcripts 110 and training contact notes 111 associated with each of the training data to integer sequences 128 based on the integer IDs 129 for the vocabulary 126.

[0097] In block 412, the interaction processing component 120 uses a binarizing component 130 to convert the training outcome codes 112 associated with each of the training data to a binarized label 131.

[0098] In optional block 414, the interaction processing component 120 uses a division component 123 to split off testing interaction data 117 from the training interaction data, including a subset of the training transcripts 110, the training contact notes 111, and the training outcome codes 112. The testing interaction data 117 is not used in training the models but is used to test the accuracy of the training of the models.

[0099] In optional block 416, the interaction processing component 120 uses a clustering component 124, an imbalance component 134, and a resampling component 135 to balance the training data according to the training outcome codes 112.

[0100] In optional block 418, the interaction processing component 120 uses a parameter tuning component 132 to identify a target set of parameters 133 for each of the models.

[0101] In block 420, a predictive note model component 170 trains a predictive note model with the vector embeddings 151, the integer sequences 128 for the training transcripts 110, and the integer sequences 128 for the training contact notes 111. Once trained, the predictive note model can receive a communication transcript 114 (e.g., the transcript for a “new” interaction) and generate a contact note for the communication transcript 114.
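The disclosure does not fix an architecture for the predictive note model; one plausible sketch is a sequence-to-sequence encoder-decoder trained with teacher forcing, where the encoder reads a transcript integer sequence and the decoder emits the contact-note integer sequence. The layer sizes and the reuse of VOCAB_SIZE and EMBED_DIM from the earlier sketch are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 256

# Encoder over the transcript integer sequence.
enc_inputs = layers.Input(shape=(None,), name="transcript_ids")
enc_embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(enc_inputs)
_, state_h, state_c = layers.LSTM(LATENT_DIM, return_state=True)(enc_embed)

# Decoder over the contact-note integer sequence (shifted right during training).
dec_inputs = layers.Input(shape=(None,), name="note_ids")
dec_embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(dec_inputs)
dec_outputs, _, _ = layers.LSTM(LATENT_DIM, return_sequences=True,
                                return_state=True)(dec_embed,
                                                   initial_state=[state_h, state_c])
note_probs = layers.Dense(VOCAB_SIZE, activation="softmax")(dec_outputs)

note_model = tf.keras.Model([enc_inputs, dec_inputs], note_probs)
note_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Training would pair transcript sequences with note sequences offset by one step:
# note_model.fit([transcript_sequences, note_input_ids], note_target_ids, ...)
```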

[0102] In block 422, a predictive code model component 180 trains a predictive code model with the integer sequences 128 for the training transcripts 110, the vector embeddings 151, and the binarized labels 131 for the training outcome codes 112. Once trained, the predictive code model can receive a communication transcript 114 (e.g., the transcript for a “new” interaction) and generate one or more outcome codes for the communication transcript 114.

[0103] In block 424, the predictive code model component 180 passes an integer sequence 128 for a communication transcript 114 to the predictive code model and the predictive note model component 170 passes the integer sequence 128 for the communication transcript 114 to the predictive note model. In an embodiment, the integer sequence 128 for the communication transcript 114 is generated by the interaction processing component 120 using the above-described blocks for generating and processing the training transcripts 110.

[0104] In block 426, the predictive code model component 180 generates one or more outcome codes for the communication transcript 114 using the predictive code model.

[0105] In block 428, the predictive note model component 170 generates a contact note for the communication transcript 114 using the predictive note model.

[0106] Figure 5 depicts an example diagram of a computer system 500 that may include the kinds of software programs, data stores, hardware, and interfaces that can implement a wrap-up system 100 as disclosed herein and according to certain embodiments. The computing system 500 may be used to implement embodiments of portions of the wrap-up system 100 or in carrying out embodiments of method 200, method 300, and/or method 400. The computing system 500 may be part of or connected to an overarching customer service center system.

[0107] As shown, the computer system 500 includes, without limitation, a memory 502, a storage 504, a central processing unit (CPU) 506, and a network interface 508, each connected to a bus 516. The computing system 500 may also include an input/output (I/O) device interface 510 connecting I/O devices 512 (e.g., keyboard, display, and mouse devices) and/or a network interface 508 to the computing system 500. Further, the computing elements shown in computer system 500 may correspond to a physical computing system (e.g., a system in a data center), a virtual computing instance executing within a computing cloud, and/or several physical computing systems located in several physical locations connected through any combination of networks and/or computing clouds.

[0108] Computing system 500 is a specialized system specifically designed to perform the steps and actions necessary to execute methods 200, 300, and 400 and wrap-up system 100. While some of the component options for computing system 500 may include components prevalent in other computing systems, computing system 500 is a specialized computing system specifically capable of performing the steps and processes described herein.

[0109] The CPU 506 retrieves, loads, and executes programming instructions stored in memory 502. The bus 516 is used to transmit programming instructions and application data between the CPU 506, I/O interface 510, network interface 508, and memory 502. Note, the CPU 506 can comprise a microprocessor and other circuitry that retrieves and executes programming instructions from memory 502. CPU 506 can be implemented within a single processing element (which may include multiple processing cores) but can also be distributed across multiple processing elements (with or without multiple processing cores) or sub-systems that cooperate in executing program instructions. Examples of CPUs 506 include central processing units, application-specific processors, and logic devices, as well as any other type of processing device, a combination of processing devices, or variations thereof. While there are a number of processing devices available to comprise the CPU 506, the processing devices used for the CPU 506 are particular to this system and are specifically capable of performing the processing necessary to execute methods 200, 300, and 400 and wrap-up system 100.

[0110] The memory 502 can comprise any memory media readable by CPU 506 that is capable of storing programming instructions and able to meet the needs of the computing system 500 and execute the programming instructions required for methods 200, 300, and 400 and wrap-up system 100. Memory 502 is generally included to be representative of a random-access memory. In addition, memory 502 may include volatile and non-volatile, removable, and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions or program components. The memory 502 may be implemented as a single memory device but may also be implemented across multiple memory devices or sub-systems. The memory 502 can further include additional elements, such as a controller capable of communicating with the CPU 506.

[0111] Illustratively, the memory includes multiple sets of programming instructions for performing the functions of the wrap-up system 100 and methods 200, 300, and 400, including, but not limited to, interaction processing component 120, transcription component 121, normalizing component 122, division component 123, clustering component 124, vocabulary component 125, integer conversion component 127, binarizing component 130, parameter tuning component 132, imbalance component 134, resampling component 135, message component 138, language preprocessor 150, predictive note model component 170, predictive code model component 180, storage component 190, and wrap-up assignment component 195, all of which are discussed in greater detail herein. Illustratively, the memory may also include a receiving component 530, a generating component 532, a building component 534, a converting component 536, a training component 538, and a passing component 540. Although memory 502, as depicted in Figure 5, includes twenty-three sets of programming instruction components in the present example, it should be understood that one or more components could perform single- or multi-component functions. It is also contemplated that these components of computing system 500 may be operating in a number of physical locations.

[0112] The storage 504 can comprise any storage media readable by CPU 506 and is capable of storing data that is able to meet the needs of computing system 500 and store the data required for methods 200, 300, and 400 and wrap-up system 100. The storage 504 may be a disk drive or flash storage device. The storage 504 may include volatile and non-volatile, removable, and non-removable media implemented in any method or technology for storage of information. Although shown as a single unit, the storage 504 may be a combination of fixed and/or removable storage devices, such as fixed disc drives, removable memory cards, optical storage, network-attached storage (NAS), or a storage area network (SAN). The storage 504 can further include additional elements, such as a controller capable of communicating with the CPU 506.

[0113] Illustratively, the storage 504 may store data such as but not limited to historical interaction data 104, historical recordings 105, historical contact notes 106, historical outcome codes, training transcripts 110, training contact notes 111, training outcome codes 112, communication recordings 113, communication transcripts 114, testing interaction data 117, vocabulary 126, integer sequences 128, integer IDs 129, binarized labels 131, target set of parameters 133, resampled data 137, message list 139, vector embeddings 151, training data 542, communication data 544, contact notes 546, outcome codes 548, predictive note models 550, predictive code models 552, threshold score 554, accuracy threshold score 556, accuracy threshold 558, and threshold amount 560, all of which are also discussed in greater detail herein.

[0114] Examples of memory and storage media include random access memory, read-only memory, magnetic discs, optical discs, flash memory, virtual memory, and non-virtual memory, magnetic sets, magnetic tape, magnetic disc storage, or other magnetic storage devices, or any other medium which can be used to store the desired software components or information that may be accessed by an instruction execution system, as well as any combination or variation thereof, or any other type of storage medium. In some implementations, one or both of the memory and storage media can be a non-transitory memory and storage media. In some implementations, at least a portion of the memory and storage media may be transitory. Memory and storage media may be incorporated into computing system 500. While many types of memory and storage media may be incorporated into computing system 500, the memory and storage media used is capable of executing the storage requirements of methods 200, 300, and 400 and wrap-up system 100 as described herein.

[0115] The I/O interface 510 allows computing system 500 to interface with I/O devices 512. I/O devices 512 can include one or more user devices 160, graphical user interfaces, desktops, a mouse, a keyboard, a voice input device, a touch input device for receiving a gesture from a user, a motion input device for detecting non-touch gestures and other motions by a user, and other comparable I/O devices and associated processing elements capable of receiving input. The I/O devices 512, through the user devices 160, are also integrated into the customer service center system, telephone system, internet system, and a text communications system, among other systems. I/O devices 512 can also include devices such as a video display or graphical display and other comparable I/O devices and associated processing elements capable of providing output. Speakers, printers, haptic devices, or other types of output devices may also be included in the I/O device 512.

[0116] A user can communicate with computing system 500 through the I/O device 512 in order to view historical interaction data 104, historical recordings 105, historical contact notes 106, historical outcome codes, training transcripts 110, training contact notes 111, training outcome codes 112, communication recordings 113, communication transcripts 114, testing interaction data 117, vocabulary 126, integer sequences 128, integer IDs 129, binarized labels 131, target set of parameters 133, resampled data 137, message list 139, vector embeddings 151, training data 542, communication data 544, contact notes 546, outcome codes 548, predictive note models 550, predictive code models 552, threshold score 554, accuracy threshold score 556, accuracy threshold 558, and threshold amount 560, or complete any number of other tasks the user may want to complete with computing system 500. I/O devices 512 can receive and output data such as but not limited to historical interaction data 104, historical recordings 105, historical contact notes 106, historical outcome codes, training transcripts 110, training contact notes 111, training outcome codes 112, communication recordings 113, communication transcripts 114, testing interaction data 117, vocabulary 126, integer sequences 128, integer IDs 129, binarized labels 131, target set of parameters 133, resampled data 137, message list 139, vector embeddings 151, training data 542, communication data 544, contact notes 546, outcome codes 548, predictive note models 550, predictive code models 552, threshold score 554, accuracy threshold score 556, accuracy threshold 558, and threshold amount 560.

[0117] As described in further detail herein, computing system 500 may receive and transmit data from and to the network interface 508. In embodiments, the network interface 508 operates to send and/or receive data, such as but not limited to, historical interaction data 104, historical recordings 105, historical contact notes 106, historical outcome codes, training transcripts 110, training contact notes 111, training outcome codes 112, communication recordings 113, communication transcripts 114, testing interaction data 117, vocabulary 126, integer sequences 128, integer IDs 129, binarized labels 131, target set of parameters 133, resampled data 137, message list 139, vector embeddings 151, training data 542, communication data 544, contact notes 546, outcome codes 548, predictive note models 550, predictive code models 552, threshold score 554, accuracy threshold score 556, accuracy threshold 558, and threshold amount 560 to/from other devices and/or systems to which computing system 500 is communicatively connected, and to receive and process interactions as described in greater detail above.

[0118] It should be understood that the various techniques described herein may be implemented in connection with hardware components or software components or, where appropriate, with a combination of both. Illustrative types of hardware components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium where, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter.

[0119] Although certain implementations may refer to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited but may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Such devices might include personal computers, network servers, and handheld devices, for example.

[0120] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

[0121 ] In the foregoing description, certain terms have been used for brevity, clearness, and understanding. No unnecessary limitations are to be inferred therefrom beyond the requirement of the prior art because such terms are used for descriptive purposes and are intended to be broadly construed. The different configurations, systems, and method steps described herein may be used alone or in combination with other configurations, systems, and method steps. It is to be expected that various equivalents, alternatives, and modifications are possible within the scope of the foregoing description.