


Title:
SYSTEMS AND METHODS FOR SEMANTIC-BASED PRE-TRAINING FOR DIALOGUE UNDERSTANDING
Document Type and Number:
WIPO Patent Application WO/2024/035469
Kind Code:
A1
Abstract:
Systems and methods for pre-training a dialogue model with semantic information include: generating a dialogue-level abstract meaning representation (AMR) graph for an input dialogue associated with a speaker, learning core semantic units of the input dialogue based on nodes of the dialogue-level AMR graph, learning semantic relations between words of a sentence of the input dialogue based on edges of the dialogue-level AMR graph, learning an overall agreement of the input dialogue and the dialogue-level AMR graph, and training the dialogue model based on the learned core semantic units, semantic relations between words, and overall agreement.

Inventors:
SONG LINFENG (US)
Application Number:
PCT/US2023/023843
Publication Date:
February 15, 2024
Filing Date:
May 30, 2023
Assignee:
TENCENT AMERICA LLC (US)
International Classes:
G06F40/30; G06F16/9032; G06N20/20; G06F18/214; G06F40/35; G06F40/56
Foreign References:
US20210150152A12021-05-20
US20220114346A12022-04-14
Other References:
Bonial, Claire; Donatelli, Lucia; Abrams, Mitchell; Lukin, Stephanie M.; Tratz, Stephen; Marge, Matthew; Artstein, Ron; Traum, David; Voss, Clare: "Dialogue-AMR: Abstract Meaning Representation for Dialogue", Proceedings of the Twelfth Language Resources and Evaluation Conference, 11 May 2020 (2020-05-11), XP093141661
Attorney, Agent or Firm:
RABENA, John F. et al. (US)
Claims:
WHAT IS CLAIMED IS: 1. A method for pre-training a dialogue model with semantic information performed by at least one processor, the method comprising: generating a dialogue-level abstract meaning representation (AMR) graph for an input dialogue associated with a speaker; learning core semantic units of the input dialogue based on nodes of the dialogue-level AMR graph; learning semantic relations between words of a sentence of the input dialogue based on edges of the dialogue-level AMR graph; learning an overall agreement of the input dialogue and the dialogue-level AMR graph; and training the dialogue model based on the learned core semantic units, semantic relations between words, and overall agreement. 2. The method of claim 1, wherein generating the dialogue-level AMR graph for the input dialogue associated with the speaker comprises: building utterance-level AMR graphs by independently transforming utterances in the input dialogue into AMR using a pre-trained AMR parser; and connecting the utterance-level AMR graphs with a root node, wherein edges of the AMR graph are labeled with the associated speaker. 3. The method of claim 1, wherein learning the core semantic units of the input dialogue based on the nodes of the dialogue-level AMR graph comprises: identifying one or more semantic-aware units of the input dialogue based on the nodes of the dialogue-level AMR graph; and increasing an attention given by the dialogue model to tokens of the input dialogue that correspond to the one or more semantic-aware units. 4. The method of claim 3, wherein identifying the one or more semantic-aware units of the input dialogue based on the nodes of the dialogue-level AMR graph comprises: identifying a token of the input dialogue that is aligned with a node of the dialogue-level AMR graph as a semantic-aware unit of the input dialogue. 5. The method of claim 3, wherein increasing the attention given by the dialogue model to tokens of the input dialogue that correspond to the one or more semantic-aware units comprises: assigning a masking probability to each token of the input dialogue, wherein the masking probability assigned to a token corresponding to the one or more semantic-aware units is higher than the masking probability assigned to other tokens of the input dialogue. 6. The method of claim 1, wherein learning the semantic relations between words of a sentence of the input dialogue based on edges of the dialogue-level AMR graph comprises: projecting the edges of the dialogue-level AMR graph onto a corresponding sentence of the input dialogue according to a node-to-word alignment; and training a predictor to generate the projected edges. 7. The method of claim 6, wherein training the predictor to generate the projected edges comprises: generating contextualized word hidden states of the input dialogue by using a Transformer encoder; and predicting relations between words based on the hidden states by using a deep biaffine neural parser. 8. The method of claim 1, wherein learning the overall agreement of the input dialogue and the dialogue-level AMR graph comprises: linearizing the dialogue-level AMR graph and using a pre-trained encoder to transform the linearized AMR into a set of hidden states; and maximizing a similarity score between the set of hidden states and the dialogue-level AMR graph. 9. 
The method of claim 8, wherein maximizing the similarity score between the set of hidden states and the dialogue-level AMR graph comprises: using a cosine similarity as a distance scoring operation; and adopting a contrastive learning framework to train the dialogue model. 10. The method of claim 1, wherein training the dialogue model based on the learned core semantic units, semantic relations between words, and overall agreement comprises: optimizing a total loss according to the equation \(\mathcal{L}_{total} = \mathcal{L}_{sem\_mlm} + \alpha \mathcal{L}_{srp} + \beta \mathcal{L}_{sa}\), wherein \(\alpha\) is a weighting hyper-parameter for \(\mathcal{L}_{srp}\), and \(\beta\) is a weighting hyper-parameter for \(\mathcal{L}_{sa}\), wherein \(\mathcal{L}_{sem\_mlm} = -\sum_{i \in I'} \log p(x_i \mid \hat{X})\), and wherein \(\mathcal{L}_{srp} = -\sum_{\langle x_i, \hat{r}_{ij}, x_j \rangle \in \hat{E}} \log p(y^{arc}_{ij} \mid X)\, p(y^{label}_{ij} = \hat{r}_{ij} \mid X)\). 11. An apparatus comprising: a memory storing computer programming code; and at least one processor configured to operate as instructed by the computer programming code, the computer programming code including: generating code configured to cause the at least one processor to generate a dialogue-level abstract meaning representation (AMR) graph for an input dialogue associated with a speaker; learning code configured to cause the at least one processor to learn core semantic units of the input dialogue based on nodes of the dialogue-level AMR graph, learn semantic relations between words of a sentence of the input dialogue based on edges of the dialogue-level AMR graph, and learn an overall agreement of the input dialogue and the dialogue-level AMR graph; and training code configured to cause the at least one processor to train the dialogue model based on the learned core semantic units, semantic relations between words, and overall agreement. 12. The apparatus of claim 11, wherein the generating code comprises: build code configured to cause the at least one processor to build utterance-level AMR graphs by independently transforming utterances in the input dialogue into AMR using a pre-trained AMR parser; and connect code configured to cause the at least one processor to connect the utterance-level AMR graphs with a root node, wherein edges of the AMR graph are labeled with the associated speaker. 13. The apparatus of claim 11, wherein the learning code comprises: identify code configured to cause the at least one processor to identify one or more semantic-aware units of the input dialogue based on the nodes of the dialogue-level AMR graph; and attention increasing code configured to cause the at least one processor to increase an attention given by the dialogue model to tokens of the input dialogue that correspond to the one or more semantic-aware units.

14. The apparatus of claim 13, wherein the identify code is configured to cause the at least one processor to identify a token of the input dialogue that is aligned with a node of the dialogue-level AMR graph as a semantic-aware unit of the input dialogue. 15. The apparatus of claim 13, wherein the attention increasing code is configured to cause the at least one processor to assign a masking probability to each token of the input dialogue, wherein the masking probability assigned to a token corresponding to the one or more semantic-aware units is higher than the masking probability assigned to other tokens of the dialogue. 16. The apparatus of claim 11, wherein the learning code is configured to cause the at least one processor to: project the edges of the dialogue-level AMR graph onto a corresponding sentence of the input dialogue according to a node-to-word alignment; and train a predictor to generate the projected edges. 17. The apparatus of claim 16, wherein the training code is configured to cause the at least one processor to: generate contextualized word hidden states of the input dialogue by using a Transformer encoder; and predict relations between words based on the hidden states by using a deep biaffine neural parser. 18. The apparatus of claim 11, wherein the learning code is configured to cause the at least one processor to: linearize the dialogue-level AMR graph and use a pre-trained encoder to transform the linearized AMR into a set of hidden states; and maximize a similarity score between the set of hidden states and the dialogue-level AMR graph. 19. The apparatus of claim 18, wherein the learning code is configured to cause the at least one processor to: use a cosine similarity as a distance scoring operation; and adopt a contrastive learning framework to train the dialogue model.

20. A non-transitory computer-readable recording medium having recorded thereon a computer program which, when executed by a processor, causes the processor to: generate a dialogue-level abstract meaning representation (AMR) graph for an input dialogue associated with a speaker; learn core semantic units of the input dialogue based on nodes of the dialogue-level AMR graph; learn semantic relations between words of a sentence of the input dialogue based on edges of the dialogue-level AMR graph; learn an overall agreement of the input dialogue and the dialogue-level AMR graph; and train the dialogue model based on the learned core semantic units, semantic relations between words, and overall agreement.

Description:
SYSTEMS AND METHODS FOR SEMANTIC-BASED PRE-TRAINING FOR DIALOGUE UNDERSTANDING CROSS-REFERENCE TO RELATED APPLICATION This application is based on and claims priority to U.S. Patent Application No.17/887,134, filed on August 12, 2022, the disclosure of which is incorporated by reference herein in its entirety. 1. Field [0001] Apparatuses and methods consistent with example embodiments of the present disclosure relate to a semantic-based pre-training framework that leverages a deep semantic representation for dialogue pre-training. 2. Description of Related Art [0002] Semantic knowledge has been used for both social chat and task-oriented dialogue systems. For example, PEGASUS is a spoken language interface for on-line air travel planning that transforms a sentence into a semantic frame which is then used for travel planning. As another example, a semantic dialogue model may be used to perform database operations based on semantic features. As yet another example, conversational semantic parsing may be used to integrate intents and slots into a semantic tree and solve intent classification and slot-filling tasks as semantic parsing. Conversational semantic parsing may also be used to represent task-oriented dialogue as a semantic graph to perform dialogue state tracking. [0003] Although incorporating semantic information into dialogue systems has been shown to be helpful for many dialogue tasks, existing models are typically trained on surface dialogue text, and are proven to be weak in understanding the main semantic meaning of a dialogue context. Furthermore, these methods only focus on domain-specific benchmark data, leaving the general potentiality of semantic structures unexploited, and require either human annotations or an external parser to obtain semantic structures, raising costs and/or causing error propagation for real applications. SUMMARY [0004] According to various embodiments, systems and methods are provided for performing semantic-based pre-training for dialogue understanding. [0005] According to aspects of one or more example embodiments, a method for pre-training a dialogue model with semantic information, performed by at least one processor, includes: generating a dialogue-level abstract meaning representation (AMR) graph for an input dialogue associated with a speaker, learning core semantic units of the input dialogue based on nodes of the dialogue-level AMR graph, learning semantic relations between words of a sentence of the input dialogue based on edges of the dialogue-level AMR graph, learning an overall agreement of the input dialogue and the dialogue-level AMR graph, and training the dialogue model based on the learned core semantic units, semantic relations between words, and overall agreement. [0006] The method includes building utterance-level AMR graphs by independently transforming utterances in the input dialogue into AMR using a pre-trained AMR parser, and connecting the utterance-level AMR graphs with a root node, such that edges of the AMR graph are labeled with the associated speaker. [0007] The method includes identifying one or more semantic-aware units of the input dialogue based on the nodes of the dialogue-level AMR graph, and increasing an attention given by the dialogue model to tokens of the input dialogue that correspond to the one or more semantic-aware units. [0008] The method includes identifying a token of the input dialogue that is aligned with a node of the dialogue-level AMR graph as a semantic-aware unit of the input dialogue. 
[0009] The method includes assigning a masking probability to each token of the input dialogue, such that the masking probability assigned to a token corresponding to the one or more semantic-aware units is higher than the masking probability assigned to other tokens of the input dialogue. [0010] The method includes projecting the edges of the dialogue-level AMR graph onto a corresponding sentence of the input dialogue according to a node-to-word alignment, and training a predictor to generate the projected edges. [0011] The method includes generating contextualized word hidden states of the input dialogue by using a Transformer encoder, and predicting relations between words based on the hidden states by using a deep biaffine neural parser. [0012] The method includes linearizing the dialogue-level AMR graph and using a pre-trained encoder to transform the linearized AMR into a set of hidden states, and maximizing a similarity score between the set of hidden states and the dialogue-level AMR graph. [0013] The method includes using a cosine similarity as a distance scoring operation, and adopting a contrastive learning framework to train the dialogue model. [0014] According to aspects of one or more example embodiments, an apparatus for pre-training a dialogue model with semantic information includes: a memory storing computer programming code, and at least one processor configured to operate as instructed by the computer programming code, the computer programming code including: generating code configured to cause the at least one processor to generate a dialogue-level abstract meaning representation (AMR) graph for an input dialogue associated with a speaker, learning code configured to cause the at least one processor to learn core semantic units of the input dialogue based on nodes of the dialogue-level AMR graph, learn semantic relations between words of a sentence of the input dialogue based on edges of the dialogue-level AMR graph, learn an overall agreement of the input dialogue and the dialogue-level AMR graph, and training code configured to cause the at least one processor to train the dialogue model based on the learned core semantic units, semantic relations between words, and overall agreement. [0015] The apparatus includes build code configured to cause the at least one processor to build utterance-level AMR graphs by independently transforming utterances in the input dialogue into AMR using a pre-trained AMR parser; and connect code configured to cause the at least one processor to connect the utterance-level AMR graphs with a root node, wherein edges of the AMR graph are labeled with the associated speaker. [0016] The apparatus includes identify code configured to cause the at least one processor to identify one or more semantic-aware units of the input dialogue based on the nodes of the dialogue-level AMR graph; and attention increasing code configured to cause the at least one processor to increase an attention given by the dialogue model to tokens of the input dialogue that correspond to the one or more semantic-aware units. [0017] The apparatus is configured to cause the at least one processor to identify a token of the input dialogue that is aligned with a node of the dialogue-level AMR graph as a semantic-aware unit of the input dialogue. 
[0018] The apparatus includes attention increasing code that is configured to cause the at least one processor to assign a masking probability to each token of the input dialogue, such that the masking probability assigned to a token corresponding to the one or more semantic-aware units is higher than the masking probability assigned to other tokens of the dialogue. [0019] The apparatus includes learning code that is configured to cause the at least one processor to: project the edges of the dialogue-level AMR graph onto a corresponding sentence of the input dialogue according to a node-to-word alignment, and train a predictor to generate the projected edges. [0020] The apparatus includes training code that is configured to cause the at least one processor to: generate contextualized word hidden states of the input dialogue by using a Transformer encoder, and predict relations between words based on the hidden states by using a deep biaffine neural parser. [0021] The apparatus includes learning code that is configured to cause the at least one processor to: linearize the dialogue-level AMR graph and use a pre-trained encoder to transform the linearized AMR into a set of hidden states, and maximize a similarity score between the set of hidden states and the dialogue-level AMR graph. [0022] The apparatus includes learning code that is configured to cause the at least one processor to: use a cosine similarity as a distance scoring operation, and adopt a contrastive learning framework to train the dialogue model. [0023] According to aspects of one or more example embodiments, a non-transitory computer-readable medium having recorded thereon a computer program which, when executed by a processor, causes the processor to generate a dialogue-level abstract meaning representation (AMR) graph for an input dialogue associated with a speaker, learn core semantic units of the input dialogue based on nodes of the dialogue-level AMR graph, learn semantic relations between words of a sentence of the input dialogue based on edges of the dialogue-level AMR graph, learn an overall agreement of the input dialogue and the dialogue-level AMR graph, and train the dialogue model based on the learned core semantic units, semantic relations between words, and overall agreement. [0024] Additional aspects will be set forth in part in the description that follows and, in part, will be apparent from the description, or may be realized by practice of the presented embodiments of the disclosure. BRIEF DESCRIPTION OF THE DRAWINGS [0025] Features, aspects and advantages of certain exemplary embodiments of the disclosure will be described below with reference to the accompanying drawings, in which like reference numerals denote like elements, and wherein: [0026] FIG.1 illustrates an AMR graph, in accordance with one or more example embodiments; [0027] FIG. 2a illustrates another AMR graph, in accordance with one or more example embodiments; [0028] FIG.
2b illustrates a semantic-guided masking strategy, in accordance with one or more example embodiments; [0029] FIG.2c illustrates a semantic relation prediction, in accordance with one or more example embodiments; [0030] FIG.2d illustrates a semantic agreement, in accordance with one or more example embodiments; [0031] FIG.3 illustrates a flowchart of a method for pre-training a dialogue model with semantic information, in accordance with one or more example embodiments; and [0032] FIG.4 illustrates a diagram of components of one or more devices, in accordance with one or more example embodiments. DETAILED DESCRIPTION [0033] The following detailed description of example embodiments refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. [0034] The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations. Further, one or more features or components of one embodiment may be incorporated into or combined with another embodiment (or one or more features of another embodiment). Additionally, in the flowcharts and descriptions of operations provided below, it is understood that one or more operations may be omitted, one or more operations may be added, one or more operations may be performed simultaneously (at least in part), and the order of one or more operations may be switched. [0035] It will be apparent that systems and/or methods, described herein, may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code. It is understood that software and hardware may be designed to implement the systems and/or methods based on the description herein. [0036] Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of possible implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of possible implementations includes each dependent claim in combination with every other claim in the claim set. [0037] No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” “include,” “including,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Furthermore, expressions such as “at least one of [A] and [B]” or “at least one of [A] or [B]” are to be understood as including only A, only B, or both A and B. 
[0038] As set forth above, the related art trains models on surface dialogue text, focuses only on domain-specific benchmark data, and requires either human annotations or an external parser. Thus, the related art does not exploit the general potentiality of semantic structures, has increased cost, may cause error propagation, and is proven to be weak in understanding the main semantic meaning of a dialogue context. [0039] Example embodiments provide a system and method that perform semantic-based pre-training for dialogue understanding. According to an aspect of the disclosure, a semantic-based pre-training framework is provided that leverages a deep semantic representation for dialogue pre-training. For instance, a semantic-graph-based pre-training (SARA) framework for dialogues may extend a standard pre-training framework with three tasks for learning core semantic units, semantic relations, and the overall semantic representation according to abstract meaning representation (AMR) graphs. The SARA framework may enhance a pre-trained dialogue model with semantic information during pre-training. This may be accomplished by using AMR as explicit semantic knowledge/structure for more fine-grained supervision when pre-training the model, to capture the core semantic information in dialogues during the pre-training. In this way, the model’s ability to infer semantic structures from conversations is improved, and the model does not require an external AMR parser in downstream applications. [0040] FIG.1 illustrates an AMR graph, in accordance with one or more example embodiments. As shown in FIG.1, for the sentence “The police hummed to the boy as he walked to town.”, the AMR highlights core semantic units (e.g., “police”, “hum”, “boy”) in the sentence and connects them with semantic relations (e.g., “:arg0”, “:time”) using a rooted directed graph. [0041] According to an aspect of the disclosure, AMR graphs may be explicitly leveraged for pre-training a dialogue model. For example, the SARA framework may include three pre-training sub-tasks: 1) semantic-based masked language modeling (MLM), 2) semantic relation prediction, and 3) semantic agreement. The semantic-based masked language modeling sub-task may extend a standard masked language modeling task by increasing attention to core semantic units in a dialogue in order to learn the core semantic units; the semantic relation prediction sub-task may learn the semantic relations between words; and the semantic agreement sub-task may learn an overall agreement of a dialogue and its corresponding AMR graph. In this way, the SARA framework combines the strengths of contextualized representation of pre-trained models and explicit semantic knowledge, while eliminating the requirement of an external semantic parser in downstream applications. [0042] According to an aspect of the disclosure, a pre-trained dialogue model (e.g., a pre-trained Transformer encoder) may be used for continuing to pre-train the model on dialogue(s) in a multitask setting, using AMR(s) of the dialogue(s) as explicit semantic knowledge for the continued pre-training. The multitask setting may include the three pre-training sub-tasks (e.g., semantic-based masking, semantic relation prediction, and semantic agreement). [0043] According to an aspect of the disclosure, a dialogue-level AMR graph may be constructed for an input dialogue. For instance, a respective dialogue-level AMR graph may be constructed for each of a plurality of input dialogues. Each input dialogue may include a plurality of sentences attributed to one or more speakers. The dialogue-level AMR graph may be constructed by building one or more utterance-level AMR graphs and connecting the utterance-level AMR graphs to a root node, where edges are labeled with a corresponding speaker. The utterance-level AMR graphs may be generated by independently transforming utterances into AMR using a pre-trained AMR parser.
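As a minimal illustration of the graph construction described in paragraph [0043], the following Python sketch parses each utterance independently and attaches the resulting utterance-level graphs to a shared root node through speaker-labeled edges. The `parse_utterance_to_amr` callable and the networkx-based graph representation are assumptions made for illustration only; any pre-trained AMR parser and graph data structure could be substituted, and this is not the patented implementation itself.

```python
# Minimal sketch (not the patented implementation) of building a dialogue-level AMR
# graph: each utterance is parsed independently and its utterance-level graph is
# attached to a shared root node via an edge labeled with the associated speaker.
# `parse_utterance_to_amr` is a hypothetical wrapper around any pre-trained AMR parser.
from typing import Callable, List, Tuple

import networkx as nx


def build_dialogue_amr(
    utterances: List[Tuple[str, str]],                    # (speaker, utterance text) pairs
    parse_utterance_to_amr: Callable[[str], nx.DiGraph],  # assumed pre-trained AMR parser
) -> nx.DiGraph:
    dialogue_graph = nx.DiGraph()
    dialogue_graph.add_node("ROOT", concept="dialogue")

    for idx, (speaker, text) in enumerate(utterances):
        utt_graph = parse_utterance_to_amr(text)          # utterance-level AMR graph
        # Prefix node ids so graphs from different utterances do not collide when merged.
        utt_graph = nx.relabel_nodes(utt_graph, {n: f"u{idx}:{n}" for n in utt_graph.nodes})
        dialogue_graph.update(utt_graph)

        # Connect each utterance-level root to the dialogue root; the connecting edge
        # is labeled with the corresponding speaker, as described above.
        for node, in_deg in utt_graph.in_degree():
            if in_deg == 0:
                dialogue_graph.add_edge("ROOT", node, label=f":speaker-{speaker}")

    return dialogue_graph
```

In practice, `parse_utterance_to_amr` could wrap an off-the-shelf sequence-to-graph AMR parser; only the root attachment and the speaker labeling of the connecting edges are specific to the construction described above.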
[0044] For instance, an input dialogue sequence may be denoted as \(X = [x_1, x_2, \ldots, x_n]\), where \(n\) is the number of tokens in the dialogue. The corresponding AMR may be a graph \(G = \langle V, E \rangle\), where \(V\) denotes a set of nodes (i.e., AMR concepts) and \(E\) denotes a set of labeled edges (i.e., AMR relations). An edge may be further represented by a triple \(\langle v_i, r_{ij}, v_j \rangle\), where an edge from node \(v_i\) to node \(v_j\) has label \(r_{ij}\). [0045] According to an aspect of the disclosure, a semantic-guided masking strategy may be implemented by the semantic-based MLM sub-task. The semantic-guided masking strategy may increase the attention given by the model to tokens that contain important semantic information, instead of treating all tokens equally and potentially wasting resources on tokens that provide little signal (e.g., punctuation, stop words). A token that contains important semantic information may be referred to as a semantic-aware unit (i.e., core semantic unit). A token may be identified as a semantic-aware unit when the token is aligned with an AMR node, according to an AMR-to-text alignment. [0046] FIG. 2a illustrates another AMR graph, in accordance with one or more example embodiments. The AMR graph in FIG.2a corresponds to the sentence “The police could help the housewife.”, where the tokens “police”, “could”, and “help” are aligned with a node of the AMR graph, and may be identified as semantic-aware units. [0047] FIG. 2b illustrates a semantic-guided masking strategy, in accordance with one or more example embodiments. As shown in FIG.2b, the semantic-guided masking strategy assigns a higher masking probability to tokens that contain important semantic information (e.g., “police”, “could”, “help”) compared to the other tokens (e.g., “The”, “the”). [0048] Since pre-trained models typically use a vocabulary with sub-word units, for an alignment pair \(\langle v_i, x_j \rangle\), the alignment may be extended as \(\langle v_i, x_j^1, x_j^2, \ldots, x_j^k \rangle\), where the AMR node \(v_i\) is aligned with the set of all tokens \(x_j^1, x_j^2, \ldots, x_j^k\) that are sub-words of word \(x_j\). For example, as shown in FIG.2a, the AMR node “housewife” is aligned with sub-tokens “house” and “##wife”. [0049] The indices of tokens that are identified as semantic-aware units by the semantic-guided masking strategy may be denoted as \(I' = [i_1, i_2, \ldots, i_m]\), and the semantic-based MLM sub-task may be defined as an optimization of the training objective \(\mathcal{L}_{sem\_mlm} = -\sum_{i \in I'} \log p(x_i \mid \hat{X})\), where \(\hat{X}\) denotes the input dialogue sequence after masking.
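The semantic-guided masking of paragraphs [0045]-[0049] can be illustrated with a short Python sketch in which tokens aligned with AMR nodes receive a higher masking probability than the remaining tokens. The specific probabilities, the [MASK] symbol, and the use of None as an "ignore" label are illustrative assumptions; the disclosure does not fix particular values.

```python
# Minimal sketch of the semantic-guided masking strategy: tokens aligned with AMR
# nodes (semantic-aware units, index set I') are masked with a higher probability
# than the remaining tokens. Probabilities and the [MASK] symbol are assumed values.
import random
from typing import List, Optional, Set, Tuple

MASK_TOKEN = "[MASK]"


def semantic_guided_mask(
    tokens: List[str],
    semantic_aware_indices: Set[int],   # I': token positions aligned with AMR nodes
    p_semantic: float = 0.3,            # assumed higher masking probability
    p_other: float = 0.1,               # assumed baseline masking probability
    seed: int = 0,
) -> Tuple[List[str], List[Optional[str]]]:
    rng = random.Random(seed)
    masked, labels = [], []
    for i, tok in enumerate(tokens):
        p = p_semantic if i in semantic_aware_indices else p_other
        if rng.random() < p:
            masked.append(MASK_TOKEN)   # the model must reconstruct this token
            labels.append(tok)          # contributes to L_sem_mlm
        else:
            masked.append(tok)
            labels.append(None)         # excluded from the MLM loss
    return masked, labels


# Example for "The police could help the house ##wife ." where "police", "could",
# "help", "house" and "##wife" are aligned with AMR nodes.
tokens = ["The", "police", "could", "help", "the", "house", "##wife", "."]
masked, labels = semantic_guided_mask(tokens, semantic_aware_indices={1, 2, 3, 5, 6})
```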
[0050] According to an aspect of the disclosure, the semantic relation prediction sub-task may be designed to learn semantic relations between words. To this end, the edges of each input AMR graph (corresponding to an input dialogue) may be mapped onto the corresponding sentence according to a node-to-word alignment, before training a predictor to generate the projected edges. For instance, since AMR relations are defined on AMR nodes instead of words in the dialogue text, a node-to-word alignment \(A\) may be used to project the AMR edges \(E\) onto the text according to the following rule: \(\hat{r}_{ij} = r_{i'j'}\) if \(x_i \in A(v_{i'})\) and \(x_j \in A(v_{j'})\), where \(\hat{r}_{ij}\) denotes the projected relation between words \(x_i\) and \(x_j\). [0051] In order to train the predictor, a Transformer encoder may be used to first generate contextualized word hidden states \(h = [h_1, h_2, \ldots, h_n]\). Based on the hidden states, a deep biaffine neural parser may be used to predict relations between words. For instance, to determine whether a directed edge (or arc) from \(x_i\) to \(x_j\) exists, the biaffine parser may use two separate multi-layer perceptrons (MLPs) (denoted as \(\mathrm{MLP}^{head}\) and \(\mathrm{MLP}^{dep}\)) to obtain two lower-dimensional representation vectors for each position, \(s_i = \mathrm{MLP}^{head}(h_i)\) and \(s'_j = \mathrm{MLP}^{dep}(h_j)\), where \(s_i\) is the representation vector of \(x_i\) as a head word and \(s'_j\) denotes the vector of \(x_j\) as a dependent word, and then calculate arc scores via a biaffine operation \(\mathrm{score}^{arc}_{ij} = s_i^{\top} W_{arc}\, s'_j\), where \(W_{arc}\) is a parameter matrix. The parser may further calculate the probability of assigning a label to the arc \((i, j)\), which is denoted as \(p(y^{label}_{ij} \mid X)\). Thus, the semantic relation prediction sub-task may be defined as an optimization of the training objective \(\mathcal{L}_{srp} = -\sum_{\langle x_i, \hat{r}_{ij}, x_j \rangle \in \hat{E}} \log p(y^{arc}_{ij} \mid X)\, p(y^{label}_{ij} = \hat{r}_{ij} \mid X)\), where \(\hat{E}\) represents the set of projected edges. [0052] FIG.2c illustrates a semantic relation prediction, in accordance with one or more example embodiments. As shown in FIG.2c, the edges of the AMR graph in FIG.2a are mapped onto tokens of the corresponding sentence according to the node-to-word alignment. For example, the directed edge “:arg0” from node “help-01” to node “police” shown in FIG.2a is mapped from the token “help” to the token “police” in the corresponding sentence shown in FIG.2c. As another example, since the word “housewife” in FIG.2a corresponds to a first token “house” and a second token “##wife”, the edge “:arg1” from the node “help-01” to the node “housewife” is mapped from the token “help” to the tokens “house” and “##wife” in the corresponding sentence shown in FIG.2c.
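The following PyTorch sketch shows one possible form of the deep biaffine arc scorer described in paragraph [0051]: separate head and dependent MLPs followed by a biaffine product with a parameter matrix W_arc. The layer sizes, the appended bias column, and the use of a single arc scorer (with the label scorer only noted in a comment) are assumptions made for brevity, not details fixed by the disclosure.

```python
# Illustrative PyTorch sketch of a deep biaffine arc scorer over contextualized word
# hidden states: separate head/dependent MLPs, then a biaffine product with a
# parameter matrix W_arc. Dimensions and the bias column are assumptions; a second
# biaffine with label-sized output would score p(y_label | X) analogously.
import torch
import torch.nn as nn


class BiaffineArcScorer(nn.Module):
    def __init__(self, hidden_dim: int = 768, arc_dim: int = 256):
        super().__init__()
        self.mlp_head = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.mlp_dep = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        # W_arc with one extra row so that an appended constant feature acts as a bias term.
        self.w_arc = nn.Parameter(torch.empty(arc_dim + 1, arc_dim))
        nn.init.xavier_uniform_(self.w_arc)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden_dim) hidden states from a Transformer encoder.
        s_head = self.mlp_head(h)                              # (B, L, arc_dim)
        s_dep = self.mlp_dep(h)                                # (B, L, arc_dim)
        ones = torch.ones(*s_head.shape[:-1], 1, device=h.device)
        s_head = torch.cat([s_head, ones], dim=-1)             # (B, L, arc_dim + 1)
        # scores[b, i, j]: score of a directed arc from head word i to dependent word j,
        # from which p(y_arc_ij | X) can be obtained with a sigmoid or a softmax over heads.
        return torch.einsum("bix,xy,bjy->bij", s_head, self.w_arc, s_dep)


h = torch.randn(2, 10, 768)          # stand-in for Transformer encoder outputs
arc_scores = BiaffineArcScorer()(h)  # (2, 10, 10)
```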
[0053] According to an aspect of the disclosure, the semantic agreement sub-task may be designed to encourage the model to learn the overall agreement of a dialogue and its corresponding AMR graph. For instance, an auxiliary network may be used to encode the AMR, and maximize the similarity score between hidden states of the text and the AMR. To this end, AMR graphs may be linearized into a sequence, and a pre-trained encoder may be used to transform the AMR into a set of hidden states. The linearized AMR graph may be defined as \(g = [g_1, g_2, \ldots, g_m]\), and the vector representations of the text and its corresponding AMR may be calculated as \(h_{text} = \mathrm{Pooling}(\mathrm{Encoder}_{text}(X))\) and \(h_{amr} = \mathrm{Pooling}(\mathrm{Encoder}_{amr}(g))\), where \(\mathrm{Encoder}_{text}(\cdot)\) is a text encoder and \(\mathrm{Encoder}_{amr}(\cdot)\) is an AMR encoder. [0054] FIG.2d illustrates a semantic agreement, in accordance with one or more example embodiments. As shown in FIG.2d, a linearized AMR graph may be input to an AMR encoder (e.g., \(\mathrm{Encoder}_{amr}(\cdot)\)), and the corresponding text may be input to a text encoder (e.g., \(\mathrm{Encoder}_{text}(\cdot)\)). The encoders \(\mathrm{Encoder}_{text}(\cdot)\) and \(\mathrm{Encoder}_{amr}(\cdot)\) may be initialized with the same weights but updated separately during training, and a \(\mathrm{Pooling}\) operation may be used to reduce the sequence of vectors into one vector. For instance, the hidden state of the first input token may be input into an MLP layer to obtain the “pooled” vector. [0055] According to an aspect of the disclosure, a cosine similarity may be used as a distance scoring operation and a contrastive learning framework may be adopted to train the model with the aim of pulling together semantically close text-AMR pairs and pushing apart unpaired examples. In particular, for a text \(X\), a positive example is its corresponding AMR graph \(G\), and negative examples are the AMR graphs of its neighbor dialogues in the corpus. By letting \(h^i_{text}\) and \(h^i_{amr}\) denote the representations of the \(i\)th example pair in the dataset, the semantic agreement sub-task may be defined as an optimization of the training objective \(\mathcal{L}_{sa} = -\log \frac{\exp(\mathrm{sim}(h^i_{text}, h^i_{amr})/\tau)}{\sum_{j} \exp(\mathrm{sim}(h^i_{text}, h^j_{amr})/\tau)}\), where \(\mathrm{sim}(\cdot,\cdot)\) denotes the cosine similarity, the denominator sums over the positive and negative examples for the \(i\)th example, and \(\tau > 0\) is a temperature hyper-parameter. [0056] According to an aspect of the disclosure, the model may be trained by optimizing the total loss of the above three sub-tasks: \(\mathcal{L}_{total} = \mathcal{L}_{sem\_mlm} + \alpha \mathcal{L}_{srp} + \beta \mathcal{L}_{sa}\), where \(\alpha\) and \(\beta\) are weighting hyper-parameters for \(\mathcal{L}_{srp}\) and \(\mathcal{L}_{sa}\), respectively. In order to lower the computational requirements, a model that has been pre-trained on textual inputs may be used to continue the training.
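A compact sketch of the semantic agreement objective of paragraph [0055] and the total loss of paragraph [0056] is given below. Here the other AMR graphs in the batch play the role of the neighbor-dialogue negatives described above, and the temperature tau and the weights alpha and beta are assumed values; only the cosine-similarity contrastive form and the weighted sum follow the description.

```python
# Minimal sketch of the semantic agreement loss and the total pre-training loss.
# Cosine similarity scaled by a temperature tau is used in a contrastive objective;
# the other AMR graphs in the batch stand in for the neighbor-dialogue negatives.
# tau, alpha and beta are assumed values, not values from the disclosure.
import torch
import torch.nn.functional as F


def semantic_agreement_loss(h_text: torch.Tensor, h_amr: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    # h_text, h_amr: (batch, dim) pooled representations of dialogues and their AMRs.
    h_text = F.normalize(h_text, dim=-1)
    h_amr = F.normalize(h_amr, dim=-1)
    sim = h_text @ h_amr.t() / tau                       # cosine similarities / temperature
    targets = torch.arange(sim.size(0), device=sim.device)
    # Row i: the positive is the i-th AMR; the remaining AMRs act as negatives.
    return F.cross_entropy(sim, targets)


def total_loss(l_sem_mlm: torch.Tensor, l_srp: torch.Tensor, l_sa: torch.Tensor,
               alpha: float = 1.0, beta: float = 1.0) -> torch.Tensor:
    # L_total = L_sem_mlm + alpha * L_srp + beta * L_sa
    return l_sem_mlm + alpha * l_srp + beta * l_sa


# Example with random pooled vectors standing in for Encoder_text / Encoder_amr outputs.
l_sa = semantic_agreement_loss(torch.randn(8, 768), torch.randn(8, 768))
```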
[0057] FIG.3 illustrates a flowchart of a method 300 for pre-training a dialogue model with semantic information, in accordance with one or more example embodiments. [0058] At 302, the method 300 includes generating a dialogue-level abstract meaning representation (AMR) graph for an input dialogue associated with a speaker. For instance, the method 300 may include building utterance-level AMR graphs by independently transforming utterances in the dialogue into AMR using a pre-trained AMR parser, and connecting the utterance-level AMR graphs with a root node, wherein edges of the AMR graph are labeled with the associated speaker. [0059] At 304, the method 300 includes learning core semantic units of the input dialogue based on nodes of the dialogue-level AMR graph. For instance, the method 300 may include identifying one or more semantic-aware units of the input dialogue based on the nodes of the dialogue-level AMR graph, and increasing an attention given by the dialogue model to tokens of the input dialogue that correspond to the one or more semantic-aware units. The method 300 may further include identifying a token of the input dialogue that is aligned with a node of the dialogue-level AMR graph as a semantic-aware unit of the input dialogue. The method 300 may further include assigning a masking probability to each token of the input dialogue, such that the masking probability assigned to a token corresponding to the one or more semantic-aware units is higher than the masking probability assigned to other tokens of the input dialogue. [0060] At 306, the method 300 includes learning semantic relations between words of a sentence of the input dialogue based on edges of the dialogue-level AMR graph. For instance, the method 300 may include projecting the edges of the dialogue-level AMR graph onto a corresponding sentence of the input dialogue according to a node-to-word alignment, and training a predictor to generate the projected edges. The method 300 may further include generating contextualized word hidden states of the input dialogue by using a Transformer encoder, and predicting relations between words based on the hidden states by using a deep biaffine neural parser. [0061] At 308, the method 300 includes learning an overall agreement of the input dialogue and the dialogue-level AMR graph. For instance, the method 300 may include linearizing the dialogue-level AMR graph and using a pre-trained encoder to transform the linearized AMR into a set of hidden states, and maximizing a similarity score between the set of hidden states and the dialogue-level AMR graph. The method 300 may further include using a cosine similarity as a distance scoring operation, and adopting a contrastive learning framework to train the dialogue model. [0062] At 310, the method 300 includes training the dialogue model based on the learned core semantic units, semantic relations between words, and overall agreement. For instance, the method 300 may include optimizing a total loss according to the equation \(\mathcal{L}_{total} = \mathcal{L}_{sem\_mlm} + \alpha \mathcal{L}_{srp} + \beta \mathcal{L}_{sa}\). [0063] FIG.4 illustrates a diagram of components of one or more devices, in accordance with one or more example embodiments. Referring to FIG.4, the device 400 may include a bus 410, a processor 420, a memory 430, a storage component 440, and a communication interface 450. It is understood that one or more of the components may be omitted and/or one or more additional components may be included. [0064] The bus 410 includes a component that permits communication among the components of the device 400. The processor 420 is implemented in hardware, firmware, or a combination of hardware and software. The processor 420 is a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. The processor 420 includes one or more processors capable of being programmed to carry out operations. [0065] The memory 430 includes a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by the processor 420. [0066] The storage component 440 stores information and/or software related to the operation and use of the device 400. For example, the storage component 440 may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, and/or a solid state disk), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. [0067] The communication interface 450 includes a transceiver-like component (e.g., a transceiver and/or a separate receiver and transmitter) that enables the device 400 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. The communication interface 450 may permit device 400 to receive information from another device and/or provide information to another device. For example, the communication interface 450 may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, or the like. [0068] The device 400 may perform one or more processes or operations described herein.
The device 400 may perform operations based on the processor 420 executing software instructions stored by a non-transitory computer-readable medium, such as the memory 430 and/or the storage component 440. A computer-readable medium is defined herein as a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices. [0069] Software instructions may be read into the memory 430 and/or the storage component 440 from another computer-readable medium or from another device via the communication interface 450. When executed, software instructions stored in the memory 430 and/or storage component 440 may cause the processor 420 to perform one or more processes described herein. [0070] Additionally, or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, embodiments described herein are not limited to any specific combination of hardware circuitry and software. [0071] The number and arrangement of components shown in FIG.4 are provided as an example. In practice, device 400 may include additional components, fewer components, different components, or differently arranged components than those shown in FIG.4. Additionally, or alternatively, a set of components (e.g., one or more components) of device 400 may perform one or more operations described as being performed by another set of components of device 400. [0072] In various embodiments of the present disclosure, any one of the operations or processes of FIGS. 1-3 may be implemented by or using any one of the elements illustrated in FIG.4. [0073] According to example embodiments, the device 400 may perform pre-training of a dialogue model with semantic information. For instance, the device 400 may generate a dialogue-level AMR graph for an input dialogue associated with a speaker, learn core semantic units of the input dialogue based on nodes of the dialogue-level AMR graph, learn semantic relations between words of a sentence of the input dialogue based on edges of the dialogue-level AMR graph, learn an overall agreement of the input dialogue and the dialogue-level AMR graph, and train the dialogue model based on the learned core semantic units, semantic relations between words, and overall agreement. [0074] The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations. [0075] Some embodiments may relate to a system, a method, and/or a computer readable medium at any possible technical detail level of integration. Further, one or more of the components described above may be implemented as instructions stored on a computer readable medium and executable by at least one processor (and/or may include at least one processor). The computer readable medium may include a computer-readable non-transitory storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out operations. [0076] The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. [0077] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. [0078] Computer readable program code/instructions for carrying out operations may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects or operations. [0079] These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the operations/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to operate in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the operation/act specified in the flowchart and/or block diagram block or blocks. [0080] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the operations/acts specified in the flowchart and/or block diagram block or blocks. [0081] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer readable media according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical operation(s). The method, computer system, and computer readable medium may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in the Figures. In some alternative implementations, the operations noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed concurrently or substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified operations or acts or carry out combinations of special purpose hardware and computer instructions. [0082] It will be apparent that systems and/or methods, described herein, may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. 
Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code—it being understood that software and hardware may be designed to implement the systems and/or methods based on the description herein.