

Title:
METHOD OF TEXT TRANSLATION AND ELECTRONIC DEVICE CAPABLE THEREOF
Document Type and Number:
WIPO Patent Application WO/2009/036800
Kind Code:
A1
Abstract:
The present invention relates to a method of text translation comprising the steps of: capturing (401) a text image input, performing character recognition (402) on said text image input, such that a recognized text (5) is provided, selecting an extensive mode or a target mode (403), said extensive mode comprising the sub-steps of: performing linguistic analysis (501) in respect to said recognized text, translating (502) said recognized text, and selecting (503) a phrase, and said target mode comprising the sub-steps of: selecting (601) a phrase from said recognized text, performing linguistic analysis (602) in respect to said selected phrase, translating (603) said selected phrase (26, 36), executing the selected mode, and presenting (404) the corresponding translation of said selected phrase. The present invention also relates to an electronic device capable of such text translation and a computer program comprising software instructions that, when executed, performs such a text translation.

Inventors:
WANG HAO (CN)
LIU YING FEI (CN)
WANG KONG QIAO (CN)
KOTTZIEPER GUENTHER (CN)
Application Number:
PCT/EP2007/059883
Publication Date:
March 26, 2009
Filing Date:
September 19, 2007
Assignee:
NOKIA CORP (FI)
WANG HAO (CN)
LIU YING FEI (CN)
WANG KONG QIAO (CN)
KOTTZIEPER GUENTHER (CN)
International Classes:
G06F17/28; G06V30/10
Domestic Patent References:
WO2003017229A1 (2003-02-27)
Foreign References:
US20010056352A1 (2001-12-27)
Other References:
DOERMANN D ET AL: "Progress in camera-based document image analysis", DOCUMENT ANALYSIS AND RECOGNITION, 2003. PROCEEDINGS. SEVENTH INTERNATIONAL CONFERENCE ON AUG. 3-6, 2003, PISCATAWAY, NJ, USA, IEEE, 3 August 2003 (2003-08-03), pages 606 - 616, XP010656833, ISBN: 0-7695-1960-1
WATANABE Y ET AL: "Translation camera on mobile phone", MULTIMEDIA AND EXPO, 2003. PROCEEDINGS. 2003 INTERNATIONAL CONFERENCE ON 6-9 JULY 2003, PISCATAWAY, NJ, USA, IEEE, vol. 2, 6 July 2003 (2003-07-06), pages 177 - 180, XP010650689, ISBN: 0-7803-7965-9
Attorney, Agent or Firm:
AWAPATENT AB (S- Göteborg, SE)
Claims:

Claims

1. A method of text translation comprising the steps of: capturing (401) a text image input, performing character recognition (402) on said text image input, such that a recognized text (5) is provided, selecting an extensive mode or a target mode (403), said extensive mode comprising the sub-steps of: performing linguistic analysis (501) in respect to said recognized text, translating (502) said recognized text, and selecting (503) a phrase, and said target mode comprising the sub-steps of: selecting (601) a phrase from said recognized text, performing linguistic analysis (602) in respect to said selected phrase, translating (603) said selected phrase (26, 36), executing the selected mode, and presenting (404) the corresponding translation (27, 37) of said selected phrase.

2. A method in accordance with claim 1, wherein said recognized text, said selected phrase, and said corresponding translation of said selected phrase are presented simultaneously.

3. A method in accordance with claim 1, wherein said step of capturing a text image input comprises acquiring an image with a camera (8) comprised in a mobile communication terminal.

4. A method in accordance with claim 3, wherein said linguistic analysis and/or said translating is performed remotely from said mobile communication terminal.

5. A method in accordance with claim 1, wherein said target mode further comprises the sub-steps of: performing linguistic analysis in respect to said recognized text, and translating a sentence comprising said selected phrase.

6. A method in accordance with claim 1, further comprising the step of presenting the translation corresponding to a sentence comprising said selected phrase.

7. A method in accordance with claim 6, wherein a language model is utilized for interpretation.

8. A method in accordance with claim 1, wherein said linguistic analysis comprises either or both maximum backward and forward phrase matching.

9. An electronic device (3) capable of text translation, comprising: means (8) for capturing a text image input, means (9) for performing character recognition on said text image input, such that a recognized text is provided, extensive mode translating means (9, 19) for: performing linguistic analysis in respect to said recognized text, translating said recognized text, and selecting a phrase, target mode translating means (9, 19) for: selecting a phrase from said recognized text, performing linguistic analysis in respect to said selected phrase, translating said selected phrase, means (13, 14) for selecting one of said extensive mode translating means and said target mode translating means to effect translation, and means (4, 15) for presenting the corresponding translation of said selected phrase.

10. An electronic device in accordance with claim 9, further comprising means for presenting said recognized text, said selected phrase, and said corresponding translation of said selected phrase simultaneously.

11. An electronic device in accordance with claim 9, wherein said means for capturing a text image input comprises a camera comprised in a mobile communication terminal.

12. An electronic device in accordance with claim 9, wherein said target mode translating means further are for: performing linguistic analysis in respect to said recognized text, and translating a sentence comprising said selected phrase.

13. An electronic device in accordance with claim 9, further comprising means for presenting the translation corresponding to a sentence comprising said selected phrase.

14. An electronic device in accordance with claim 9, wherein said selected phrase comprises one of a single word, a plurality of consecutive words, a single character or a plurality of consecutive characters.

15. A computer program comprising software instructions that, when executed, performs the method of claim 1.

Description:

Method of text translation and electronic device capable thereof

Technical Field of the Invention

The present invention relates to a method of text translation comprising capturing text, performing character recognition on the text and presenting a corresponding translation. The present invention also relates to an electronic device capable of such text translation and a computer program comprising software instructions that, when executed, performs such a text translation.

Background Art

Communication devices have during the last decades evolved from being more or less primitive telephones, capable of conveying only narrow band analogue signals as voice conversations, into the multimedia mobile devices of today capable of conveying large amounts of data representing any kind of media. For example, a telephone in a GSM, GPRS, EDGE, UMTS or CDMA2000 type of system is capable of recording, conveying and displaying both still images and moving images, i.e. video streams, in addition to audio data such as speech or music.

Furthermore, internationalization is driving people to actively or passively use multiple languages in their daily lives. Thus language translation, or simply looking a word up in a dictionary, is a common but important procedure in many situations. For example, people often encounter new and unknown words when they read newspapers or magazines in a foreign language, or they do not know the corresponding word in a foreign language for a word in their native language. Optical character recognition (OCR) based applications integrated in camera-equipped mobile phones have, consequently, emerged during recent years. Such applications typically involve taking a snapshot of a piece of text and providing the digital image to a recognition engine running in the terminal or in a server connected to the terminal via a communication network. The translation of text captured in the snapshot is then, in one way or another, presented to the user on the display of the mobile phone.

The expectations with regards to text translation, however, differ from one use case to another. For instance, a user travelling in a country with a language of which he/she has no knowledge most likely expects sentence-level translations, or alternatively, phrase-level translation, which provides the user with a brief meaning of the foreign language sentences assisted by a phrase lexicon. Users essentially familiar with the foreign language, on the other hand, would prefer translations on a phrase or word level, pointing out the unrecognized word to be translated. Efficient selection of the target word/phrase to be translated and an accurate translation thereof should then naturally be prioritized rather than a more extensive translation.

It is thus challenging to meet both of the above-mentioned use cases, and consequently there is a need for text translation fulfilling conflicting user expectations regarding the manner in which a text to be translated is selected, and the manner in which a corresponding translation result is presented.

Summary of the Invention

It is therefore an object of the present invention to provide a method of text translation of the type mentioned by way of introduction, an electronic device capable of such text translation, and a software program comprising software instructions performing such a text translation, in which the above related drawbacks are eliminated wholly or at least partly.

According to a first aspect of the invention, a method of text translation is provided, comprising the steps of: capturing a text image input, performing character recognition on the text image input, such that a recognized text is provided, selecting an extensive mode or a target mode, the extensive mode comprising the sub-steps of: performing linguistic analysis in respect to the recognized text, translating the recognized text, and selecting a phrase, and the target mode comprising the sub-steps of: selecting a phrase from the recognized text, performing linguistic analysis in respect to the selected phrase, translating the selected phrase, executing the selected mode, and presenting the corresponding translation of the selected phrase.

The invention likewise concerns an electronic device capable of such text translation. Thus, according to a second aspect of the invention, an electronic device capable of text translation is provided comprising: means for capturing a text image input, means for performing character recognition on the text image input, such that a recognized text is provided, extensive mode translating means for: performing linguistic analysis in respect to the recognized text, translating the recognized text, and selecting a phrase, target mode translating means for: selecting a phrase from the recognized text, performing linguistic analysis in respect to the selected phrase, translating the selected phrase, means for selecting one of said extensive mode translating means and said target mode translating means to effect translation, and means for presenting the corresponding translation of the selected phrase.

Additionally, the invention concerns a computer program; hence, according to a third aspect of the invention, there is provided a computer program comprising software instructions that, when executed, performs text translation of the kind defined above.

Thus, for text translation in accordance with the present invention, an option for selection of an extensive text translation mode or a target text translation mode is given, whereby the selected mode determines the extent of the translation to come.

The extensive mode is suitable for users confronted with text in a foreign language with which they are essentially unfamiliar, and for which text they therefore need to have an extensive translation in order to grasp an understanding of the content.

In extensive mode, the captured text is, after character recognition, linguistically analysed and each phrase of the recognized text is then translated. By performing translation of all the phrases of the recognized text in an initial stage, it is possible, after completion of the translation, to navigate through the phrases, with the corresponding translation options of the presently selected phrase immediately shifting along with each new selected phrase. Browsing through the phrases will give a user a preliminary understanding of the recognized text.

The target mode, on the other hand, is suitable for users confronted with text in a foreign language with which they are essentially familiar, and for which text they therefore do not need to have an extensive translation in order to grasp an understanding of the content. On the contrary, the user is most likely interested in translating a target word or phrase with which he/she is unfamiliar, and not the translation of the entire recognized text, which is the case in the extensive mode as described above.

In target mode, the captured text is, after character recognition, not initially translated. On the contrary, a phrase is first selected, and not until after the selection is this phrase linguistically analysed and translated. The processing subsequently involves linguistic analysis only in respect to the selected phrase, not the entire recognized text, and thereby the processing time is shorter in target mode than in extensive mode.

Consequently, the ability to select different text translation modes in accordance with the present invention provides text translation suitable for use cases having different expectations with regards to the translation of a text.

In order to obtain an optimal overview, the recognized text, the selected phrase, and the corresponding translation of the selected phrase are preferably, although not necessarily, presented simultaneously.

Implementing text translation in accordance with the present invention onto a mobile communication terminal, utilizing the terminal's integrated camera for capturing a text to be translated, provides an active shoot-to-translate solution for text analysis and phrase or sentence translation from which any user of the mobile communication terminal can easily benefit. It is thus preferred that capturing a text image input comprises acquiring an image with a camera comprised in a mobile communication terminal.

Furthermore, as an alternative to performing all steps locally within the mobile communication terminal, the linguistic analysis and/or the translating may likewise be performed remotely from said mobile communication terminal. The analysis and/or the translating may for instance be executed in a server or any other communication entity connected to the terminal via a communication network.
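By way of a non-limiting illustration, such remote processing could be realized as a simple request from the terminal to a server. The following Python sketch is hypothetical: the endpoint URL, the JSON payload and the response format are assumptions made for illustration only and are not defined by the invention.

import json
import urllib.request


def translate_remotely(recognized_text, mode, url="http://translation-server.example/api"):
    """Send the recognized text to a remote server and return its translation result."""
    # The payload layout below is an assumption; any server-side protocol could be used.
    payload = json.dumps({"text": recognized_text, "mode": mode}).encode("utf-8")
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:  # blocking call over the network
        return json.loads(response.read().decode("utf-8"))


# Example usage (requires a reachable server implementing the assumed API):
# result = translate_remotely("...", mode="extensive")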

Regardless of whether the extensive or the target mode is selected, it might be of interest to obtain a translation on a sentence level instead of on a phrase level as described above. As opposed to the phrase level, where corresponding translation options are provided for the selected phrase only, a translation of the sentence comprising the selected phrase may be requested, not just the phrase. Consequently, the target mode may further comprise the sub-steps of performing linguistic analysis in respect to the recognized text, and translating a sentence comprising the selected phrase.

Additionally, the text translation method may further comprise the step of presenting the translation corresponding to a sentence comprising the selected phrase.

Furthermore, a language model is preferably, but not necessarily, utilized for interpretation, in order to optimize the translation on sentence level. The language model reforms phrases into an understandable sentence, which may then be presented.

In order to improve the linguistic analysis, the linguistic analysis may comprise either or both maximum backward and forward phrase matching. Performing both maximum backward and forward matching enables coverage of all possible phrases, as the resulting phrases are combined. Furthermore, in order to support characters of any language, the selected phrase may comprise one of a single word, a plurality of consecutive words, a single character or a plurality of consecutive characters.

Other aspects, benefits and advantageous features of the invention will be apparent from the following description and claims.

Brief Description of the Drawings

The invention will be more apparent from the accompanying drawings, which are provided by way of non-limiting examples.

Figure 1A illustrates a bulletin board of which a user of a mobile communication terminal, capable of text translation in accordance with the present invention, takes a snapshot.

Figure 1B schematically illustrates a functional block diagram of the exemplifying mobile communication terminal of Figure 1A.

Figure 2 illustrates a display of a mobile communication terminal capable of text translation in accordance with the present invention, showing recognized text in an extensive mode in accordance with a first embodiment.

Figure 3 illustrates the display of the mobile communication terminal in accordance with the first embodiment in Figure 2, showing recognized text in a target mode.

Figure 4 is a flowchart illustrating the operation of text translation in accordance with the first embodiment of the present invention, describing the operation of extensive mode as well as target mode.

Detailed Description of Preferred Embodiments of the Invention

In the following detailed description, preferred embodiments of the present invention will be described. However, it is to be understood that features of the different embodiments are exchangeable between the embodiments and may be combined in different ways, unless anything else is specifically indicated. It may also be noted that, for the sake of clarity, the dimensions of certain components illustrated in the drawings may differ from the corresponding dimensions in real-life implementations of the invention.

Figure 1A illustrates a user 2 who, with the use of his/her mobile communication device 3 capable of text translation in accordance with the present invention, is able to acquire foreign text, which can be fully or partly translated. Although a mobile communication terminal represents the electronic device 3 capable of text translation in this example, the invention is not restricted thereto. Any suitable electronic device 3 can likewise be utilized. Furthermore, even though preferred, a user 2 is not required to carry out the inventive concept. The user's 2 actions described hereinafter may likewise be carried out by means other than a human, based on criteria defined by the designer.

The mobile communication terminal 3 of Figure 1A further comprises a camera 8, with which the user 2 can take a snapshot of a bulletin board 1 comprising text in a language that is foreign to the user 2. The bulletin board 1 is merely exemplifying, and any medium 1 showing text, which the user 2 can acquire with the use of an electronic device 3 according to the present invention, can be utilized with the inventive concept. Other examples are newspapers, magazines, signs and so on. The present invention is further not restricted to the use of a camera 8 for capturing the text to be translated, although this solution is preferred. After character recognition, the captured text is presented as recognized text, which is further described hereinafter with reference to Figures 2 and 3, showing two different selectable modes, and the steps of Figure 4.

Figure 1B shows a block diagram of an electronic device 3 in accordance with the present invention, in the form of the exemplifying mobile communication terminal 3 in Figure 1A. The terminal 3 comprises a processing unit 9 connected to an antenna 10 via a transceiver 11, a memory unit 12, a microphone 13, a keyboard and/or joystick and/or pointer 14, a speaker 15 and a camera 8.

No detailed description will be presented regarding the specific functions of the different blocks of the terminal 3. In short, however, as the person skilled in the art will realize, the processing unit 9 controls the overall function of the functional blocks in that it is capable of receiving input from the keyboard/joystick/pointer 14, audio information via the microphone 13, text image input via the camera 8, and suitably encoded and modulated data via the antenna 10 and transceiver 11. Each block is realized by software and/or hardware. The terminal 3 is typically in connection with a communication network 16 via a radio interface 17. As the skilled person will realize, the network 16 illustrated in Figure 1B may represent any one or more interconnected networks, including mobile, fixed and data communication networks such as the Internet. A "generic" communication entity 18 is shown as being connected to the network 16. This is to illustrate that the terminal 3 may be communicating with any entity, including other electronic devices and data servers that are connected to the network 16. The generic communication entity 18 comprises a processing unit 19 connected to an antenna 20 via a transceiver 21, and a memory unit 22, for support of remote processing.

Figure 2 illustrates a display 4 of a mobile communication terminal 3 capable of text translation in accordance with the present invention, showing recognized text 5 in an extensive mode in accordance with a first embodiment. The use of a display 4 for presentation is preferred, but the invention is not restricted thereto. For instance, voice presentation can likewise be utilized through the speaker 15. Furthermore, the recognized text 5 in this example comprises Chinese characters, but any characters forming a language are likewise supported by the present invention.

A selected phrase 26 is highlighted, pointed out by a hand-shaped cursor or indicated in any other manner in the recognized text 5, and is additionally shown in the left corner of the display 4. Further, a translation 27 of the selected phrase 26 is shown at the bottom of the display 4. Preferably, but not necessarily, the recognized text 5, the selected phrase 26 and the translation 27 are all presented simultaneously. Their location on the display 4 in relation to one another in Figure 2 is merely exemplifying.

Figure 3, on the other hand, illustrates the display 4 of the mobile communication terminal 3 shown in Figure 2, showing the recognized text 5 in a target mode.

Similarly to the extensive mode, in the target mode illustrated in Figure 3 a selected phrase 36 is highlighted, pointed out by a hand-shaped cursor or indicated in any other manner in the recognized text 5, and additionally shown in the left corner of the display 4. Further, a translation 37 of the selected phrase 36 is shown at the bottom of the display 4.

The operations of the extensive mode and the target mode will be further discussed in the flowchart of Figure 4, which describes the operation of text translation in accordance with the first embodiment of the present invention. The operation is preferably implemented as software steps stored in a memory and executed in a CPU, for instance the memory 12 and CPU 9 of the terminal 3 in Figure 1B. Alternatively, some steps, as described below, may likewise be performed remotely, as for instance by the memory 22 and CPU 19 in the server 18 connected to the electronic device 3 via the communication network 16 as shown in Figure 1B.

In order to arrive at the extensive or target mode, the initial steps of Figure 4 first need to be described.

In a first step 401, a text image input is captured, i.e. text to be translated is acquired through the use of, for instance, the camera 8 of the mobile communication terminal 3 as shown in Figure 1A.

In a second step 402, character recognition is performed on the captured text, in order to provide a recognized text 5 that can be divided into phrases, as shown in Figures 2 and 3. A phrase can comprise one of a single word, a plurality of consecutive words, a single character or a plurality of consecutive characters.
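As a non-limiting illustration of step 402, an off-the-shelf OCR engine could be used to obtain the recognized text from the captured image. The sketch below assumes that the Python pytesseract bindings and the Pillow imaging library are available; the image file name and the language code are purely illustrative, and the invention is not tied to any particular recognition engine.

from PIL import Image
import pytesseract


def recognize_text(image_path, lang="chi_sim"):
    """Perform optical character recognition on a captured text image (step 402)."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image, lang=lang)


# recognized_text = recognize_text("bulletin_board.jpg")  # hypothetical snapshot from the camera 8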

The user 2 is in step 403 prompted to select the extensive mode or the target mode. Which mode to choose is for the user 2 to decide depending on his/her needs with regards to translation of the recognized text 5. Selection is realized by for instance the user 2 maneuvering the keyboard, pointer or joystick 14, or perhaps by voice commands using the microphone 13.

The extensive mode is described with reference to the embodiment shown in Figure 2. This mode is aimed at users 2 confronted with text in a foreign language with which they are essentially unfamiliar, and for which text they therefore need to have an extensive translation in order to grasp an understanding of the content.

Consequently, a linguistic analysis is, in step 501, performed in respect to the recognized text 5. The analysis in step 501 may comprise backward or forward maximum matching or a combination of both. For backward phrase matching, text is analyzed from the right to the left, and for forward phrase matching, text is analyzed from the left to the right. Depending on which phrases are matched with one another, the meaning of the phrases may differ. This is particularly, although not exclusively, noticeable for text comprising Chinese characters, where the interpretation of one possible combination of characters may differ from the interpretation resulting from another combination. In order to cover all possible phrases, both backward and forward maximum matching is preferably performed, whereby the resulting phrase sets are combined and the redundant phrases in the matching results are removed.
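By way of a non-limiting example, the combination of forward and backward maximum matching described above can be sketched as follows in Python. The lexicon, the sample text and the function names are purely illustrative assumptions and do not form part of the claimed method.

LEXICON = {"研究", "研究生", "生命"}   # illustrative phrase lexicon
MAX_PHRASE_LEN = 3                     # length of the longest lexicon entry


def forward_max_match(text, lexicon, max_len=MAX_PHRASE_LEN):
    """Scan from the left to the right, greedily taking the longest known phrase."""
    phrases, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in lexicon:
                phrases.append(candidate)
                i += length
                break
    return phrases


def backward_max_match(text, lexicon, max_len=MAX_PHRASE_LEN):
    """Scan from the right to the left, greedily taking the longest known phrase."""
    phrases, j = [], len(text)
    while j > 0:
        for length in range(min(max_len, j), 0, -1):
            candidate = text[j - length:j]
            if length == 1 or candidate in lexicon:
                phrases.insert(0, candidate)
                j -= length
                break
    return phrases


def combined_phrases(text, lexicon):
    """Combine both matching directions and remove redundant duplicate phrases."""
    combined = forward_max_match(text, lexicon) + backward_max_match(text, lexicon)
    seen, result = set(), []
    for phrase in combined:
        if phrase not in seen:
            seen.add(phrase)
            result.append(phrase)
    return result


print(forward_max_match("研究生命", LEXICON))   # ['研究生', '命']
print(backward_max_match("研究生命", LEXICON))  # ['研究', '生命']
print(combined_phrases("研究生命", LEXICON))    # ['研究生', '命', '研究', '生命']

As the example output illustrates, the two scanning directions can yield different phrase interpretations of the same character string, which is exactly why combining them and removing the duplicates is preferred.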

Furthermore, the linguistic analysis of step 501 may, but is not required to, comprise any combination of different procedures and considerations such as rule-based word association and automatic object extraction of the text to be analyzed.

In Chinese-to-English translation, for instance, there is often, as previously discussed, a problem of identifying which combination of characters could compose a valid unit (word/phrase) to be translated. Rule-based word association, however, supports finding the possible combination of the concurrent characters using context sensing and linguistic rules. In target mode, for instance, rule-based word association may be utilized in identifying the valid combination of characters whose position is nearest to the hand-shaped cursor on the display 4, which is then recognized as the selected phrase 36. Automatic object detection further supports extraction of the selected phrase 26, 36 to be translated. In target mode, for instance, the hand-shaped cursor gives knowledge about the position of the selected phrase 36, and a revised connected-component-based algorithm may be applied for object detection and segmentation. In case the selected phrase 36 is an isolated word, layout analysis may give the accurate block of the word, otherwise a relative region (e.g. a line of Chinese characters without splits) may be extracted.
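A minimal sketch of how the phrase nearest to the cursor could be identified in target mode is given below; the character coordinates, the cursor position and the lexicon are hypothetical, and a practical implementation would additionally rely on the rule-based word association and object detection referred to above.

def nearest_phrase(text, char_x_positions, cursor_x, lexicon, max_len=3):
    """Among the valid lexicon phrases in the text, return the one closest to cursor_x."""
    best, best_dist = None, float("inf")
    for start in range(len(text)):
        for length in range(1, min(max_len, len(text) - start) + 1):
            candidate = text[start:start + length]
            if candidate in lexicon:
                span = char_x_positions[start:start + length]
                centre = sum(span) / len(span)      # horizontal centre of the candidate phrase
                distance = abs(centre - cursor_x)
                if distance < best_dist:
                    best, best_dist = candidate, distance
    return best


lexicon = {"研究", "研究生", "生命"}
positions = [10, 30, 50, 70]                        # x coordinate of each recognized character
print(nearest_phrase("研究生命", positions, cursor_x=55, lexicon=lexicon))  # 生命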

Rule-based word association and automatic object extraction are however conventional techniques, described for instance in the co-pending application US11/552,348, and will consequently not be discussed further.

In step 502, the recognized text 5 is translated into the language of choice. The translation is performed on a phrase level, i.e. each phrase is individually translated. If several translation options are feasible for a phrase, they may all be provided.
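As a non-limiting illustration of step 502, each phrase resulting from the linguistic analysis may simply be looked up in a bilingual phrase lexicon, keeping every feasible translation option. The lexicon entries below are hypothetical toy data.

PHRASE_LEXICON = {
    "研究": ["research", "study"],
    "研究生": ["graduate student"],
    "生命": ["life"],
    "命": ["life", "fate"],
}


def translate_phrases(phrases, lexicon=PHRASE_LEXICON):
    """Map each phrase of the recognized text to its list of translation options."""
    return {phrase: lexicon.get(phrase, ["<no entry>"]) for phrase in phrases}


for phrase, options in translate_phrases(["研究生", "命", "研究", "生命"]).items():
    print(phrase, "->", " / ".join(options))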

As the linguistic analysis in step 501 in case of a fairly large recognized text 5 can be time consuming, the user 2 is preferably given notice to await completion of the processing.

The user 2 then selects, in step 503, a phrase of which he/she is interested in knowing the translation, defined as the selected phrase 26. Selection of a phrase is implemented with, for instance, the use of a joystick 14 comprised in the electronic device 3, with which joystick 14 the user 2 can navigate among the phrases of the recognized text 5.

Thereby, the operation of the specific steps 501 to 503 in extensive mode is described, and the procedure proceeds to step 404. Here, the corresponding translation of the selected phrase 26 is presented, and in Figure 2 this is implemented by showing the translation 27 on the display 4.

If, instead of the extensive mode, the target mode is selected in step 403, the flow proceeds to step 601.

The target mode is described with reference to Figure 3. This mode is aimed at users 2 confronted with text in a foreign language with which they are essentially familiar, and for which text they therefore do not need to have an extensive translation in order to grasp an understanding of the content. On the contrary, the user 2 is most likely interested in translating a target word or phrase with which he/she is unfamiliar, and does not need to be bothered with the translation of the entire recognized text 5, which is the case in the extensive mode as described in the foregoing.

Consequently, the user 2 selects, in step 601, a phrase of which he/she is interested in knowing the translation, defined as the selected phrase 36. Selection of a phrase is implemented with, for instance, the use of a pointer 14 comprised in the electronic device 3, with which pointer 14 the user 2 can navigate among the phrases of the recognized text 5.

A linguistic analysis can, in step 602, be performed in respect to the selected phrase 36. The analysis in step 602 may, similarly to the analysis of step 501, comprise backward or forward maximum matching or a combination of both, and, if preferred, any combination of different procedures and considerations such as rule-based word association and automatic object extraction of the text to be analyzed.

In step 603, the selected phrase 36 is translated into the language of choice. If several translation options are feasible for the selected phrase 36, they may all be provided.

Note that the linguistic analysis in step 602 and the translation in step 603 are performed only in respect to the selected phrase 36. As the linguistic analysis 501, 602 is the most time-consuming step in the text translation procedure, less time is thus required for processing in target mode, as the processing involves only the selected phrase 36, not the entire recognized text 5 as in extensive mode.

Thereby, the operation of the specific steps 601 to 603 in target mode is described, and the procedure proceeds to step 404. Here, the corresponding translation of the selected phrase 36 is presented, and in Figure 3 this is implemented by showing the translation 37 on the display 4.
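The control flow of Figure 4 can be summarized in the following non-limiting sketch, in which the capture, recognition, analysis, translation, selection and presentation functions are mere placeholders for the components described above; only the order of the numbered steps is illustrated.

def run_translation(capture, recognize, analyse, translate, select_phrase, present, mode):
    """Illustrative control flow of Figure 4; all callables are placeholders."""
    image = capture()                                 # step 401: capture a text image input
    text = recognize(image)                           # step 402: character recognition
    if mode == "extensive":                           # mode chosen in step 403
        phrases = analyse(text)                       # step 501: analyse the whole recognized text
        options = {p: translate(p) for p in phrases}  # step 502: translate every phrase up front
        phrase = select_phrase(phrases)               # step 503: the user selects a phrase
        translation = options[phrase]
    else:                                             # target mode
        phrase = select_phrase(text)                  # step 601: the user selects a phrase first
        analyse(phrase)                               # step 602: analyse only the selected phrase
        translation = translate(phrase)               # step 603: translate only the selected phrase
    present(phrase, translation)                      # step 404: present the corresponding translation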

Regardless of which mode the user 2 has selected in step 403, he/she is able to navigate the phrases of the recognized text 5. The user 2 can subsequently alternate between the phrases, selecting a new selected phrase 26, 36, as shown in step 405. By, for instance, stepping through the phrases in extensive mode using the joystick 14, or pointing out different phrases in target mode using the pointer 14, the user 2 is continuously given the corresponding applicable translation options 27, 37 of the currently selected phrase 26, 36.

In step 406, if the user 2 has selected a new phrase 26, 36 in step 405, it is determined whether extensive mode or target mode was selected in step 403. In case of extensive mode, the flow returns to step 404, presenting the corresponding translation 27 of the new selected phrase 26. In case of target mode, the flow returns to step 602 for linguistic analysis in respect to the selected phrase 36 and translation of the selected phrase 36 in step 603, before presentation of the corresponding translation 37 of the selected phrase 36.

The steps of the extensive mode as well as the target mode are in this example performed locally within the electronic device 3. The present invention is, however, not restricted thereto; the linguistic analysis step 501, 602 and/or the translating step 502, 603 may likewise be performed remotely, as for instance in a server 18 connected to the electronic device 3 via the communication network 16, as shown in Figure 1B.

Furthermore, according to a second embodiment (not shown) of the present invention, the translation can be presented on a sentence level. Instead of translation of single phrases one by one, giving a preliminary understanding, a translation of the sentence comprising the selected phrase 26, 36 will, if presentation on a sentence level is chosen by the user 2, be presented. In order to support this functionality in target mode, the linguistic analysis in step 602 needs to be performed in respect to the recognized text 5, and the translation in step 603 needs to be performed on the sentence comprising the selected phrase 36.

For optimum translation on a sentence level, a language model is preferably utilized for interpretation. The language model reforms phrases into an understandable sentence, and a translation of the sentence comprising the selected phrase 26, 36 can be presented.
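A very small illustration of how a language model could be used to reform phrase translations into an understandable sentence is sketched below: a simple bigram score selects the most fluent candidate combination. The bigram counts and candidate word sequences are hypothetical toy data, whereas a real implementation would use a full statistical language model.

BIGRAM_COUNTS = {
    ("graduate", "student"): 8,
    ("student", "life"): 6,
    ("research", "fate"): 0,
}


def bigram_score(words, counts=BIGRAM_COUNTS):
    """Score a candidate sentence by its (add-one smoothed) bigram counts."""
    return sum(counts.get(pair, 0) + 1 for pair in zip(words, words[1:]))


candidates = [
    ["graduate", "student", "life"],   # one way of combining the phrase translations
    ["research", "fate"],              # another, less fluent combination
]
best = max(candidates, key=bigram_score)
print(" ".join(best))                  # graduate student life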

Hereby, an active shoot-to-translate solution for text analysis and phrase or sentence translation in a mobile communication terminal 3 has been described in accordance with a first and second embodiment of the present invention. It should however be appreciated by those skilled in the art that several further alternatives are possible. For example, the features of the different embodiments discussed above may naturally be combined in many other ways.

It will be appreciated by those skilled in the art that several such alternatives similar to those described above could be used without departing from the spirit of the invention, and all such modifications should be regarded as a part of the present invention, as defined in the appended claims.