Title:
VISUAL LANGUAGE INTERPRETATION SYSTEM AND USER INTERFACE
Document Type and Number:
WIPO Patent Application WO/2020/163185
Kind Code:
A1
Abstract:
Methods, apparatus and systems for sign language recognition are disclosed. One example of a sign language recognition device includes a primary display facing a first direction, a secondary display facing a second direction, and one or more cameras positioned adjacent the secondary display and facing the second direction. An image captured by the one or more cameras is displayed on at least a portion of the primary display. The device also includes a support stand fixed relative to the secondary display and the one or more cameras. The support stand includes a pair of support arms each carrying a pair of pivot pins and the primary display is pivotably and slideably coupled to the pivot pins, whereby the device is configurable between a folded configuration and an unfolded configuration such that the first and second directions face opposite each other when the device is in the folded configuration.

Inventors:
MENEFEE MICHAEL (US)
NASH DALLAS (US)
CHANDLER TREVOR (US)
Application Number:
PCT/US2020/016271
Publication Date:
August 13, 2020
Filing Date:
January 31, 2020
Assignee:
AVODAH INC (US)
International Classes:
G06F3/01; H04N5/225
Domestic Patent References:
WO2015191468A1 (2015-12-17)
Foreign References:
US20060204033A1 (2006-09-14)
US20150324002A1 (2015-11-12)
US20110301934A1 (2011-12-08)
US20150244940A1 (2015-08-27)
Attorney, Agent or Firm:
TEHRANCHI, Babak et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. A sign language recognition device, comprising:

a primary display facing a first direction;

a secondary display facing a second direction;

one or more cameras positioned adjacent the secondary display and facing the second direction, wherein an image captured by the one or more cameras is displayed on at least a portion of the primary display; and

a support stand fixed relative to the secondary display and the one or more cameras, the support stand comprising a pair of support arms each carrying a pair of pivot pins;

wherein the primary display is pivotably and slideably coupled to the pivot pins, whereby the device is configurable between a folded configuration and an unfolded

configuration such that the first and second directions face opposite each other when the device is in the folded configuration.

2. The sign language recognition device of claim 1, wherein the primary display includes two pairs of grooves each pair positioned on opposite sides of the display to receive corresponding pairs of the pivot pins.

3. The sign language recognition device of claim 2, wherein each pair of grooves includes one groove that is longer than the other.

4. The sign language recognition device of claim 3, wherein each pair of grooves converge at one end.

5. The sign language recognition device of claim 1, wherein the image captured by the one or more cameras is displayed on at least a portion of the secondary display.

6. A sign language recognition device, comprising:

a primary display facing a first direction;

a secondary display facing a second direction;

one or more cameras positioned adjacent the secondary display and facing the second direction; and

a support stand fixed relative to the secondary display and the one or more cameras; wherein the primary display is pivotably coupled to the secondary display via the support stand, whereby the device is configurable between a folded configuration and an unfolded configuration.

7. The sign language recognition device of claim 6, wherein the first and second directions face opposite each other when the device is in the folded configuration.

8. The sign language recognition device of claim 6, wherein the primary display is slideably coupled to the secondary display.

9. The sign language recognition device of claim 6, wherein the support stand carries at least one pivot feature about which the primary display pivots.

10. A computer implemented method for visual language interpretation, the method comprising:

displaying source text in a first area of a primary display;

displaying the source text on a secondary display;

receiving video data of a sign language speaker signing an interpretation of the source text displayed on the secondary display;

displaying the video data in a second area of the primary display;

translating the interpretation of the source text into translation text;

displaying the translation text on the primary display; and

logging the translation text.

11. The method of claim 10, further comprising recording the video data.

12. The method of claim 10, wherein displaying the translation text on the primary display comprises overlaying the translation text on the second area of the primary display.

13. The method of claim 10, wherein displaying source text in a first area of a primary display comprises displaying a current source text and a next source text.

14. The method of claim 13, further comprising detecting when the current source text has been translated and scrolling to the next source text.

15. The method of claim 10, further comprising:

receiving audio data corresponding to a user’s spoken word;

translating the audio data into spoken text; and

displaying the spoken text on a secondary display.

16. The method of claim 15, wherein displaying the translation text on the primary display comprises overlaying the translation text on the video data displayed on the primary display.

17. The method of claim 15, further comprising logging the translation text and the spoken text.

Description:
VISUAL LANGUAGE INTERPRETATION SYSTEM AND USER INTERFACE

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This patent application claims priority to and the benefits of U.S. Patent Application No., entitled “VISUAL LANGUAGE INTERPRETATION SYSTEM AND USER INTERFACE,” filed February 7, 2019, the disclosure of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] This document generally relates to devices to enable communications, and more particularly to a device and user interface for communication that includes patterns or gestures.

BACKGROUND

[0003] Machine assisted interpersonal communication has simplified both business and personal communications, and has enabled the source and receiver of a communication to be separated in both time and space. Devices for machine assisted interpersonal communication range from the simple answering machine to smartphone-based translation systems that can interpret a language (e.g., French) and translate it into another language for the smartphone user (e.g., spoken or written English).

[0004] One specific application of machine assisted interpersonal communication is sign language translation. A sign language (also known as signed language) is a language that uses manual communication to convey meaning, ideas and thoughts, which simultaneously employs hand gestures, movement, orientation of the fingers, arms or body, and facial expressions to convey a speaker's ideas. The complexity of sign language may be captured, in part, by using multiple input and output modalities for its translation and communication.

SUMMARY

[0005] Disclosed are devices, systems and methods for Visual Language Interpretation Systems (VLIS) using multiple input and output modalities that can be used to capture and process images for various applications, including automated sign language translation and communication.

[0006] In one aspect of the disclosed technology, a sign language recognition device can include a primary display facing a first direction, a secondary display facing a second direction, and one or more cameras positioned adjacent the secondary display and facing the second direction. A support stand can be fixed relative to the secondary display and the one or more cameras, wherein the primary display is pivotably coupled to the secondary display via the support stand. Thus, the device is configurable between a folded configuration and an unfolded configuration. In some embodiments, the support stand carries at least one pivot feature about which the primary display pivots.

[0007] In another aspect of the disclosed technology, a sign language recognition device can include a primary display facing a first direction, a secondary display facing a second direction, and one or more cameras positioned adjacent the secondary display and facing the second direction. An image captured by the one or more cameras can be displayed on at least a portion of the primary display. A support stand can be fixed relative to the secondary display and the one or more cameras. The support stand can include a pair of support arms each carrying a pair of pivot pins, wherein the primary display is pivotably and slideably coupled to the pivot pins. Thus, the device is configurable between a folded configuration and an unfolded configuration such that the first and second directions face opposite each other when the device is in the folded configuration.

[0008] In yet another aspect, the primary display includes two pairs of grooves each pair positioned on opposite sides of the display to receive corresponding pairs of the pivot pins. In some embodiments, each pair of grooves includes one groove that is longer than the other. In some embodiments, each pair of grooves converge at one end. In some embodiments, the image captured by the one or more cameras is displayed on at least a portion of the secondary display.

[0009] In yet another aspect, the disclosed technology may be used to recognize a sign language communicated by a subject. This can include computer implemented methods for visual language interpretation. In some embodiments, the method includes displaying source text in a first area of a primary display as well as displaying the source text on a secondary display facing a sign language speaker. The method can include receiving video data of the sign language speaker signing an interpretation of the source text displayed on the secondary display. The video data can be displayed in a second area of the primary display for viewing by a user. The sign language speaker’s interpretation of the source text can be translated into translation text for display on the primary display for viewing by the user (e.g., to verify the accuracy of the translation). In some embodiments, the translation text is logged for further review and/or incorporation with the source text and recorded video.

[0010] In yet another aspect, the method can further include recording the video data. In some embodiments, displaying the translation text on the primary display includes overlaying the translation text on the second area of the primary display. In some embodiments, displaying source text in a first area of a primary display includes displaying a current source text and a next source text. In some embodiments, the method can further include detecting when the current source text has been translated and scrolling to the next source text.

[0011] In yet another aspect, the disclosed technology may be used to facilitate communication between a sign language speaker and a non-sign language speaker. This can include computer implemented methods for visual language interpretation and spoken language interpretation. In some embodiments, the method includes receiving video data of a sign language speaker signing and displaying the video data on a primary display. The video data of the sign language speaker signing can be translated into translation text and displayed on the primary display for viewing by a user. The method can also include receiving audio data corresponding to the user’s spoken words and translating the audio data into spoken text, which is displayed on a secondary display for viewing by the sign language speaker.

[0012] In yet another aspect, displaying the translation text on the primary display includes overlaying the translation text on the video data displayed on the primary display. In some embodiments, the method can further include logging the translation text and the spoken text.

[0013] In yet another aspect, an apparatus comprising a memory and a processor can implement the above-described methods. In a further aspect, the above-described methods may be embodied as processor-executable code and may be stored on a non-transitory computer-readable program medium. The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description, and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] FIG. 1A illustrates an example view of a device for sign language recognition with multiple input and output modalities.

[0015] FIG. 1B illustrates another example view of a device for sign language recognition with multiple input and output modalities.

[0016] FIG. 1C illustrates another example view of a device for sign language recognition with multiple input and output modalities.

[0017] FIG. 1D illustrates another example view of a device for sign language recognition with multiple input and output modalities.

[0018] FIG. 1E illustrates an example side view of a device for sign language recognition with multiple input and output modalities.

[0019] FIG. 1F illustrates another example side view of a device for sign language recognition with multiple input and output modalities.

[0020] FIG. 1G illustrates another example view of a device for sign language recognition with multiple input and output modalities.

[0021] FIG. 2 illustrates an exploded view of yet another example device for sign language recognition using a device with multiple input and output modalities.

[0022] FIG. 3 illustrates a basic user interface screen in accordance with an example embodiment of the disclosed technology.

[0023] FIG. 4 illustrates a user interface menu.

[0024] FIG. 5 illustrates a user interface for acquiring translation source content.

[0025] FIG. 6 illustrates a user interface for video editing of recorded content.

[0026] FIG. 7 illustrates a user interface for translating content.

[0027] FIG. 8 illustrates a user interface for playback of video and translation content.

[0028] FIG. 9 illustrates a user interface for publishing video content to a server.

[0029] FIG. 10 illustrates a user interface for reading translation text.

[0030] FIG. 11 illustrates a user interface for viewing video content.

[0031] FIG. 12 illustrates an example view of a user interface interaction of a translation and recording session.

[0032] FIG. 13 illustrates another example view of a user interface interaction of a translation and recording session.

[0033] FIG. 14 illustrates another example view of a user interface interaction of a translation and recording session.

[0034] FIG. 15 illustrates another example view of a user interface interaction of a translation and recording session.

[0035] FIG. 16 illustrates another example view of a user interface interaction of a translation and recording session.

[0036] FIG. 17 illustrates another example view of a user interface interaction of a translation and recording session.

[0037] FIG. 18 illustrates another example view of a user interface interaction of a translation and recording session.

[0038] FIG. 19 illustrates another example view of a user interface interaction of a translation and recording session.

[0039] FIG. 20 illustrates another example view of a user interface interaction of a translation and recording session.

[0040] FIG. 21 illustrates another example view of a user interface interaction of a translation and recording session.

[0041] FIG. 22 illustrates another example view of a user interface interaction of a translation and recording session.

[0042] FIG. 23 illustrates another example view of a user interface interaction of a translation and recording session.

[0043] FIG. 24 illustrates a flowchart of a high-level example order of operations for acquiring and publishing visual language translation content.

[0044] FIG. 25 illustrates a flowchart of an example method for translating a source text into sign language.

[0045] FIG. 26 illustrates a flowchart of an example method for facilitating communication between a sign language speaker and a non-sign language speaker.

DETAILED DESCRIPTION

[0046] Machine-assisted interpersonal communication (or technology-assisted communication) involves one or more people communicating by means of a mechanical or electronic device or devices with one or more receivers. The devices that are used can give the communication permanence (e.g., storage devices) and/or extend its range (e.g., wireless communication) such that the source and receiver can be separated in time and space.

[0047] One specific application of using devices for machine-assisted interpersonal communication is sign language communication and translation. Sign languages are extremely complex, and generally do not have a linguistic relation to the spoken languages of the lands in which they arise. The correlation between sign and spoken languages is complex and varies depending on the country more than the spoken language. For example, the US, Canada, UK, Australia and New Zealand all have English as their dominant language, but American Sign Language (ASL), used in the US and English-speaking Canada, is derived from French Sign Language whereas the other three countries sign dialects of British, Australian, and New Zealand Sign Language (collectively referred to as BANZSL). Similarly, the sign languages of Spain and Mexico are very different, despite Spanish being the national language in each country.

[0048] Furthermore, unlike spoken languages, in which grammar is expressed through sound-based signifiers for tense, aspect, mood, and syntax, sign languages use hand movements, sign order, and body and facial cues to create grammar. In some cases, even certain uttered sounds or clicks may form a part of the sign language. Such a cue is referred to as a non-manual activity and can vary significantly across different sign languages. It is desirable for a sign-language translation system to capture and process both the hand movements and the non-manual activities to provide an accurate and natural translation for the parties.

[0049] Embodiments of the disclosed technology that are implemented for sign language translation are flexible and adaptable in that an input sign language, which can be any one of several sign languages, is converted to an internal representation, which can then be used to translate the input sign language into one or more of a variety of output sign languages. Furthermore, the embodiments described in this document employ a multiplicity of different sensors and processing mechanisms to be able to capture and process information that may not be obtainable when a single sensor or process is used, and to facilitate accurate capture, processing and interpretation of the information to allow translation between different sign languages. In an example, the Bible may be translated from any language to a particular sign language, or from one sign language representation to another, based on the embodiments disclosed in this document. In general, any textual, audible or sign language content may be translated in real-time to corresponding content in another audible, textual or sign language.
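As a purely illustrative sketch of this flow, the following Python fragment shows how a language-neutral internal representation could sit between recognition of an input sign language and rendering into an output language. The names (InternalRepresentation, recognize_sign_video, render_output) and the stubbed logic are assumptions for illustration only and are not taken from the disclosed embodiments.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical internal representation: a sequence of language-neutral
# "meaning tokens" decoded from the input modality (sign video, text, or audio).
@dataclass
class InternalRepresentation:
    tokens: List[str]          # language-neutral meaning units
    source_modality: str       # "sign", "text", or "audio"

def recognize_sign_video(frames: List[bytes]) -> InternalRepresentation:
    """Placeholder for the sign-recognition stage (e.g., a DNN/CNN pipeline)."""
    # A real system would run the multi-aperture capture data through the
    # recognition network; here a canned result stands in for that output.
    return InternalRepresentation(tokens=["GREETING", "QUESTION-HOW-YOU"],
                                  source_modality="sign")

def render_output(rep: InternalRepresentation, target: str) -> str:
    """Render the internal representation into a target modality/language."""
    if target == "english_text":
        # A real renderer would apply grammar rules or a language model.
        return "Hello, how are you?"
    if target == "sign_gloss":
        return " ".join(rep.tokens)
    raise ValueError(f"unsupported target: {target}")

if __name__ == "__main__":
    rep = recognize_sign_video(frames=[])          # stubbed capture
    print(render_output(rep, "english_text"))      # text output for the user
    print(render_output(rep, "sign_gloss"))        # gloss output for review
```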

[0050] FIGS. 1-26 are illustrations offered to provide the proper context for the specific application of a sign language translation system that can benefit from the training techniques described in later sections of this document.

[0051] FIGS. 1A-1G illustrate examples of device 100 for sign language recognition using a device with multiple input and output modalities. FIG. 1A illustrates the primary display screen 102 of the exemplary device that enables the user of the device to communicate with a signer. The device 100 can include one or more knobs 112 positioned proximate the top of the primary screen 102. In some examples, the knobs 112 can each include an axially actuated button 114. These knobs and buttons 112/114 can be configured to provide functionality and control of the primary 102 (FIG. 1A) and/or secondary 104 (FIG. 1B) screens. For example, they can be configured to control scrolling, selecting, volume, screen brightness, playback speed and direction, and the like.

[0052] FIG. 1B illustrates the secondary display screen 104 that faces the signer. Accordingly, the primary and secondary screens 102/104 may be positioned on opposite sides of the device 100 and face outwardly therefrom. When the device 100 is in a folded (e.g., closed) configuration, as illustrated in, e.g., FIGS. 1A and 1B, the primary display 102 faces a first direction and the secondary display faces a second direction opposite the first direction. In some examples, the camera(s) and audio sensors 210 (see FIG. 2) are positioned above the secondary screen 104 in a sensor module 116 and behind a sensor window 118. Thus, the camera(s) and audio sensors are positioned to face the signer for recording the signer’s utterings, and environmental and background sounds and images, which can be displayed or otherwise presented to the viewer through, for example, the primary display 102. A support stand 108 can be attached to the secondary screen 104 and sensor module 116 via an upper frame member 120. In some examples, the knobs 112 can be mounted to the upper frame member 120.

[0053] FIGS. 1C and 1D illustrate the device 100 in an unfolded (e.g., open or partially open) configuration, which allows the device 100 to be placed on a flat surface using the stand 108. The primary screen 102 slides down and folds or pivots away from the stand 108 such that the primary screen 102 is angled relative to the stand 108, secondary screen 104, and upper frame member 120. Thus, in the unfolded configuration, the device 100 rests on the bottom of the primary screen 102 (or the bottom portion of a frame or a cover that accommodates the primary screen 102) and the stand 108. FIG. 1C further illustrates an example user interface 106 on the primary screen 102 of the device 100, and FIG. 1D illustrates the corresponding user interface 110 on the secondary screen 104 facing the signer.

[0054] FIGS. 1E-1G illustrate the sign language recognition device 100 with side covers 122 and 124 removed on one side to illustrate the sliding/pivoting mechanism. As illustrated in FIG. 1E, the primary display 102 is slideably and pivotably coupled to the secondary display 104 via the support stand 108. In some examples, the support stand 108 can comprise a pair of support arms 126 each attached to the upper frame member 120 at a first end and to a transverse support member 128 at a second end. Each support arm 126 can include a tab 130 projecting away from the arm to carry a pair of pivot/slide features, such as pins 132/134, to control the sliding and pivoting motion of the primary display 102. These pins 132/134 engage corresponding grooves 136/138 formed, respectively, in a groove block 142 and a display mount 140. Although the examples herein are described with respect to pins and grooves, other suitable cooperative moving features, and specifically other types of pivoting/sliding features, can be used, such as bearings, ramped or cam surfaces, wheels, linkages, and the like. In some examples, the pins and grooves can be switched such that the grooves are carried by the support arms and the pins are located on the display mount.

[0055] FIGS. 1F and 1G both illustrate the support arm 126 as being transparent to better illustrate the pins 132/134 and corresponding grooves 136/138. The grooves 136/138 are generally parallel to each other. However, groove 138 is longer than groove 136, and groove 136 includes an arcuate portion 137 (FIG. 1F). In some examples, the groove 136 can be a straight groove that is angled with respect to groove 138. As the primary screen 102 is slid toward the upper frame member 120, the pins 132 and 134 slide in their respective grooves 136 and 138. As the pins approach the end of their travel in the grooves, pin 132 is forced closer to groove 138 due to the arcuate portion 137. In other words, the grooves 136/138 converge at one end. This encourages the primary display 102 to pivot with respect to the pins. The extended length of groove 138 allows the primary display to fully pivot and toggle (e.g., lock) into the closed configuration as illustrated in FIG. 1G. It should be appreciated that one or more of the groove block 142 and display mount 140 (FIG. 1E) can be formed from a resilient material such as plastic, whereby detents can be formed at the opposite ends of the grooves to help hold the display in the folded and/or unfolded configurations.

[0056] FIG. 2 illustrates an exploded view of yet another example device for sign language recognition using a device with multiple input and output modalities. As illustrated therein, the device includes a 3D camera and dedicated hardware video and audio sensors 210, a secondary screen for multi-modal communication with sign language speakers 220, a slide-out stand and dedicated hardware controls for tabletop or tripod use 230, a multi-stage, dedicated hardware AI pipeline with multiple ARM cores capable of on-board real-time processing 240, a purpose-built user interface for sign language translation capable of simultaneous multi-modal interaction 250, and a primary screen 260.

[0057] In an example, the knob 233 is mechanically used to provide a pivot for the slide-out stand. However, in some embodiments, the knob may be configured to provide functionality for the user of the primary screen. In an example, it may be configured to enable scrolling through information or logs on the primary screen. In another example, it may be configured to assist in playback (to fast-forward or rewind the video being played).

[0058] As described above, using multiple apertures increases fidelity so as to enable the high-quality reproduction of the movement. This allows additional information for each pixel to be captured, which can be used to create unique feature signatures for the different movements of the sign language. The features may be leveraged to identify the movements in the subsequent processing stage. In an example, a feature signature may be the right hand of the subject moving horizontally within a particular 3D volume in a particular amount of time. Features such as these, in combination with other sign language movements and the subject’s emotions, may be mapped onto an interpretation of the sign language.

[0059] For example, the feature signatures from each of these different modalities may be combined through a point-cloud model, multi-camera or multi-frame 3D model construction algorithms, or artificial intelligence (e.g., DNN, CNN) programs, which enables more accurate and robust recognition. As expected, increasing the number of feature signatures used results in an increase in the training set as well as the recognition network. In general, the more unique/differentiated information is captured, the greater the accuracy (in statistical terms) of distinguishing one feature from another. The use of multiple apertures increases the amount of non-redundant data that is captured by the system.
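The Python sketch below is a toy illustration of this idea, not the disclosed recognition network: the feature names, vector sizes, and the linear classifier standing in for the DNN/CNN are assumptions, and the example only shows the general pattern of concatenating per-modality feature signatures before classification.

```python
import numpy as np

# Illustrative fusion of per-modality feature signatures (names and sizes are
# assumptions): depth features from the 3D camera, 2D appearance features, and
# a hand-trajectory descriptor are concatenated before classification.
def fuse_features(depth_feat: np.ndarray,
                  appearance_feat: np.ndarray,
                  trajectory_feat: np.ndarray) -> np.ndarray:
    return np.concatenate([depth_feat, appearance_feat, trajectory_feat])

def classify(fused: np.ndarray, weights: np.ndarray, labels: list) -> str:
    """Toy linear classifier standing in for the DNN/CNN recognition network."""
    scores = weights @ fused                     # one score per candidate sign
    return labels[int(np.argmax(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    depth = rng.normal(size=64)                  # stubbed depth feature vector
    appearance = rng.normal(size=128)            # stubbed appearance features
    trajectory = rng.normal(size=16)             # stubbed trajectory descriptor
    fused = fuse_features(depth, appearance, trajectory)

    labels = ["HELLO", "THANK-YOU", "PLEASE"]
    weights = rng.normal(size=(len(labels), fused.size))  # untrained weights
    print(classify(fused, weights, labels))
```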

[0060] FIGS. 3-23 illustrate various features of a user interface (UI) in accordance with an example embodiment of the disclosed technology. The UI can be displayed on a primary display, such as primary display 102 (e.g., FIG. 1E) or primary screen 260 (FIG. 2). The UI can be organized as a use case task order flow or order of operations. In some embodiments, the UI can include, for example, an Acquire page, an Edit page, a Translate page, a Review page, and a Publish page. Each of these pages or screens can have the same basic layout.

[0061] FIG. 3 illustrates a basic user interface screen 300 showing some features that are common to all of the UI pages. For example, the basic screen 300 can include a menu button 302, a page title 304, a page content area 306, and a logo/status/notification area 308.

[0062] FIG. 4 illustrates a user interface menu 310 accessible by selecting the menu button 302 (FIG. 3). In some embodiments, the interface menu 310 can include: Acquire, Edit, Translate, Review, Publish, Viewer, Reader, Help, and Settings selection options, for example. Selection of one of the selection options will display a user interface page for that option. The page content area 306 can be obscured or blurred during menu selection. Selection of various screen contents (e.g., menu button 302) can be accomplished with a mouse, touch screen control, voice command, and/or knobs and buttons 112/114 (FIG. 1A). For example, one of the knobs 112 can be used to scroll through the interface menu options 310 and the desired selection can be made by pushing one of the buttons 114.
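A minimal sketch of how knob rotation and button-press events might be wired to menu scrolling and selection is shown below; the event-handler names and the menu wiring are assumptions for illustration and are not taken from the disclosed embodiments.

```python
# Illustrative wiring (assumed, not disclosed) of a knob 112 and its button 114
# to the interface menu 310: rotation scrolls the highlight, a press selects.
MENU_OPTIONS = ["Acquire", "Edit", "Translate", "Review",
                "Publish", "Viewer", "Reader", "Help", "Settings"]

class MenuController:
    def __init__(self, options):
        self.options = options
        self.index = 0

    def on_knob_rotate(self, steps: int) -> str:
        """Positive steps scroll down the menu, negative steps scroll up."""
        self.index = (self.index + steps) % len(self.options)
        return self.options[self.index]          # currently highlighted option

    def on_button_press(self) -> str:
        """Selecting opens the user interface page for the highlighted option."""
        return f"open page: {self.options[self.index]}"

if __name__ == "__main__":
    menu = MenuController(MENU_OPTIONS)
    menu.on_knob_rotate(+2)                      # highlight "Translate"
    print(menu.on_button_press())                # -> open page: Translate
```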

[0063] FIG. 5 illustrates a user interface Acquire page 312 for acquiring translation content. The Acquire page 312 can display translation source content, record video content, and provide an initial display of a translation of the video content. The page content area of the Acquire page 312 can be divided into a video display area 314, a control area 316, a translation history area 318, and a recording controls area 320. The video display area 314 can display video of the SL speaker (i.e., signer), an avatar of the SL speaker, a text version of the translated text, and/or system status. The control area 316 can be configured as a tab view for selecting between multiple control panels, such as source selection and display, video controls, and AI translation controls. The translation history area 318 can display a history of the translated text and/or conversations between the SL speaker and the system user. The translation history area 318 can be configured as a “chat” view. The recording controls area 320 can include recording control buttons (e.g., record and pause) along with video timecodes and total time displays. In some embodiments, one of the knobs 112 can be used to scroll through the various available sources to be translated and/or scroll through the translation history. An example of a user interface interaction of a translation/recording session using the Acquire page 312 is further described below with respect to FIGS. 12-23.
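One possible data model for the chat-style translation history area 318 is sketched below; the classes and field names are assumptions for illustration and are not taken from the disclosed embodiments.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Assumed data model for the translation history area 318: each entry records
# who "spoke" (the user or the SL speaker), the translated text, and a
# timestamp, so the area can render as a chat-style feed.
@dataclass
class HistoryEntry:
    speaker: str        # "user" or "sl_speaker"
    text: str
    timestamp: datetime = field(default_factory=datetime.utcnow)

@dataclass
class TranslationHistory:
    entries: List[HistoryEntry] = field(default_factory=list)

    def log(self, speaker: str, text: str) -> None:
        self.entries.append(HistoryEntry(speaker, text))

    def as_chat(self) -> str:
        return "\n".join(f"[{e.speaker}] {e.text}" for e in self.entries)

if __name__ == "__main__":
    history = TranslationHistory()
    history.log("user", "Let's begin at verse 9 with a slightly different tone.")
    history.log("sl_speaker", "Start from verse 9, correct?")
    print(history.as_chat())
```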

[0064] FIG. 6 illustrates a user interface Edit page 322 for video editing of recorded content (e.g., clips) to include in a final video sequence. An input sequence video window 324 can be used to mark in/out crops on recorded video sequence(s) 330 (i.e., source(s)), creating clips which are then arranged in the output track 332. Source text 334 corresponds to the clips arranged in the output track 332. An output sequence video window 326 plays clips in order and renders to a single output track 332 used for translation on a Translate page 336 (FIG. 7). Horizontal scrolling of the track interface is accomplished with a sliding time bar 328. In some embodiments, the sliding time bar 328 can be controlled with one or the other of knobs 112.
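A simplified data model for clips and the output track is sketched below; the class and field names are illustrative assumptions, not the disclosed implementation. A Clip marks an in/out crop on a recorded source sequence 330, and the output track 332 is simply the ordered list of clips rendered into a single sequence for translation.

```python
from dataclasses import dataclass
from typing import List

# Assumed model (illustrative names) for the Edit page: clips cropped from
# recorded sources are arranged in order to form the output track.
@dataclass
class Clip:
    source_id: str       # identifier of the recorded source sequence 330
    in_sec: float        # in-point of the crop, in seconds
    out_sec: float       # out-point of the crop, in seconds

    @property
    def duration(self) -> float:
        return self.out_sec - self.in_sec

@dataclass
class OutputTrack:
    clips: List[Clip]

    def total_duration(self) -> float:
        return sum(c.duration for c in self.clips)

if __name__ == "__main__":
    track = OutputTrack(clips=[
        Clip("take_01", in_sec=12.0, out_sec=27.5),   # first cropped take
        Clip("take_02", in_sec=3.0, out_sec=18.0),    # second cropped take
    ])
    print(f"output sequence length: {track.total_duration():.1f} s")
```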

[0065] FIG. 7 illustrates a user interface Translate page 336 for translating and editing the content. The source text 334 and video output track 332 (e.g., video frames or strip) are displayed with a translation 338 of the source text as well as a translation into sign gloss 340. A sign gloss is a written transcription of the signs, including various notations to account for the facial and body grammar that accompanies the signs. Clicking or tapping the source text 334, the translation 338, or the sign gloss 340 allows editing of those contents. Each of the source text 334, the translation 338, and the sign gloss 340 can be displayed as timeline bars. Dragging the handles on the timeline bars can change the association between video (timestamps) and other elements. In contrast to the sliding time bar 328 of the Edit page 322, the Translate page 336 can use a fixed time bar 342 whereby, e.g., horizontal finger movement on the video strip 332 or other lanes (i.e., source text 334, translation 338, and sign gloss 340) scrubs all of the lanes left or right. In some embodiments, scrubbing the video strip 332 and other lanes can be controlled with one or the other of knobs 112.
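The lane/timestamp association could be modeled along the following illustrative lines; the LaneSegment and Lane classes and the drag_handle helper are assumptions used only to show that editing a handle amounts to changing the time range that ties a piece of text to the video.

```python
from dataclasses import dataclass
from typing import List

# Assumed model of the Translate page lanes: each segment of a lane spans a
# video time range, and dragging a handle simply updates the start/end
# timestamps that associate the text with video frames.
@dataclass
class LaneSegment:
    start_sec: float
    end_sec: float
    content: str          # source text, translation text, or sign gloss

@dataclass
class Lane:
    name: str             # "source", "translation", or "gloss"
    segments: List[LaneSegment]

    def segment_at(self, t: float) -> str:
        for seg in self.segments:
            if seg.start_sec <= t < seg.end_sec:
                return seg.content
        return ""

def drag_handle(segment: LaneSegment, new_end_sec: float) -> None:
    """Dragging a timeline-bar handle re-associates the text with video time."""
    segment.end_sec = new_end_sec

if __name__ == "__main__":
    gloss = Lane("gloss", [LaneSegment(0.0, 4.2, "JESUS ANSWER"),
                           LaneSegment(4.2, 9.0, "YOU TEACHER ISRAEL")])
    drag_handle(gloss.segments[0], 5.0)       # widen the first gloss segment
    print(gloss.segment_at(4.5))              # -> JESUS ANSWER
```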

[0066] FIG. 8 illustrates a user interface Review page 344 for playback of video content and any available translation content. The page content area of the Review page 344 can be divided into a sequence video area 346, a translated text area 348, a playback controls area 350, and a source text area 352. The sequence video area 346 can display video from the video output track 332. The source text area 352 can display the source text and the translated text area 348 can display the translation.

[0067] FIG. 9 illustrates a user interface Publish page 354 for uploading and publishing video content to a server. The page content area of the Publish page 354 can be divided into an Output sequence detail/information form area 356, a file list/preview area 358, and a keyboard 360. In some embodiments, scrolling through the file list 358 can be controlled with one or the other of knobs 112.

[0068] FIG. 10 illustrates a user interface Reader page 362 for reading translation text. The page content area of the Reader page 362 can be divided into a left translation text area 364 and right translation text area 366. A common scrollbar 368 (i.e., both views scroll together) can be used to scroll through the translations. For example, the left and right translation text can be selected from any available language, e.g., left translation language can be English and the right translation language can be Spanish. In some embodiments, the common scrollbar 368 can be controlled with one or the other of knobs 112.

[0069] FIG. 11 illustrates a user interface Viewer page 370 for viewing video content. The page content area of the Viewer page 370 can be divided into a video content area 372 and playback controls 376. A horizontal progress scrub bar 374 can be used to navigate through the video playback. In some embodiments, one or the other of knobs 112 can be used to navigate (e.g., control scrub bar 374) through the video.

[0070] FIGS. 12-23 illustrate an example user interface interaction of an acquisition process to record and translate video content using an Acquire page 412 similar to that described above with respect to FIG. 5. The process is described from the perspective of a primary user, e.g., a consultant directing the recording process to acquire SL translation of textual scripture, for example. In this example, the primary user is viewing the primary display screen 102 (FIG. 1A) and a subject, e.g., SL speaker, is viewing the secondary display screen 104 (FIG. 1B) and signing text for recorded video, via the cameras, to compile a complete SL translation.

[0071] With initial reference to FIG. 12, a diagram D illustrating the flow and mode of information between the user and the signer is provided to the right of the Acquire page 412 in each of FIGS. 12-23 in order to aid the reader’s understanding of the function of the Acquire page 412 and the associated content acquisition process. These diagrams do not form a part of the user interface in the depicted embodiment. However, in other embodiments, such diagrams can be incorporated into the user interface.

[0072] After opening or starting a new project, the project session begins with a blank video display area 414 - the user has not yet selected a text to work with, and the system has not initialized any video sequence storage. The translation history area 418 is also blank or empty at this point. In the depicted embodiment, the control area 416 is configured as a tab view for selecting between Source, Image, and AI System.

[0073] The Source tab 422 is selected and “Edit” “Select...” is selected to open a Source window 424 as shown in FIG. 13. The user selects John 3:9 of the English ASV from the list as the starting point for the source text. This selection can be accomplished with a voice command (as indicated in diagram D), but can also be accomplished via the touch screen or a pointing device, for example.

[0074] As shown in FIG. 14, the system is now ready to record a video sequence. The Source tab 422 indicates the text selection Start and End points. The tab also indicates the Current and upcoming (i.e., Next) text selections to be translated. In some embodiments, the video display area 414 includes a Picture-in-picture (PIP) display 426 of the video being output to the signer-facing screen 104 (FIG. 1B).

[0075] With reference to FIG. 15, the Acquire page 412 can not only record an SL translation of a selected source text, but can also include real-time or near real-time two-way translation between the primary user and the signer for the purpose of communication/collaboration between the primary user and the signer. For example, as shown in translation history area 418, the user prompts the SL speaker to begin at verse 9 with a slightly different “tone” than a previous session. The system translates the user’s spoken audio to text and SL avatar video, which is displayed on the signer-facing screen 104 (FIG. 1B) and in the PIP 426.

[0076] As shown in FIG. 16, the SL speaker asks a confirmation question via sign language, which is translated to text that is momentarily shown over the video in real-time and also stored in the history feed 418 on the right-hand side of the screen. As shown in FIG. 17, the user responds affirmatively (i.e., “exactly”) to the SL speaker, which is again translated and shown to the SL speaker. The system remains in a two-way translation mode until either the user or the signer starts the recording process.

[0077] With reference to FIG. 18, recording can be initiated by the user via the recording controls 420, by voice command, or the SL speaker can perform a “Record Start” gesture. Once recording is started, the system begins recording video, translating signs, and aligning with the source text.

[0078] As shown in FIG. 19, the SL speaker signs the first verse displayed. The system translates this content and logs the translation in the history feed. As shown in FIG. 20, recognizing the previous verse has been signed, the system moves the “Next” prompt up to the current verse and displays the next verse in the sequence. The system translates the signs, displays a real-time output to the user, and logs the translation in the history feed.
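A hedged sketch of this verse-advance behavior is shown below. The word-overlap heuristic, the threshold, and the SourceQueue class are assumptions used only to illustrate detecting that the current source text has been signed and scrolling to the next one; the disclosed system may use an entirely different alignment mechanism.

```python
from typing import List, Optional

class SourceQueue:
    """Illustrative holder for the current and next source verses."""
    def __init__(self, verses: List[str]):
        self.verses = verses
        self.current = 0

    def current_verse(self) -> Optional[str]:
        return self.verses[self.current] if self.current < len(self.verses) else None

    def next_verse(self) -> Optional[str]:
        nxt = self.current + 1
        return self.verses[nxt] if nxt < len(self.verses) else None

    def on_translation(self, translated_text: str, threshold: float = 0.6) -> bool:
        """Advance when most of the current verse's words appear in the translation."""
        verse = self.current_verse()
        if verse is None:
            return False
        verse_words = {w.lower().strip(".,;:?") for w in verse.split()}
        hits = sum(1 for w in verse_words if w in translated_text.lower())
        if verse_words and hits / len(verse_words) >= threshold:
            self.current += 1                  # scroll to the next source text
            return True
        return False

if __name__ == "__main__":
    queue = SourceQueue(["How can these things be?",
                         "Jesus answered, Art thou the teacher of Israel?"])
    advanced = queue.on_translation("how can these things be")
    print(advanced, "->", queue.current_verse())
```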

[0079] Referring to FIG. 21, the SL speaker performs a “Pause” gesture and the system pauses video recording. The system re-activates two-way translation mode at this point. As shown in FIG. 22, the SL speaker asks the user for input on the video which was just captured. This is translated into text and audio for the user. The user responds affirmatively and suggests they both take a break. This is translated to the SL speaker via avatar and text as before (FIGS. 15-17).

[0080] As shown in FIG. 23, the user uses a voice command to stop the session (or pushes the stop button). The system closes out the video sequence and saves the project file. The system is now ready to begin a new session or change to a different mode (e.g., edit, translate, etc.).

[0081] FIG. 24 illustrates a flowchart of a high-level example order of operations (or task order flow) 500 for acquiring and publishing visual language translation content. As noted above with respect to FIG. 3, the pages of the UI can be organized according to this task order flow. In some embodiments, the method 500 can include an Acquire operation 510. The Acquire operation 510 can include displaying content in the form of a source text (e.g., Bible verses) for translation by a sign language speaker. This operation can also include recording the sign language speaker as translation is performed. In some embodiments, the Acquire operation 510 can include an initial display of a translation of the video content. The method 500 can also include an Edit operation 520. In the Edit operation 520, recorded video clips and corresponding source text can be edited to produce a final video sequence. The method 500 can further include a Translate operation 530. The Translate operation 530 can include displaying a final video sequence, source text, translated text, and sign gloss labels for refinement of the translated content. In some embodiments, the method 500 can include a Review operation 540 wherein video content with source text and any available translation content can be played back for final review. The method 500 can also include a Publish operation 550 wherein the final video and translation content is uploaded to a server, for example.
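The task order flow can be pictured as a simple linear progression of stages, as in the illustrative Python sketch below; the enum and helper function are not part of the disclosed embodiments.

```python
from enum import Enum, auto

# Minimal sketch of the task order flow 500 as a linear progression of stages.
class Stage(Enum):
    ACQUIRE = auto()
    EDIT = auto()
    TRANSLATE = auto()
    REVIEW = auto()
    PUBLISH = auto()

ORDER = [Stage.ACQUIRE, Stage.EDIT, Stage.TRANSLATE, Stage.REVIEW, Stage.PUBLISH]

def next_stage(stage: Stage) -> Stage:
    """Advance to the next operation in the order, staying on Publish at the end."""
    idx = ORDER.index(stage)
    return ORDER[min(idx + 1, len(ORDER) - 1)]

if __name__ == "__main__":
    stage = Stage.ACQUIRE
    while stage is not Stage.PUBLISH:
        print("current operation:", stage.name)
        stage = next_stage(stage)
    print("current operation:", stage.name)
```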

[0082] FIG. 25 illustrates a flowchart of an example method 600 for translating a source text into sign language. The method 600 can include, at operation 610, displaying a selected source text in a first area of a primary display as well as displaying the source text on a secondary display facing a sign language speaker at operation 620. For example, the source text can be displayed in a source tab of control area 316 (FIG. 5) of the primary display 102 (FIG. 1A) and on the secondary display 104 (FIG. 1B). The method can include, at operation 630, receiving video data of the sign language speaker signing an interpretation of the source text displayed on the secondary display. The video data can then be displayed in a second area of the primary display, at operation 640, for viewing by a user. For example, the video data can be displayed in the video display area 314 (FIG. 5). The sign language speaker’s interpretation of the source text can be translated into translation text, at operation 650, for display on the primary display, at operation 660, for viewing by the user (e.g., to verify the accuracy of the translation) (see e.g., FIG. 19). In some embodiments, the translation text is logged at operation 670. For example, the translation text can be logged in the translation history area 318 (FIG. 5).
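For illustration only, the sequence of operations 610-670 can be sketched as follows; the Display, Camera, and Translator classes are stand-in stubs (not the disclosed implementation), and only the ordering of the steps follows the flowchart described above.

```python
# Hedged sketch of method 600 (operations 610-670) with stubbed components.
class Display:
    def __init__(self, name): self.name = name
    def show(self, area, content): print(f"{self.name}[{area}]: {content}")

class Camera:
    def capture(self): return "<video frames of SL speaker>"

class Translator:
    def sign_to_text(self, video): return "Jesus answered and said unto him..."

def run_translation_session(source_text, primary, secondary, camera, translator, log):
    primary.show("source area", source_text)         # 610: display source text
    secondary.show("full screen", source_text)       # 620: show text to the signer
    video = camera.capture()                         # 630: receive video data
    primary.show("video area", video)                # 640: display video to the user
    translation = translator.sign_to_text(video)     # 650: translate the signing
    primary.show("video overlay", translation)       # 660: display translation text
    log.append(translation)                          # 670: log the translation
    return translation

if __name__ == "__main__":
    log = []
    run_translation_session("John 3:10 (ASV)", Display("primary"), Display("secondary"),
                            Camera(), Translator(), log)
    print("history:", log)
```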

[0083] In some embodiments, the method can further include recording the video data via recording controls 320 (FIG. 5), voice command, and/or a signed gesture. In some embodiments, displaying the translation text on the primary display includes overlaying the translation text on the second area of the primary display (see e.g., FIG. 19). In some embodiments, displaying source text in a first area of a primary display includes displaying a current source text and a next source text (see e.g., FIG. 19). In some embodiments, the method can further include detecting when the current source text has been translated and scrolling to the next source text (see e.g., FIGS. 19-21).

[0084] FIG. 26 illustrates a flowchart of an example method 700 for facilitating communication between a sign language speaker and a non-sign language speaker (see e.g., FIGS. 15-17). The method 700 can include, at operation 710, receiving video data of a sign language speaker signing, and displaying the video data on a primary display at operation 720. For example, the video data can be displayed in the video display area 314 (FIG. 5). The video data of the sign language speaker signing can be translated into translation text, at operation 730, and displayed on the primary display for viewing by a user at operation 740 (see e.g., FIG. 16). The method can also include receiving audio data corresponding to a user’s spoken word, at operation 750, and translating the audio data into spoken text at 760. The spoken text can then be displayed on a secondary display for viewing by the sign language speaker at operation 770 (see e.g., FIG. 17).
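Similarly, operations 710-770 can be sketched as follows for illustration; the recognizer stubs and the print-based "displays" are assumptions, not the disclosed implementation, and only the ordering of the steps follows the flowchart described above.

```python
# Hedged sketch of method 700 (operations 710-770) for two-way communication
# between the user and the SL speaker, using stubbed recognizers.
class SignRecognizer:
    def video_to_text(self, video): return "Should I start from verse 9?"

class SpeechRecognizer:
    def audio_to_text(self, audio): return "Exactly."

def two_way_exchange(video, audio, sign_rec, speech_rec, history):
    print("primary[video]:", video)                  # 710/720: show signer video
    translation = sign_rec.video_to_text(video)      # 730: sign -> translation text
    print("primary[overlay]:", translation)          # 740: overlay text for the user
    history.append(("sl_speaker", translation))
    spoken_text = speech_rec.audio_to_text(audio)    # 750/760: speech -> spoken text
    print("secondary[text]:", spoken_text)           # 770: show text to the signer
    history.append(("user", spoken_text))
    return history

if __name__ == "__main__":
    history = []
    two_way_exchange("<video frames>", "<microphone audio>",
                     SignRecognizer(), SpeechRecognizer(), history)
    print("logged:", history)
```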

[0085] In some embodiments, displaying the translation text on the primary display includes overlaying the translation text on the video data displayed on the primary display (see e.g., FIG. 16). In some embodiments, the method can further include logging the translation text and the spoken text. For example, the text can be logged in the translation history area 418 (FIG. 17).

[0086] Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of at least some of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

[0087] A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

[0088] The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).

[0089] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

[0090] While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

[0091] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.

[0092] Only a few implementations and examples are described, and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.