


Title:
DEVICES AND METHODS FOR DISPLAYING CHARACTERS
Document Type and Number:
WIPO Patent Application WO/2018/224152
Kind Code:
A1
Abstract:
The invention relates to an electronic device (201) for displaying characters, wherein the electronic device (201) comprises: a communication interface (203) for receiving a character code, wherein the character code represents a character of a plurality of characters based on a separation of the character into a subset of a set of character elements; a processor (205) configured to generate a 2D image of the character on the basis of the character code and the set of character elements; and an electronic display (207) configured to display the 2D image of the character.

Inventors:
CHU YUN YAW (DE)
EL SHABRAWY KARIM (DE)
Application Number:
PCT/EP2017/064010
Publication Date:
December 13, 2018
Filing Date:
June 08, 2017
Assignee:
HUAWEI TECH CO LTD (CN)
CHU YUN YAW (DE)
International Classes:
G06F17/22; G06F17/21
Foreign References:
US20080068383A1 (2008-03-20)
US20170097921A1 (2017-04-06)
JPH10119357A (1998-05-12)
Other References:
None
Attorney, Agent or Firm:
KREUZ, Georg (DE)
Claims:
CLAIMS

1. An electronic device (201) for displaying characters, the electronic device (201) comprising: a communication interface (203) for receiving a character code, wherein the character code represents a character of a plurality of characters based on a separation of the character into a subset of a set of character elements; a processor (205) configured to generate a 2D image of the character on the basis of the character code and the set of character elements; and an electronic display (207) configured to display the 2D image of the character.

2. The electronic device (201) of claim 1, wherein the character code represents the character of the plurality of characters based on the separation of the character into the set of character elements by comprising information for identifying the subset of the set of character elements, information about a respective position of each character element of the subset of the set of character elements and/or information about a respective width and/or height of each character element of the subset of the set of character elements.

3. The electronic device (201) of claim 2, wherein the processor (205) is further configured to resize the character.

4. The electronic device (201) of any one of the preceding claims, wherein the electronic device (201) further comprises a memory (209) configured to store the set of character elements.

5. The electronic device (201) of any one of the preceding claims, wherein the electronic device (201) is an electronic client device (201) paired with an electronic server device (251), wherein the processor (205) is configured to generate the 2D image of the character on the basis of the character code and a selected set of character elements, wherein the selected set of character elements is selected by the electronic server device (251) from a plurality of sets of character elements and provided to the electronic client device (201), in response to a request for a set of character elements from the electronic client device (201).

6. The electronic device (201) of claim 5, wherein the request for a set of character elements includes information about the hardware capabilities of the electronic client device (201).

7. The electronic device (201) of any one of the preceding claims, wherein the plurality of characters comprise CJK characters and wherein the set of character elements comprises CJK strokes.

8. A method (1900) for displaying characters, the method comprising: receiving (1901) a character code, wherein the character code represents a character of a plurality of characters based on a separation of the character into a subset of a set of character elements; generating (1903) a 2D image of the character on the basis of the character code and the set of character elements; and displaying (1905) the 2D image of the character.

9. An electronic server device (251) configured to be paired with an electronic client device (201), wherein the electronic server device (251) comprises: a memory (253) comprising a plurality of sets of character elements, wherein each set of character elements is configured to separate a respective character of a plurality of characters into a respective subset of the respective set of character elements; and a communication interface (255) configured to provide one or more of the plurality of sets of character elements to the electronic client device (201).

10. The electronic server device (251) of claim 9, wherein the electronic server device (251) further comprises a processor (257) configured to select a set of character elements from the plurality of sets of character elements on the basis of information about the hardware capabilities of the electronic client device (201) and wherein the communication interface (255) is configured to provide the selected set of character elements to the electronic client device (201).

11. The electronic server device (251) of claim 10, wherein the communication interface (255) is configured to provide the selected set of character elements to the electronic client device (201), in response to a request for a set of character elements from the electronic client device (201), wherein the request comprises information about the hardware capabilities of the electronic client device (201).

12. An electronic device (231) for providing a character code, the electronic device (231) comprising: a processor (233) configured to generate a character code representing a character of a plurality of characters by separating a 2D image of the character into a subset of a set of character elements; and a communication interface (235) configured to provide the character code.

13. The electronic device (231) of claim 12, wherein the processor (233) is configured to generate the character code representing the character of the plurality of characters such that the character code comprises information for identifying the subset of the set of character elements, information about a respective position of each character element of the subset of the set of character elements and/or information about a respective width and/or height of each character element of the subset of the set of character elements.

14. A method (2000) of providing a character code, the method (2000) comprising: generating (2001) a character code representing a character of a plurality of characters by separating a 2D image of the character into a subset of a set of character elements; and providing (2003) the character code.

15. A computer program comprising program code for performing the method (1900) of claim 8 or the method (2000) of claim 14, when executed on a computer or a processor.

Description:
DEVICES AND METHODS FOR DISPLAYING CHARACTERS

TECHNICAL FIELD

In general, the present invention relates to character coding. More specifically, the present invention relates to electronic devices and methods for displaying characters on devices with restricted resources.

BACKGROUND

A common approach for displaying Chinese characters on embedded devices, such as smartphones, wearables, and the like, is to use standard character systems inherited from computers. This provides a good basis for interoperability across devices but usually consumes significant memory and computation resources, which can cause performance problems on low-cost embedded devices. As the number of connected devices continues to rise with the Internet of Things (IoT), reducing production cost is a key factor in the commercial success of a product.

There exist several encoding schemes for Chinese characters, such as GB18030 and BIG5. However, Unicode's encoding scheme UTF-8 is the most commonly used character encoding scheme worldwide; according to estimates, 88% of web pages use UTF-8 encoding. Unicode is the predominant unified standard which defines the supported character sets, the unique code point per character and the encoding schemes. Embedded devices that are required to display Unicode-encoded text must support bitstream decoding (e.g., UTF-8/16/32, GB18030, etc.) and mapping of a character code point to a glyph (the graphical rendering of the character). The mapping step induces a dependency on a font file, such as the common TrueType Font format. A font file for Chinese characters is about 10 Mbytes in size.

A conventional high-level process 100 for displaying text on an embedded device is shown in figure 1. The text processor 102 has the following main functions: first, the text processor 102 decodes the UTF-8 stream into character code points, and allocates and initializes an intermediate buffer 103 with sufficient memory space for storing the local data structures used for processing and the graphical rendering of the glyph(s). Next, the text processor 102 initializes the local structures necessary for decoding text; these structures define the text direction, multiline handling, the character set used, indexing into the font resources, etc. For each character, the text processor 102 maps the character code point to a glyph resource. Glyphs can be dot matrix images or vector-based graphics. Then, the text processor 102 draws the glyphs into the buffer 103. If the glyphs are dot matrices, color conversion might be applied to adapt to the LCD screen, and another post-processing stage may be needed to perform anti-aliasing. In the case of vector-based graphics, the process may be divided into multiple stages for drawing anti-aliased contours and filling the shapes. Finally, the text processor 102 copies the buffer 103 into the display frame buffer 104.
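For illustration only, such a multi-stage pipeline could be sketched as follows. The function names (decode_utf8, lookup_glyph, draw_glyph, blit) are hypothetical placeholders for the stages of figure 1 and do not refer to an actual API; the sketch is only meant to show where the intermediate buffer and the font file enter the picture.

/* Skeleton of the conventional multi-stage pipeline of figure 1.
 * All extern functions are hypothetical stubs standing in for the stages
 * described above. */
#include <stdint.h>
#include <stddef.h>

typedef struct { const uint8_t *bitmap; int width; int height; } Glyph;

extern size_t decode_utf8(const char *utf8, size_t len, uint32_t *code_points, size_t max);
extern const Glyph *lookup_glyph(uint32_t code_point);        /* needs the ~10 Mbyte font file */
extern void draw_glyph(uint8_t *buffer, int buf_w, int x, const Glyph *g);
extern void blit(uint8_t *frame_buffer, const uint8_t *buffer, int w, int h);

void render_text(const char *utf8, size_t len,
                 uint8_t *intermediate, int buf_w, int buf_h,
                 uint8_t *frame_buffer)
{
    uint32_t code_points[64];
    size_t n = decode_utf8(utf8, len, code_points, 64);   /* stage 1: bitstream decoding */
    int pen_x = 0;
    for (size_t i = 0; i < n; i++) {
        const Glyph *g = lookup_glyph(code_points[i]);    /* stage 2: code point -> glyph */
        if (g == NULL) continue;
        draw_glyph(intermediate, buf_w, pen_x, g);        /* stage 3: rasterize into buffer 103 */
        pen_x += g->width;
    }
    blit(frame_buffer, intermediate, buf_w, buf_h);       /* stage 4: copy into frame buffer 104 */
}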

Thus, generally, displaying Chinese characters requires: an external memory module for storing a font database of about 10 Mbytes (i.e., external Flash read/write storage); a text processor that typically performs multi-stage processing (e.g., character decoding, code point mapping, glyph indexing, glyph drawing, rasterization, anti-aliasing, etc.), which consumes significant RAM; and a complex engine with multi-stage processing for displaying each character, which consumes battery and computation resources inefficiently. In the latter case, real-time features may also be degraded.

The above requirements may lead to several problems, such as raising the cost of the device, increasing hardware and software architecture complexity, and impacting the device form factor due to the addition of hardware components.

The current approach for displaying characters on low-end devices, such as IoT devices or wearables, inherits font systems from computers. However, typical device specifications in the wearable segment are: less than 1 MB of available memory; a microcontroller processing unit with almost no graphics acceleration, running at a clock speed below 50 MHz; an LCD screen (less than 1 MPixel resolution, from monochrome to 16-bit color per pixel); and no keyboard to input text, as small devices essentially show notifications to the user. These specifications limit the use of font systems inherited from computers.

In light of the above, there is a need for improved devices and methods for coding and displaying characters, allowing maintaining memory and computation resources at a minimum without requiring additional hardware.

SUMMARY

It is an object of the invention to provide improved devices and methods for coding and displaying characters, allowing maintaining memory and computation resources at a minimum without requiring additional hardware.

The foregoing and other objects are achieved by the subject matter of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.

The embodiments of the disclosure address the problem of displaying large sets of characters on devices with limited memory resources (< 1 MB) and low computation capabilities (a low-end microcontroller CPU and no GPU), and use as few memory and computation resources as possible on the host device for displaying a very large set of characters.

According to a first aspect an electronic device for displaying characters is provided, wherein the electronic device comprises a communication interface for receiving a character code, wherein the character code represents a corresponding character of a plurality of characters based on a separation of the character into a subset of a basis set of character elements; a processor configured to generate a 2D image of the corresponding character on the basis of the character code and the set of character elements; and an electronic display configured to display the 2D image of the corresponding character.

Thus, an improved device for displaying characters is provided, allowing maintaining memory and computation resources at a minimum without requiring additional hardware.

In a further possible implementation form of the first aspect, the character code represents the corresponding character of the plurality of characters based on the separation of the character into the set of character elements by comprising information for identifying the subset of the set of character elements, information about a respective two-dimensional position of each character element of the subset of the set of character elements relative to a reference point of the 2D image and/or information about a respective width and/or height of each character element of the subset of the set of character elements relative to the width and/or height of the 2D image.

In a further possible implementation form of the first aspect, the processor is further configured to resize, i.e. scale, the corresponding character.

In a further possible implementation form of the first aspect, the electronic device further comprises a memory configured to store the set of character elements.

In a further possible implementation form of the first aspect, the electronic device is an electronic client device paired with an electronic server device, wherein the processor is configured to generate the 2D image of the corresponding character on the basis of the character code and a selected set of character elements, wherein the selected set of character elements is selected by the electronic server device from a plurality of sets of character elements and provided to the electronic client device, in response to a request for a set of character elements from the electronic client device. In a further possible implementation form of the first aspect, the request for a set of character elements includes information about the hardware capabilities of the electronic client device.

In a further possible implementation form of the first aspect, the plurality of characters comprise CJK characters and the set of character elements comprises CJK strokes.

According to a second aspect the invention relates to a method for displaying characters, wherein the method comprises: receiving a character code, wherein the character code represents a corresponding character of a plurality of characters based on a separation of the character into a subset of a basis set of character elements; generating a 2D image of the corresponding character on the basis of the character code and the set of character elements; and displaying the 2D image of the corresponding character.

Thus, an improved method for displaying characters is provided, allowing maintaining memory and computation resources at a minimum without requiring additional hardware.

According to a third aspect the invention relates to an electronic server device configured to be paired with an electronic client device. The electronic server device comprises: a memory comprising a plurality of sets of character elements, wherein each set of character elements is configured to separate a respective character of a plurality of characters into a respective subset of the respective set of character elements; and a communication interface configured to provide one or more of the plurality of sets of character elements to the electronic client device.

In a further possible implementation form of the third aspect, the electronic server device further comprises a processor configured to select a set of character elements from the plurality of sets of character elements on the basis of information about the hardware capabilities of the electronic client device and wherein the communication interface is configured to provide the selected set of character elements to the electronic client device. In a further possible implementation form of the third aspect, the communication interface is configured to provide the selected set of character elements to the electronic client device, in response to a request for a set of character elements from the electronic client device, wherein the request comprises information about the hardware capabilities of the electronic client device.

According to a fourth aspect the invention relates to an electronic device for providing a character code, wherein the electronic device comprises: a processor configured to generate a character code representing a corresponding character of a plurality of characters by separating a 2D image of the character into a subset of a basis set of character elements; and a communication interface configured to provide the character code.

In a further possible implementation form of the fourth aspect, the processor is configured to generate the character code representing the corresponding character of the plurality of characters such that the character code comprises information for identifying the subset of the set of character elements, information about a respective two-dimensional position of each character element of the subset of the set of character elements relative to a reference point of the 2D image and/or information about a respective width and/or height of each character element of the subset of the set of character elements relative to the width and/or height of the 2D image.

According to a fifth aspect the invention relates to a method of providing a character code, wherein the method comprises: generating a character code representing a corresponding character of a plurality of characters by separating a 2D image of the character into a subset of a basis set of character elements; and providing the character code. According to a sixth aspect the invention relates to a computer program comprising program code for performing the method according to the second or fifth aspect when executed on a computer. The invention can be implemented in hardware and/or software.

BRIEF DESCRIPTION OF THE DRAWINGS

Further embodiments of the invention will be described with respect to the following figures, wherein:

Fig. 1 shows a schematic diagram of a conventional process for displaying characters on an embedded device;

Fig. 2 shows a schematic diagram of an electronic client device paired with an electronic server device for displaying characters according to an embodiment;

Fig. 3 shows a schematic diagram illustrating a set of 37 CJK strokes used by the devices according to an embodiment;

Fig. 4 shows a schematic diagram illustrating encoding a Chinese character according to an embodiment;

Fig. 5 shows a schematic diagram illustrating a scaling operation implemented in the devices according to an embodiment;

Fig. 6 shows a schematic diagram of an electronic client device and an electronic server device for displaying characters according to an embodiment;

Fig. 7 shows a schematic diagram of an electronic client device and an electronic server device for displaying characters according to an embodiment;

Fig. 8 shows a schematic diagram of an electronic client device and an electronic server device for displaying characters according to an embodiment;

Fig. 9 shows an image of 256 x 256 pixels in size and with a depth of 32 bits per pixel used by a device according to an embodiment;

Fig. 10 shows an image of a Chinese character used by a device according to an embodiment;

Fig. 11 shows a schematic diagram of a picture with a size of 5 pixel x 5 pixel comprising different segments used by a device according to an embodiment;

Fig. 12 shows a schematic diagram illustrating a format of an image header used by a device according to an embodiment;

Fig. 13 shows a schematic diagram illustrating a format of a line header and a segment header used by a device according to an embodiment;

Fig. 14 shows a diagram illustrating a procedure for encoding an image according to an embodiment;

Fig. 15 shows a diagram illustrating a procedure for decoding a segmented image format according to an embodiment;

Fig. 16 shows a schematic diagram illustrating a high-level view of an encoding system according to an embodiment;

Fig. 17 shows a diagram illustrating a procedure for encoding a Unicode-encoded string by a CJK stroke recognition module according to an embodiment;

Fig. 18 shows a diagram illustrating a procedure for encoding an image by a CJK stroke recognition module according to an embodiment;

Fig. 19 shows a schematic diagram illustrating a method for displaying characters according to an embodiment; and

Fig. 20 shows a schematic diagram illustrating a method for providing a character code according to an embodiment.

In the various figures, identical reference signs will be used for identical or at least functionally equivalent features.

DETAILED DESCRIPTION OF THE EMBODIMENTS

In the following description, reference is made to the accompanying drawings, which form part of the disclosure, and in which are shown, by way of illustration, specific aspects in which the present invention may be placed. It will be appreciated that other aspects may be utilized and structural or logical changes may be made without departing from the scope of the present invention. The following detailed description, therefore, is not to be taken in a limiting sense, as the scope of the present invention is defined by the appended claims.

For instance, it will be appreciated that a disclosure in connection with a described method may also hold true for a corresponding device or system configured to perform the method and vice versa. For example, if a specific method step is described, a corresponding device may include a unit to perform the described method step, even if such unit is not explicitly described or illustrated in the figures. Moreover, in the following detailed description as well as in the claims embodiments with different functional blocks or processing units are described, which are connected with each other or exchange signals. It will be appreciated that the present invention covers embodiments as well, which include additional functional blocks or processing units that are arranged between the functional blocks or processing units of the embodiments described below.

Finally, it is understood that the features of the various exemplary aspects described herein may be combined with each other, unless specifically noted otherwise.

Figure 2 shows a schematic diagram illustrating an electronic client device 201 paired with an electronic server device 251 for displaying characters according to an embodiment. The characters can be any set of characters, such as Chinese characters. The electronic client device 201 can be a smartwatch or any other wearable. The electronic server device 251 can be a smartphone.

The electronic client device 201 comprises a communication interface 203 for receiving a character code, wherein the character code represents a corresponding character of a plurality of characters based on a separation of the character into a subset of a basis set of character elements. According to embodiments of the invention, the characters comprise CJK characters and the set of character elements comprises CJK strokes, which will be described in more detail further below.

The character code represents the corresponding character of the plurality of characters based on the separation of the character into the set of character elements by comprising information for identifying the subset of the set of character elements, information about a respective two-dimensional position of each character element of the subset of the set of character elements relative to a reference point of the 2D image and/or information about a respective width and/or height of each character element of the subset of the set of character elements relative to the width and/or height of the 2D image. Furthermore, the electronic client device 201 comprises a processor 205 configured to generate a 2D image of the corresponding character on the basis of the character code and the set of character elements. Furthermore, the electronic client device 201 comprises an electronic display 207 configured to display the 2D image of the corresponding character and a memory 209 configured to store the set of character elements.

In an embodiment, the processor 205 of the electronic client device 201 is further configured to resize, i.e. scale, the corresponding character and to generate the 2D image of the corresponding character on the basis of the character code and a selected set of character elements, wherein the selected set of character elements is selected by the electronic server device 251 from a plurality of sets of character elements.

Paired with the electronic client device 201 above, the electronic server device 251 comprises a memory 253 comprising a plurality of sets of character elements, wherein each set of character elements is configured to separate a respective character of a plurality of characters into a respective subset of the respective set of character elements; a communication interface 255 configured to provide one or more of the plurality of sets of character elements to the electronic client device 201; and a processor 257 configured to select a set of character elements from the plurality of sets of character elements on the basis of information about the hardware capabilities of the electronic client device 201, wherein the communication interface 255 is configured to provide the selected set of character elements to the electronic client device 201.

According to an embodiment, the communication interface 255 of the electronic server device 251 is configured to provide the selected set of character elements to the electronic client device 201 in response to a request for a set of character elements from the electronic client device 201, wherein the request comprises information about the hardware capabilities of the electronic client device 201.

The interaction between the electronic client device 201 and the electronic server device 251 will be illustrated in more detail further below with reference to figures 6 to 8.

Embodiments of the invention use a combination of CJK stroke images to draw any Chinese character from a limited set of stroke images stored locally in the memory 209 of the electronic client device 201. CJK strokes are defined as the calligraphic strokes needed to write Chinese characters in the regular script used in East Asia. CJK strokes are the classified set of line patterns that may be arranged and combined to form the Chinese characters (also known as Hanzi) in use in China, Japan, Korea, and to a lesser extent in Vietnam. There exist several variants of classification sets for strokes of Chinese characters. According to embodiments of the invention, a set of 37 CJK strokes is used, which is depicted in figure 3. However, embodiments of the invention are not restricted to this particular set of strokes and can use other sets of similar strokes. Each stroke image can be encoded into the device memory 209 and is accessible to the processor 205 for reading and drawing these images into a frame buffer. The stroke images can be square images with a consistent resolution. The images are encoded so that transparency can be applied when an image is rendered; therefore, the strokes can be overlaid on top of each other without background pixels.

Figure 2 further shows an electronic device 231 for providing a character code. The electronic device 231 will be described further below in the embodiment of figure 16 and the following figures.

Figure 4 illustrates a character coding implemented in the electronic device 201 for Chinese characters, wherein a Chinese character in a particular font style corresponds to a list of stroke images drawn sequentially and overlaid at specific 2D coordinates, with resized dimensions, in order to form the character. When a stroke image is drawn, its background pixels can be ignored and therefore not copied into the final frame buffer. With each stroke identification, the encoding scheme specifies an (x, y) coordinate indicating where to draw the stroke in the final character bitmap, and a target width and target height which are used to scale the original stroke image to the destination character bitmap, as shown in figure 4. The Chinese character shown in figure 4 comprises 4 strokes. In order to draw each stroke correctly, the following information is provided: an index of the stroke image within the set of 37 stroke images, taking up one byte or 6 bits of memory; X and Y coordinates, wherein each value takes up one byte of memory and can express a pixel index or a ratio in percent; and a scaled width and height, wherein each value takes up one byte and can express a pixel length or a ratio in percent.
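By way of illustration only, this per-stroke information could be held in a structure such as the following; the structure and field names are assumptions chosen here for readability and are not mandated by the encoding scheme.

#include <stdint.h>

/* One entry per stroke of an encoded character (illustrative layout, 5 bytes). */
typedef struct {
    uint8_t stroke_index;   /* index into the set of 37 stroke images            */
    uint8_t x;              /* horizontal position: pixel index or percentage    */
    uint8_t y;              /* vertical position: pixel index or percentage      */
    uint8_t width;          /* target width: pixel length or percentage          */
    uint8_t height;         /* target height: pixel length or percentage         */
} StrokePlacement;

/* A character code is then simply a stroke count followed by that many entries. */
typedef struct {
    uint8_t stroke_count;
    const StrokePlacement *strokes;
} CharacterCode;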

Therefore, the memory requirement can be, but is not limited to, 5 bytes for each stroke. As the Chinese character of figure 4 comprises 4 strokes, it requires 4 x 5 = 20 bytes of memory, which is more than the same character encoded with Unicode. However, decoding Unicode requires an additional font file containing a sufficient set of glyphs for displaying a limited set of Chinese characters. Because the number of Chinese characters is high (more than 10,000), a font file can take up a significant amount of memory, which can impact the design of the hardware and/or software of low-end embedded devices, such as the electronic device 201.

Encoding a stroke according to embodiments of the invention can also take up less than 5 bytes. For example, a stroke index out of 37 images requires a bare minimum of 6 bits, which can store a value between 0 and 63. The coordinates and the scaled dimensions can also be encoded using a smaller range of values (e.g., 0-20, 0-100, etc.) instead of the full range available in 1 byte (0-255).
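As an illustration of such tighter packing, the sketch below fits one stroke into a single 32-bit word, assuming 6 bits for the stroke index and a coarse 0-20 range (5 bits) for each coordinate and dimension; the granularity and the helper names are assumptions, not part of the described encoding.

#include <stdint.h>

/* One possible packing of a stroke entry into 32 bits:
 * bits 0-5 stroke index, then 5 bits each for x, y, width and height. */
static inline uint32_t pack_stroke(uint8_t index, uint8_t x, uint8_t y,
                                   uint8_t w, uint8_t h)
{
    return ((uint32_t)(index & 0x3F))
         | ((uint32_t)(x & 0x1F) << 6)
         | ((uint32_t)(y & 0x1F) << 11)
         | ((uint32_t)(w & 0x1F) << 16)
         | ((uint32_t)(h & 0x1F) << 21);
}

static inline void unpack_stroke(uint32_t word, uint8_t *index, uint8_t *x,
                                 uint8_t *y, uint8_t *w, uint8_t *h)
{
    *index = word & 0x3F;
    *x = (word >> 6) & 0x1F;
    *y = (word >> 11) & 0x1F;
    *w = (word >> 16) & 0x1F;
    *h = (word >> 21) & 0x1F;
}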

The approach used to display characters is simple and can be performed with nearly no intermediate allocation for processing, which is a significant advantage over conventional approaches. Given a list of strokes to draw, along with their coordinates and scale factors, the character rendering is performed by the electronic device 201 in a single pass, without additional hardware dependency. Below is an example of high-level pseudo code for character rendering:

draw_character(stroke_list[])
{
    for each stroke_img in stroke_list[]
        interpolate_pixels_and_draw(stroke_img, X, Y, W, H)
}

Embodiments of the invention allow scaling, which is an important aspect for visual quality. Scaling an image introduces visual artifacts, which require processing such as pixel interpolation to reduce the visible degradation. It is advantageous to scale images on the fly by reading directly from the original image, provided that the processing can read the nearest neighboring source pixels directly from memory.

According to an embodiment, figure 5 shows a schematic diagram illustrating a scaling operation implemented in the devices, wherein an original image and a scaled image are shown on the left and on the right, respectively. In order to determine the pixel value at coordinates (2, 1) of the scaled image, in an embodiment the electronic device 201 can interpolate the source pixels that are covered by the reversely scaled pixel from the destination buffer, as highlighted by the equation in figure 5. This can be based on bilinear interpolation.

The coefficients α, β, γ, δ are weight factors corresponding to the intersection area with the reversely scaled destination pixel. In high-level pseudo code:

interpolate_pixels_and_draw(srcImage, x, y, scaledWidth, scaledHeight)
{
    for each destPixel in destImage
        if reversely_scaled(destPixel) intersects with srcPixels that are NOT background
            destPixel = Average_Intersected_srcPixels
            blend destPixel into frame buffer
}
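A more concrete sketch of the per-pixel computation is given below, under assumed conventions: 8-bit grayscale stroke images with a separate alpha plane, and a simple area-weighted average standing in for the weights α, β, γ, δ. It is illustrative only and not the only possible implementation.

#include <stdint.h>

typedef struct { const uint8_t *gray; const uint8_t *alpha; int w, h; } StrokeImage;

/* Compute one destination pixel by mapping it back onto the source image and
 * averaging the covered, non-background source pixels, weighted by the
 * intersection area of each source pixel with the reversely scaled
 * destination pixel. */
static uint8_t scale_pixel(const StrokeImage *src, int dx, int dy,
                           int dst_w, int dst_h, int *opaque)
{
    /* source rectangle covered by destination pixel (dx, dy) */
    float sx0 = (float)dx * src->w / dst_w, sx1 = (float)(dx + 1) * src->w / dst_w;
    float sy0 = (float)dy * src->h / dst_h, sy1 = (float)(dy + 1) * src->h / dst_h;

    float sum = 0.0f, weight = 0.0f;
    for (int sy = (int)sy0; sy < src->h && (float)sy < sy1; sy++) {
        for (int sx = (int)sx0; sx < src->w && (float)sx < sx1; sx++) {
            if (src->alpha[sy * src->w + sx] == 0)
                continue;                               /* skip background pixels */
            /* intersection area -> weight */
            float ox = (sx1 < sx + 1 ? sx1 : (float)(sx + 1)) - (sx0 > sx ? sx0 : (float)sx);
            float oy = (sy1 < sy + 1 ? sy1 : (float)(sy + 1)) - (sy0 > sy ? sy0 : (float)sy);
            float a = ox * oy;
            sum += a * src->gray[sy * src->w + sx];
            weight += a;
        }
    }
    *opaque = (weight > 0.0f);
    return *opaque ? (uint8_t)(sum / weight + 0.5f) : 0;
}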

Embodiments of the invention allowing for scaling provide the advantage that the rendered strokes are anti-aliased in a single pass. The embodiments of the invention are not limited to a specific scaling implementation; it is possible to use a simpler scaling algorithm, such as an algorithm based on nearest-neighbor interpolation, which costs less computation but results in aliased images. The intent of the embodiments of the invention is not to permanently store a large number of encoded Chinese characters on the electronic device 201. Instead, encoded characters should be transferred wirelessly to the electronic device 201 for immediate rendering, for example from a smartphone or from a telecom operator.

Assuming a character comprises 10 strokes on average, encoding a character amounts to 10 x 5 = 50 bytes of data to be transferred. To display a text message of 70 characters, which corresponds to the maximum length restriction for an SMS with Unicode characters, transferring the whole text thus requires about 70 characters x 50 bytes = 3.5 kilobytes of data. Bluetooth Low Energy (4.0 and 4.1) data rates on high-end smartphones can range from 2 kb/sec to 16 kb/sec. Therefore, such a typical text message can be transferred within the order of a second. Data rates are expected to increase with the next generation of the Bluetooth standard. According to an embodiment, an example of a protocol for synchronizing CJK stroke sets between the electronic client device 201 and the electronic server device 251 and for showing text on the electronic client device 201 is described in the following.

During a first phase, as shown in figure 6, the electronic client device 201 can download a set of CJK strokes into its memory 209. First, the electronic client device 201 sends a request for a suitable CJK set to the electronic server device 251. Secondly, the electronic server device 251 selects an optimal CJK set. Thirdly, the electronic server device 251 sends the selected optimal CJK set to the electronic client device 201. Finally, the electronic client device 201 saves the selected optimal CJK set.

This use case shows how it is possible to enable an already deployed device to support CJK drawing with minimal impact on the memory footprint. A set of 37 CJK stroke images, each with a resolution of 50 by 50 pixels, takes up about 30 kbytes of memory when using the image codec according to embodiments of the invention.

During a second phase, as shown in figure 7, the electronic server device 251 synchronizes with the electronic client device 201 in order to acknowledge the CJK set supported on the client device 201. First, the electronic server device 251 sends a request for a CJK set ID to the electronic client device 201. Secondly, the electronic client device 201 sends a reply with the CJK set ID to the electronic server device 251. Thirdly, the electronic server device 251 binds the electronic client device 201 to a known CJK set.

During a third phase, as shown in figure 8, the electronic server device 251 pushes text of encoded characters onto the electronic client device 201. The electronic client device 201 can then display the corresponding characters. Underlying communication protocols can be, but are not restricted to: HTTP, HTTPS, TCP/IP, LTE/5G, Bluetooth, Wi-Fi.
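For illustration, the three phases could be mapped onto a simple message set such as the one sketched below; the message names, fields and one-byte type tag are assumptions, since the embodiments do not prescribe a particular wire format.

#include <stdint.h>

/* Illustrative message types for the three synchronization phases above. */
enum MsgType {
    MSG_REQUEST_CJK_SET = 0x01,   /* phase 1: client -> server, includes capabilities */
    MSG_CJK_SET_DATA    = 0x02,   /* phase 1: server -> client, selected stroke set   */
    MSG_REQUEST_SET_ID  = 0x03,   /* phase 2: server -> client                        */
    MSG_SET_ID_REPLY    = 0x04,   /* phase 2: client -> server                        */
    MSG_ENCODED_TEXT    = 0x05    /* phase 3: server -> client, encoded characters    */
};

typedef struct {
    uint8_t  type;                /* one of MsgType                                   */
    uint16_t payload_len;         /* length of the payload that follows               */
    /* payload examples (assumed, not normative):
     *  MSG_REQUEST_CJK_SET: screen width/height, color depth, free memory
     *  MSG_CJK_SET_DATA:    set ID followed by 37 segmented stroke images
     *  MSG_ENCODED_TEXT:    sequence of per-character stroke lists (5 bytes/stroke) */
} MsgHeader;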

According to embodiments of the invention, an image codec designed for fast decoding and a reduced memory footprint can be provided. However, embodiments of the invention are not restricted to this particular image codec. Using this codec, text processing can be achieved with nearly no intermediate buffer required for drawing characters, as opposed to the prior art.

A new image format can be implemented in order to reduce the memory used to store images, using a fast algorithm with low complexity. The main idea of this format, which is based on a segmentation of an image, is to avoid wasting memory on non-useful data. In standard formats, every pixel of the picture is stored, whereas in this segmented image format only useful pixels are saved. A pixel is considered useful if it is not transparent.

Figure 9 shows an example of an image of 256 x 256 pixels with a depth of 32 bits per pixel. With standard storage, the memory used is 262 kbytes (256 x 256 x 4 bytes = 262,144 bytes). If only the circle is considered useful and all transparent pixels around the circle are considered not useful, the image can be stored with less memory; in this example the gain is around 20%. The format according to embodiments of the invention simply retains the useful pixels without any compression encoder. The larger the transparent area, the greater the gain. The gain for a font character can go up to 80%.

Figure 10 shows an example of an image with a font character. The new format according to embodiments of the invention stores the useful pixels by describing the picture line by line and segment by segment. A segment is a run of consecutive non-transparent pixels within one line. A line can have zero, one, or several segments.

Figure 11 shows an example of different segments in a picture with a size of 5 pixels x 5 pixels, wherein the pixels indicating background are shown in black. As can be seen in figure 11, each line has a different number of segments. In order to describe the content of an image, 3 different headers are used according to embodiments of the invention.

The first header type is the "image header". There is only one header of this type in the format. It is located at the beginning of the data and gives general information about the segmented image. The format of this header type is shown in figure 12, wherein the image header comprises 5 entries: "magic", "width", "height", "bpp" and "flag", representing the segment encoded format, the image width in pixels, the image height in pixels, the image depth in bits per pixel, and additional specifications such as transparency, respectively.

The second header type is the "line header". There is a line header for each line in the image (given by the height in the image header). The purpose of this header is to indicate the number of segments in the line, but also the actual line length, in order to easily jump to the next line if needed.

The third header type is the "segment header". A segment header is added at the beginning of each segment, if any. This header describes the segment by giving its relative offset and its length. Just after this header, the pixel data follow, with the length indicated in the segment header. Exemplary formats of the line and segment headers are shown in figure 13, wherein the line header comprises 2 entries: "lineLen" and "nbrSeg", representing the line length in bytes and the number of segments in the line respectively, and wherein the segment header also comprises 2 entries: "segOffset" and "segLen", representing the offset in pixels of the current segment and the segment length in bytes respectively.

Figure 14 shows a diagram illustrating a procedure 1400 for encoding an image according to embodiments of the invention. This functional diagram shows how to generate a segmented image from the raw data of an image. The procedure 1400 shown in figure 14 comprises the following steps. First, the algorithm reads the image header in order to obtain the image format, more specifically the width, height and pixel format, e.g., 8 bits, 16 bits, RGB, RGBA, etc. (block 1401). This information is saved in the segmented image header.
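Before following the encoding steps of figure 14 further, the three headers described above could, for illustration, be laid out as the following C structures; the exact field widths are assumptions and are not fixed by figures 12 and 13.

#include <stdint.h>

typedef struct {
    uint16_t magic;     /* identifies the segment encoded format           */
    uint16_t width;     /* image width in pixels                           */
    uint16_t height;    /* image height in pixels                          */
    uint8_t  bpp;       /* bits per pixel (depth)                          */
    uint8_t  flag;      /* additional specification, e.g. transparency     */
} ImageHeader;

typedef struct {
    uint16_t lineLen;   /* encoded length of this line in bytes            */
    uint8_t  nbrSeg;    /* number of segments in this line                 */
} LineHeader;

typedef struct {
    uint8_t  segOffset; /* offset in pixels of the segment within the line */
    uint8_t  segLen;    /* segment length in bytes                         */
} SegmentHeader;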

After the general information about the picture is obtained, the algorithm can go through each line and save only useful data (block 1403).

Next, the first pixel of the line is read (block 1407). As discussed above, a useful pixel is a non-transparent pixel. If the pixel is a background pixel, the algorithm moves to the next pixel in the same line (arrow 3) until it finds a useful pixel or until it reaches the last pixel of the current line. If all pixels of this line have been read, the algorithm goes to the next line (arrow 8).

If a useful pixel is found, a new segment is written to the segmented image (block 1409).

This new segment is initiated with a header, in which the offset of the current segment is recorded (block 1411).

The algorithm then goes through the pixels and copies them to the segmented image (arrow 6) until it finds a transparent pixel or reaches the end of the line. In either case, the end of the segment is detected and the segment header is updated with the segment length.

If the algorithm is still on the same line, it continues analyzing the pixels of that line (arrow 7), and if it reaches the end of the line it jumps to the next one (arrow 8).

When the algorithm goes to the next line (arrows 8, 9), the line header in the segmented image is updated: the algorithm saves the number of segments found in the line and the segmented line length. If there are no more lines in the image, the encoding is over and the segmented image is ready to be used (block 1405).

Figure 15 shows a diagram illustrating a procedure 1500 for decoding a segmented image format according to embodiments of the invention. The procedure 1500 shown in figure 15 comprises the following steps.
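Before walking through the decoding steps, the encoding walk of figure 14 just described may be sketched in C as follows; the 8-bit grayscale pixel format, the separate alpha plane, the little-endian field layout and the helper names are assumptions consistent with the header structures sketched above, not a normative implementation. The caller is assumed to provide an output buffer large enough for the worst case, and line widths of at most 255 pixels are assumed so that offsets and lengths fit in one byte.

#include <stdint.h>
#include <string.h>

/* A pixel is useful if its alpha value is non-zero. */
static int is_useful(uint8_t alpha) { return alpha != 0; }

/* Scan each line, copy runs of useful pixels as segments, and record the
 * line and segment headers. Returns the number of bytes written to 'out'. */
size_t encode_segmented(const uint8_t *pixels, const uint8_t *alpha,
                        int width, int height, uint8_t *out)
{
    size_t pos = 0;
    out[pos++] = 0x53; out[pos++] = 0x49;                 /* magic (arbitrary value here)   */
    out[pos++] = (uint8_t)width;  out[pos++] = (uint8_t)(width >> 8);
    out[pos++] = (uint8_t)height; out[pos++] = (uint8_t)(height >> 8);
    out[pos++] = 8;                                        /* bpp                            */
    out[pos++] = 1;                                        /* flag: transparency             */

    for (int y = 0; y < height; y++) {
        size_t line_hdr = pos;                             /* reserve line header            */
        out[pos++] = 0; out[pos++] = 0;                    /* lineLen, patched below         */
        out[pos++] = 0;                                    /* nbrSeg, patched below          */
        uint8_t nseg = 0;
        int x = 0;
        while (x < width) {
            while (x < width && !is_useful(alpha[y * width + x])) x++;  /* skip background   */
            if (x >= width) break;
            int start = x;
            while (x < width && is_useful(alpha[y * width + x])) x++;   /* run of useful px  */
            out[pos++] = (uint8_t)start;                   /* segOffset                      */
            out[pos++] = (uint8_t)(x - start);             /* segLen                         */
            memcpy(&out[pos], &pixels[y * width + start], (size_t)(x - start));
            pos += (size_t)(x - start);
            nseg++;
        }
        uint16_t line_len = (uint16_t)(pos - line_hdr);    /* patch line header              */
        out[line_hdr] = (uint8_t)line_len; out[line_hdr + 1] = (uint8_t)(line_len >> 8);
        out[line_hdr + 2] = nseg;
    }
    return pos;
}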

First of all, the algorithm reads the image header to obtain the width, height and pixel format (block 1501).

Next, the algorithm can go through each line in the picture (block 1503).

The line header provides the information about how many segments are present in that line. If there is no segment, the algorithm jumps to the next line (arrow 2).

Otherwise, if there is a segment, the algorithm reads the segment header (block 1507). This header gives the offset of the segment within the line and its length. The algorithm then copies this segment, at the given offset, to the target frame buffer.

If there are more segments in the line, the algorithm repeats the same step (arrow 4).

When there are no more segments, the algorithm jumps to the next line (arrow 5).

When all lines have been visited, the decoding is finished (block 1505).

Encoding characters, in contrast, is more complex than decoding and displaying and is therefore not performed on embedded devices. Encoding shall be performed by a more powerful system, such as a smartphone or a cloud service. The implementation of such a character encoding system on a smartphone or in the cloud is described in more detail further below.
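Before turning to that encoding system, a compact sketch of the decoding walk of figure 15 is given below, using the same illustrative layout assumptions as the encoder sketch above; it shows that decoding reduces to header parsing and memory copies, which is why it is light enough for the client device.

#include <stdint.h>
#include <string.h>

/* Read the image header, then for each line copy every segment at its
 * offset into the caller's frame buffer (8 bpp, little-endian fields,
 * matching the encoder sketch above). */
void decode_segmented(const uint8_t *in, uint8_t *frame, int frame_stride,
                      int dest_x, int dest_y)
{
    size_t pos = 2;                                        /* skip magic                      */
    pos += 2;                                              /* width (not needed here)         */
    int height = in[pos] | (in[pos + 1] << 8); pos += 2;
    pos += 2;                                              /* skip bpp and flag               */

    for (int y = 0; y < height; y++) {
        uint16_t line_len = (uint16_t)(in[pos] | (in[pos + 1] << 8));
        uint8_t  nseg     = in[pos + 2];
        size_t   p        = pos + 3;
        for (uint8_t s = 0; s < nseg; s++) {
            uint8_t seg_offset = in[p++];
            uint8_t seg_len    = in[p++];
            memcpy(&frame[(size_t)(dest_y + y) * frame_stride + dest_x + seg_offset],
                   &in[p], seg_len);                       /* copy only the useful pixels     */
            p += seg_len;
        }
        pos += line_len;                                   /* jump to the next line header    */
    }
}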

Figure 16 shows a schematic diagram illustrating a high-level view of an encoding system 1600 according to an embodiment, wherein a CJK stroke recognition system 1603, 1605 is used to automatically decompose strokes from a Unicode-encoded character and can run either on a smartphone or in the cloud.

First, a Unicode-encoded stream 1601 including four Chinese characters is sent to the CJK stroke recognition system 1603, 1605, which is implemented on a smartphone or in the cloud. Next, the CJK stroke recognition system 1603, 1605 can recognize CJK strokes for each character of the Unicode-encoded stream 1601 and generate encoded characters 1607. Finally, the encoded characters 1607 can be pushed to low-end devices such as the electronic device 201.

According to an embodiment, an electronic device 231 for providing a character code is provided. The electronic device 231 comprises a processor 233 configured to generate a character code representing a corresponding character of a plurality of characters by decomposing a 2D image of the character into a subset of a basis set of character elements; and a communication interface 235 configured to provide the character code. According to an embodiment, the processor 233 of the electronic device 231 is further configured to generate the character code representing the corresponding character of the plurality of characters such that the character code comprises information for identifying the subset of the set of character elements, information about a respective two-dimensional position of each character element of the subset of the set of character elements relative to a reference point of the 2D image and/or information about a respective width and/or height of each character element of the subset of the set of character elements relative to the width and/or height of the 2D image.

Figure 17 shows a diagram illustrating a procedure 1700 for encoding a Unicode-encoded string by a CJK stroke recognition module according to an embodiment. The procedure 1700 shown in figure 17 comprises the following steps. First, a Unicode-encoded stream including four Chinese characters is decoded into Unicode code points (block 1701).

For each code point, a corresponding encoded character is searched in the database (block 1703). If the corresponding character is found, it is ready for displaying.

If the corresponding character is not found, a bitmap image for the character is rendered (block 1705).

After the bitmap image is rendered, CJK strokes for the character are recognized based on the rendered bitmap image (block 1707).

Next, the recognized CJK strokes for the character are stored in the database (block 1709).

An encoded character is generated based on the recognized CJK strokes (block 1711). The encoded character string can be output to an embedded device such as a wearable for displaying (block 1703). The step that performs CJK stroke recognition from an image, such as a bitmap image, can be based on convolutional neural network techniques, which are known to perform well on recognizing handwritten text.
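A skeleton of this server-side path is sketched below; every function is a hypothetical stub (in particular, recognize_strokes stands in for the CNN-based recognizer and is not implemented here), and the buffer sizes are arbitrary illustration choices.

#include <stdint.h>
#include <stddef.h>

/* Encoded character: stroke count plus up to 64 strokes of 5 bytes each. */
typedef struct { uint8_t stroke_count; uint8_t strokes[64][5]; } EncodedChar;

/* Hypothetical stubs for the stages of figure 17. */
extern size_t decode_utf8(const char *utf8, size_t len, uint32_t *cps, size_t max); /* block 1701 */
extern int    db_lookup(uint32_t code_point, EncodedChar *out);                     /* block 1703 */
extern void   render_bitmap(uint32_t code_point, uint8_t *bmp, int w, int h);       /* block 1705 */
extern void   recognize_strokes(const uint8_t *bmp, int w, int h, EncodedChar *out);/* block 1707 */
extern void   db_store(uint32_t code_point, const EncodedChar *ec);                 /* block 1709 */

size_t encode_string(const char *utf8, size_t len, EncodedChar *out, size_t max_out)
{
    uint32_t cps[128];
    size_t n = decode_utf8(utf8, len, cps, 128);
    if (n > max_out) n = max_out;
    for (size_t i = 0; i < n; i++) {
        if (db_lookup(cps[i], &out[i]))
            continue;                                   /* already encoded: ready for output  */
        uint8_t bmp[64 * 64];
        render_bitmap(cps[i], bmp, 64, 64);             /* rasterize the character            */
        recognize_strokes(bmp, 64, 64, &out[i]);        /* decompose into CJK strokes         */
        db_store(cps[i], &out[i]);                      /* cache for reuse                    */
    }
    return n;                                           /* encoded characters (block 1711)    */
}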

Figure 18 shows a diagram illustrating a procedure 1800 for encoding an image by a CJK stroke recognition module according to an embodiment. The procedure 1800 is similar to the procedure 1700 shown in figure 17, except that the CJK stroke recognition system of figure 17 is used to generate encoded characters from an image instead of a Unicode string. The procedure shown in figure 18 comprises the following steps. First, a picture including four Chinese characters is sent to a CJK stroke recognition system (block 1801).

CJK strokes for the characters in the picture are recognized based on the picture (block 1803).

An encoded character is generated based on the recognized CJK strokes (block 1805). The encoded character string can be output to an embedded device such as a wearable for displaying (block 1807).

Figure 19 shows a schematic diagram illustrating a method 1900 for displaying characters according to an embodiment. The method 1900 comprises the following steps: receiving 1901 a character code, wherein the character code represents a corresponding character of a plurality of characters based on a separation or decomposition of the character into a subset of a basis set of character elements; generating 1903 a 2D image of the corresponding character on the basis of the character code and the set of character elements; and displaying 1905 the 2D image of the corresponding character. Figure 20 shows a schematic diagram illustrating a method 2000 for providing a character code according to an embodiment. The method 2000 comprises the following steps: generating 2001 a character code representing a corresponding character of a plurality of characters by separating or decomposing a 2D image of the character into a subset of a basis set of character elements; and providing 2003 the character code.

While a particular feature or aspect of the disclosure may have been disclosed with respect to only one of several implementations or embodiments, such feature or aspect may be combined with one or more other features or aspects of the other implementations or embodiments as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "include", "have", "with", or other variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprise". Also, the terms "exemplary", "for example" and "e.g." are merely meant as an example, rather than the best or optimal. The terms "coupled" and "connected", along with their derivatives, may have been used. It should be understood that these terms may have been used to indicate that two elements cooperate or interact with each other regardless of whether they are in direct physical or electrical contact, or are not in direct contact with each other.

Although specific aspects have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific aspects shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the specific aspects discussed herein. Although the elements in the following claims are recited in a particular sequence with corresponding labeling, unless the claim recitations otherwise imply a particular sequence for implementing some or all of those elements, those elements are not necessarily intended to be limited to being implemented in that particular sequence. Many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the above teachings. Of course, those skilled in the art readily recognize that there are numerous applications of the invention beyond those described herein. While the present invention has been described with reference to one or more particular embodiments, those skilled in the art recognize that many changes may be made thereto without departing from the scope of the present invention. It is therefore to be understood that within the scope of the appended claims and their equivalents, the invention may be practiced otherwise than as specifically described herein.