Title:
A SOURCE ENCODING APPARATUS, A DECODING APPARATUS, AND ASSOCIATED METHODS FOR AUDIO BASED COMMUNICATION
Document Type and Number:
WIPO Patent Application WO/2019/106222
Kind Code:
A1
Abstract:
A source encoding apparatus configured to one or more of derive or convert a series of data elements of a data element set, the data elements representing respective partial portions of a complete data item to be transmitted, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and provide the series of audio output elements for audio output. Also a decoding apparatus configured to convert a series of audio input elements received as audio input into a corresponding series of data elements representing respective partial portions of a complete transmitted data item.

Inventors:
WOLDEGEBRIEL, Michael (Karaportti 4, Espoo, 02610, FI)
PELLIKKA, Jarkko (Karaportti 4, Espoo, 02610, FI)
LINDHOLM, Harri (Karaportti 4, Espoo, 02610, FI)
REMES, Jukka (Karaportti 4, Espoo, 02610, FI)
Application Number:
FI2017/050842
Publication Date:
June 06, 2019
Filing Date:
November 29, 2017
Assignee:
NOKIA TECHNOLOGIES OY (Karaportti 3, Espoo, 02610, FI)
International Classes:
H04B11/00; H03M7/30; H04B1/713; H04L1/00; H04L5/00; H04L9/08; H04L27/10; H04L29/06; H04M7/12; H04M11/06; H04Q1/444
Domestic Patent References:
WO2016133253A1 (2016-08-25)
Foreign References:
US6236339B1 (2001-05-22)
US20050238023A1 (2005-10-27)
US20140050321A1 (2014-02-20)
Other References:
KUO C-C J ET AL: "Design of Integrated Multimedia Compression and Encryption Systems", IEEE TRANSACTIONS ON MULTIMEDIA, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 7, no. 5, 1 October 2005 (2005-10-01), pages 828 - 839, XP011139262, ISSN: 1520-9210, DOI: 10.1109/TMM.2005.854469
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (Ari Aarnio, IPR Department, Karakaari 7, Espoo, 02610, FI)
Claims:

1. A source encoding apparatus comprising:

at least one processor; and

at least one memory including computer program code,

the at least one memory and the computer program code configured to, with the at least one processor, cause the source encoding apparatus to perform at least the following: one or more of derive or convert a series of data elements of a data element set, the data elements representing respective partial portions of a complete data item to be transmitted, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and

provide the series of audio output elements for audio output.

2. The source encoding apparatus of claim 1, wherein, to derive the series of data elements into the corresponding series of audio output elements, the source encoding apparatus is configured to:

determine the number of unique data elements required to represent the complete data item, and

allocate a unique audio output element to each identified unique data element.

3. The source encoding apparatus of claim 1 or claim 2, wherein, to convert the series of data elements into the corresponding series of audio output elements, the source encoding apparatus is configured to:

identify, for each data element required to represent the complete data item, a corresponding predetermined audio output element using a data-audio output element look-up table.

4. The source encoding apparatus of any preceding claim, wherein the source encoding apparatus is configured to analyse the complete data item to identify one or more data elements required to represent the complete data item.

5. The source encoding apparatus of any preceding claim, wherein the source encoding apparatus is configured to exclude, from the audio output set, one or more of:

one or more particular audio frequencies present in ambient audio output;

one or more particular audio frequencies audible to humans; and

one or more particular audio frequencies audible to one or more animals.

6. The source encoding apparatus of any preceding claim, wherein the source encoding apparatus is configured to:

one or more of derive or convert the series of data elements of the data element set into a corresponding series of data elements by being configured to:

one or more of derive or convert a first portion of the series of data elements into a first corresponding series of data elements using a first conversion correspondence;

change from using the first conversion correspondence to using a second different conversion correspondence; and

one or more of derive or convert a second portion of the series of data elements into a second different corresponding series of data elements using the second different conversion correspondence.

7. The source encoding apparatus of claim 6, wherein the source encoding apparatus is configured to:

change from using the first conversion correspondence to using a second different conversion correspondence during provision of the series of audio output elements for audio output.

8. A decoding apparatus comprising:

at least one processor; and

at least one memory including computer program code,

the at least one memory and the computer program code configured to, with the at least one processor, cause the decoding apparatus to perform at least the following:

convert a series of audio input elements received as audio input into a corresponding series of data elements, the audio input elements being of an audio input element set and the data elements being of a data element set, and the data elements representing respective partial portions of a complete data item, wherein

each audio input element has a unique input frequency in the audio input set, the audio input set configured to comprise respective unique audio input elements which correspond to the complete range of discrete values available for use as data elements to represent the complete data item.

9. The decoding apparatus of claim 8, wherein the decoding apparatus is configured to:

access a data-audio input element look-up table providing, for each audio input element, a corresponding data element, and

use the data-audio input element look-up table to convert the series of audio input elements into the corresponding series of data elements representing the complete data item.

10. The decoding apparatus of claim 8 or claim 9, wherein the decoding apparatus is configured to:

convert the series of audio input elements received as audio input into a corresponding series of data elements by being configured to:

convert a first portion of the series of audio input elements received as audio input into a first corresponding series of data elements using a first conversion correspondence;

receive an indication of a change from using the first conversion correspondence to using a second different conversion correspondence; and

convert a second different portion of the series of audio input elements received as audio input into a second corresponding series of data elements using a second different conversion correspondence.

11. A computer-implemented method comprising:

one or more of deriving or converting a series of data elements of a data element set, the data elements representing respective portions of a complete data item to be transmitted, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and

providing the series of audio output elements for audio output.

12. A computer-implemented method comprising:

converting a series of audio input elements received as audio input into a corresponding series of data elements, the audio input elements being of an audio input element set and the data elements being of a data element set, and the data elements representing respective partial portions of a complete data item, wherein

each audio input element has a unique input frequency in the audio input set, the audio input set configured to comprise respective unique audio input elements which correspond to the complete range of discrete values available for use as data elements to represent the complete data item.

13. A computer readable medium comprising computer program code stored thereon, the computer readable medium and computer program code being configured to, when run on at least one processor:

one or more of derive or convert a series of data elements of a data element set, the data elements representing respective portions of a complete data item to be transmitted, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and

provide the series of audio output elements for audio output.

14. A computer readable medium comprising computer program code stored thereon, the computer readable medium and computer program code being configured to, when run on at least one processor:

convert a series of audio input elements received as audio input into a corresponding series of data elements, the audio input elements being of an audio input element set and the data elements being of a data element set, and the data elements representing respective partial portions of a complete data item, wherein

each audio input element has a unique input frequency in the audio input set, the audio input set configured to comprise respective unique audio input elements which correspond to the complete range of discrete values available for use as data elements to represent the complete data item.

15. A system comprising a source encoding apparatus and a decoding apparatus, the source encoding apparatus configured to:

one or more of derive or convert a series of data elements of a data element set, the data elements representing respective portions of a complete data item to be transmitted to the decoding apparatus, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and

provide the series of audio output elements for audio output to the decoding apparatus; and

the decoding apparatus configured to:

convert the series of audio input elements received as audio input into the corresponding series of data elements of the data element set.

Description:
A source encoding apparatus, a decoding apparatus, and associated methods for audio based communication

Technical Field

The present disclosure relates to apparatus and methods associated with audio based communication for transferring a data item, such as a text file or image, from a sending apparatus to a receiving apparatus.

Some examples may relate to portable electronic devices, in particular so-called hand-portable electronic devices which may be hand-held in use (although they may be placed in a cradle in use). Such hand-portable electronic devices include so-called Personal Digital Assistants (PDAs) and tablet PCs. Certain portable electronic devices may be wearable, such as on the wrist. The portable electronic devices/apparatus according to one or more disclosed example aspects/embodiments may provide one or more audio/text/video communication functions (e.g. telecommunication, video communication, and/or text transmission, Short Message Service (SMS)/Multimedia Message Service (MMS)/emailing functions), interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. MP3 or other format and/or (FM/AM) radio broadcast recording/playing), downloading/sending of data functions, image capture functions (e.g. using an in-built digital camera), and gaming functions.

Background

Recent developments in technology include advances in methods of communication between electronic devices, for example for the transfer of data from one device to another.

The listing or discussion of a prior-published document or any background in this specification should not necessarily be taken as an acknowledgement that the document or background is part of the state of the art or is common general knowledge.

Summary

According to a first aspect, there is provided a source encoding apparatus comprising: at least one processor; and

at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the source encoding apparatus to perform at least the following: one or more of derive or convert a series of data elements of a data element set, the data elements representing respective partial portions of a complete data item to be transmitted, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and

provide the series of audio output elements for audio output.

To derive the series of data elements into the corresponding series of audio output elements, the source encoding apparatus may be configured to:

determine the number of unique data elements required to represent the complete data item, and

allocate a unique audio output element to each identified unique data element.
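As an illustrative sketch (not part of the application), the derive step described above might be implemented as follows; the base frequency and spacing are assumptions chosen only for illustration:

```python
# Hypothetical sketch of the "derive" step: determine the unique data
# elements needed to represent the data item and allocate one unique audio
# output frequency to each. The base frequency and spacing are illustrative
# assumptions, not values from the disclosure.

def allocate_audio_elements(data_item, base_hz=18000.0, step_hz=50.0):
    """Allocate a unique frequency (Hz) to each unique data element."""
    unique_elements = sorted(set(data_item))  # unique data elements required
    return {elem: base_hz + i * step_hz       # one unique frequency per element
            for i, elem in enumerate(unique_elements)}

allocation = allocate_audio_elements("abba")  # two unique elements: "a", "b"
```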

To convert the series of data elements into the corresponding series of audio output elements, the source encoding apparatus may be configured to:

identify, for each data element required to represent the complete data item, a corresponding predetermined audio output element using a data-audio output element look-up table.
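A minimal sketch of the look-up-table variant might look as follows; the table contents are hypothetical and chosen only for illustration:

```python
# Hypothetical sketch of the "convert" step: a predetermined data-audio
# output element look-up table maps each data element directly to its
# audio output element. Table entries are illustrative assumptions.

LOOKUP_TABLE = {"0": 18000.0, "1": 18100.0}  # data element -> frequency (Hz)

def convert(data_elements, table=LOOKUP_TABLE):
    """Identify the predetermined audio output element for each data element."""
    return [table[elem] for elem in data_elements]

tones = convert("0110")
```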

The source encoding apparatus may be configured to analyse the complete data item to identify one or more data elements required to represent the complete data item.

The source encoding apparatus may be configured to determine one or more audio frequencies available to use in the audio output element set.

The source encoding apparatus may be configured to exclude, from the audio output set, one or more of:

one or more particular audio frequencies present in ambient audio output;

one or more particular audio frequencies audible to humans; and

one or more particular audio frequencies audible to one or more animals.

The source encoding apparatus may be configured to provide the series of audio output elements for audio output.
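As an illustrative sketch of the frequency-exclusion feature described above, a candidate set might be filtered like this; the audibility threshold (roughly 20 kHz is the commonly cited upper limit of human hearing) and the other numeric values are assumptions:

```python
# Hypothetical sketch of the exclusion feature: remove from the candidate
# audio output set any frequency that is audible to humans or that clashes
# with ambient audio. All numeric values are illustrative assumptions.

HUMAN_AUDIBLE_MAX_HZ = 20000.0

def usable_frequencies(candidates_hz, ambient_hz, tolerance_hz=25.0):
    """Keep candidates above the human-audible range and clear of ambient tones."""
    usable = []
    for f in candidates_hz:
        if f <= HUMAN_AUDIBLE_MAX_HZ:
            continue  # audible to humans: exclude
        if any(abs(f - a) < tolerance_hz for a in ambient_hz):
            continue  # present in ambient audio: exclude
        usable.append(f)
    return usable

candidates = [float(f) for f in range(19000, 22001, 250)]
usable = usable_frequencies(candidates, ambient_hz=[21000.0])
```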

The source encoding apparatus may be configured to:

one or more of derive or convert the series of data elements of the data element set into a corresponding series of data elements by being configured to:

one or more of derive or convert a first portion of the series of data elements into a first corresponding series of data elements using a first conversion correspondence;

change from using the first conversion correspondence to using a second different conversion correspondence; and

one or more of derive or convert a second portion of the series of data elements into a second different corresponding series of data elements using the second different conversion correspondence.

The source encoding apparatus may be configured to change from using the first conversion correspondence to using a second different conversion correspondence during provision of the series of audio output elements for audio output.
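The mid-stream change of conversion correspondence described above can be sketched as follows; the offset-based correspondences are hypothetical stand-ins for whatever mapping an implementation would use:

```python
# Hypothetical sketch of switching conversion correspondences mid-stream:
# a first portion of the data elements is encoded with one data-to-frequency
# correspondence and the remainder with a second, different one. The offset
# values are illustrative assumptions.

def make_correspondence(offset_hz):
    """Return a conversion correspondence: byte value -> frequency (Hz)."""
    return lambda byte: 18000.0 + offset_hz + byte * 10.0

def encode_with_switch(data, switch_index, first, second):
    tones = [first(b) for b in data[:switch_index]]    # first portion
    tones += [second(b) for b in data[switch_index:]]  # second portion
    return tones

first = make_correspondence(0.0)
second = make_correspondence(5000.0)
tones = encode_with_switch(b"abab", 2, first, second)
```

Because the same data elements map to different frequencies after the switch, an eavesdropper cannot rely on a single fixed mapping for the whole transmission.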

According to a further aspect, there is provided a decoding apparatus comprising:

at least one processor; and

at least one memory including computer program code,

the at least one memory and the computer program code configured to, with the at least one processor, cause the decoding apparatus to perform at least the following:

convert a series of audio input elements received as audio input into a corresponding series of data elements, the audio input elements being of an audio input element set and the data elements being of a data element set, and the data elements representing respective partial portions of a complete data item, wherein

each audio input element has a unique input frequency in the audio input set, the audio input set configured to comprise respective unique audio input elements which correspond to the complete range of discrete values available for use as data elements to represent the complete data item.

Note that, in reference to the sending device/source encoding apparatus, the audio element may be called an audio output element because it is output by the sending device/source encoding apparatus. Similarly, in reference to the receiving device/decoding apparatus, the audio element may be called an audio input element because it is input to the receiving device/decoding apparatus. The audio output elements provided by the source encoding apparatus are the audio input elements received at the decoding apparatus. The term “audio elements” may be used interchangeably with these terms and can be understood in context of whether the audio elements are being provided (as output), being received (as input) or being transferred, for example.

The decoding apparatus may be configured to obtain the complete data item from the corresponding series of data elements.

The decoding apparatus may be configured to:

access a data-audio input element look-up table providing, for each audio input element, a corresponding data element, and

use the data-audio input element look-up table to convert the series of audio input elements into the corresponding series of data elements representing the complete data item.

The data-audio input element look-up table may be provided by the source encoding apparatus as an indication of a mathematical operation for the decoding apparatus to perform on the received audio input elements to obtain the corresponding data element therefrom.

The data-audio input element look-up table may be provided by the source encoder. The data-audio input element look-up table may be available as a predetermined look-up table. The decoding apparatus may be configured to receive the data-audio input element look-up table from a source encoding apparatus. The look-up table may be received directly from the source encoding apparatus, or indirectly via a third device such as a remote server or other electronic device, or via the "cloud" or internet.
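On the decoding side, the look-up-table conversion described above might be sketched as follows; the table contents and tolerance are illustrative assumptions:

```python
# Hypothetical sketch of the decoding side: invert the sender's look-up
# table and map each received frequency back to its data element. A small
# tolerance is used because a measured frequency rarely matches the nominal
# value exactly. Table contents and tolerance are illustrative assumptions.

def decode(received_hz, table, tolerance_hz=5.0):
    """Convert a series of audio input elements back into data elements."""
    inverse = {freq: elem for elem, freq in table.items()}
    elements = []
    for f in received_hz:
        nominal = min(inverse, key=lambda n: abs(n - f))  # closest table entry
        if abs(nominal - f) > tolerance_hz:
            raise ValueError(f"no table entry within tolerance of {f} Hz")
        elements.append(inverse[nominal])
    return "".join(elements)

table = {"h": 18000.0, "i": 18050.0}
message = decode([18001.2, 18049.1], table)
```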

The decoding apparatus may be configured to:

convert the series of audio input elements received as audio input into a corresponding series of data elements by being configured to:

convert a first portion of the series of audio input elements received as audio input into a first corresponding series of data elements using a first conversion correspondence;

receive an indication of a change from using the first conversion correspondence to using a second different conversion correspondence; and

convert a second different portion of the series of audio input elements received as audio input into a second corresponding series of data elements using a second different conversion correspondence.

The complete data item may comprise at least one of: text, an image, and a video.

One or more of the source encoding apparatus and the decoding apparatus may be one or more of: an electronic device, a portable electronic device, a wearable device, a wristband device, a fitness tracker device, a smartwatch, a portable telecommunications device, a mobile phone, a smartphone, a personal digital assistant, a tablet, a desktop computer, a laptop computer, a server, and a module for one or more of the same.

According to a further aspect, there is provided a computer-implemented method comprising:

one or more of deriving or converting a series of data elements of a data element set, the data elements representing respective portions of a complete data item to be transmitted, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and

providing the series of audio output elements for audio output.

According to a further aspect, there is provided a computer-implemented method comprising:

converting a series of audio input elements received as audio input into a corresponding series of data elements, the audio input elements being of an audio input element set and the data elements being of a data element set, and the data elements representing respective partial portions of a complete data item, wherein

each audio input element has a unique input frequency in the audio input set, the audio input set configured to comprise respective unique audio input elements which correspond to the complete range of discrete values available for use as data elements to represent the complete data item.

The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated or understood by the skilled person.

According to a further aspect, there is provided an apparatus comprising means for:

one or more of deriving or converting a series of data elements of a data element set, the data elements representing respective portions of a complete data item to be transmitted, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and

providing the series of audio output elements for audio output.

According to a further aspect, there is provided an apparatus comprising means for:

converting a series of audio input elements received as audio input into a corresponding series of data elements, the audio input elements being of an audio input element set and the data elements being of a data element set, and the data elements representing respective partial portions of a complete data item, wherein

each audio input element has a unique input frequency in the audio input set, the audio input set configured to comprise respective unique audio input elements which correspond to the complete range of discrete values available for use as data elements to represent the complete data item.

According to a further aspect, there is provided a computer readable medium comprising computer program code stored thereon, the computer readable medium and computer program code being configured to, when run on at least one processor:

one or more of derive or convert a series of data elements of a data element set, the data elements representing respective portions of a complete data item to be transmitted, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and

provide the series of audio output elements for audio output.

According to a further aspect, there is provided a computer readable medium comprising computer program code stored thereon, the computer readable medium and computer program code being configured to, when run on at least one processor:

convert a series of audio input elements received as audio input into a corresponding series of data elements, the audio input elements being of an audio input element set and the data elements being of a data element set, and the data elements representing respective partial portions of a complete data item, wherein

each audio input element has a unique input frequency in the audio input set, the audio input set configured to comprise respective unique audio input elements which correspond to the complete range of discrete values available for use as data elements to represent the complete data item.

Corresponding computer programs for implementing one or more steps of the methods disclosed herein are also within the present disclosure and are encompassed by one or more of the described examples.

One or more of the computer programs may, when run on a computer, cause the computer to configure any apparatus, including a battery, circuit, controller, or device disclosed herein or perform any method disclosed herein. One or more of the computer programs may be software implementations, and the computer may be considered as any appropriate hardware, including a digital signal processor, a microcontroller, and an implementation in read only memory (ROM), erasable programmable read only memory (EPROM) or electronically erasable programmable read only memory (EEPROM), as non-limiting examples. The software may be an assembly program.

One or more of the computer programs may be provided on a computer readable medium, which may be a physical computer readable medium such as a disc or a memory device, may be a non-transient medium, or may be embodied as a transient signal. Such a transient signal may be a network download, including an internet download.

According to a further aspect, there is provided a system comprising a source encoding apparatus and a decoding apparatus,

the source encoding apparatus configured to:

one or more of derive or convert a series of data elements of a data element set, the data elements representing respective portions of a complete data item to be transmitted to the decoding apparatus, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and

provide the series of audio output elements for audio output to the decoding apparatus; and

the decoding apparatus configured to:

convert the series of audio input elements received as audio input into the corresponding series of data elements of the data element set.

The source encoding apparatus may be configured to, prior to providing the series of audio output elements for audio output, undergo a handshaking operation with the decoding apparatus.

The handshaking operation may comprise providing, to the decoding apparatus, from the source encoding apparatus, one or more of:

data representing the correspondence between the audio input elements in the series of audio input elements and the corresponding data elements in the series of data elements;

data indicating initiation of data element transmission to the decoding apparatus; and

data indicating a time gap to be used between transmitted audio output elements.

The handshaking operation may comprise providing, to the source encoding apparatus, from the decoding apparatus, one or more of:

an indication of one or more audio frequencies present in ambient audio output at the decoding apparatus which are not to be used in the audio output element set; and

data indicating a mathematical operation to use to derive a data element of the data element set and obtain the corresponding audio output element.
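The handshaking exchange described above could be sketched as follows; the field names and values are illustrative assumptions, as the disclosure does not prescribe a message format:

```python
# Hypothetical sketch of the handshake payloads exchanged before
# transmission begins. Field names and values are illustrative assumptions.

sender_handshake = {
    # correspondence between data elements and audio output elements
    "correspondence": {"h": 18000.0, "i": 18050.0},
    "start": True,   # indicates initiation of data element transmission
    "gap_ms": 40,    # time gap to use between transmitted audio elements
}

receiver_handshake = {
    # ambient frequencies at the decoding apparatus that must not be used
    "excluded_hz": [18050.0],
}

# The source encoding apparatus reacts by dropping excluded frequencies
# from its correspondence before transmission begins.
usable = {elem: hz
          for elem, hz in sender_handshake["correspondence"].items()
          if hz not in receiver_handshake["excluded_hz"]}
```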

The present disclosure includes one or more corresponding aspects, examples or features in isolation or in various combinations whether or not specifically stated (including claimed) in that combination or in isolation. Corresponding means (e.g. data element set deriver, data element set converter, audio output element provider, received audio input element converter) for performing one or more of the discussed functions are also within the present disclosure.

The above summary is intended to be merely exemplary and non-limiting.

Brief Description of the Figures

A description is now given, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 shows an example apparatus according to the present disclosure;

Figures 2a and 2b illustrate examples of communicating apparatuses according to the present disclosure;

Figure 3 illustrates an example system comprising a source encoding apparatus and decoding apparatus undergoing a handshake procedure according to the present disclosure;

Figure 4 illustrates an example system comprising a source encoding apparatus and decoding apparatus undergoing a handshake encryption procedure according to the present disclosure;

Figure 5 illustrates an example of encoding data elements as audio output elements according to the present disclosure;

Figure 6 illustrates examples of transmitting a segmented data item as segments of audio output elements according to the present disclosure;

Figure 7 illustrates an example of encoding and transmitting a data item as audio output elements according to the present disclosure;

Figure 8 illustrates an example of converting a data item into a series of text data prior to converting to audio output elements according to the present disclosure;

Figure 9 illustrates an example of a source encoding apparatus and decoding apparatus undergoing a handshake procedure according to the present disclosure;

Figure 10 illustrates an example of a source encoding apparatus and decoding apparatus undergoing a handshake encryption procedure according to the present disclosure;

Figure 11 illustrates an example of a source encoding apparatus encrypting a data item according to the present disclosure;

Figure 12 illustrates an example of audio output data transmission according to the present disclosure;

Figure 13 illustrates an example of converting a DNA sequence data item into a series of text data prior to converting to audio output elements according to the present disclosure;

Figure 14 illustrates an example of encrypted audio output data transmission according to the present disclosure;

Figure 15 shows the main steps of a method performed at a source encoding apparatus;

Figure 16 shows the main steps of a method performed at a decoding apparatus; and

Figure 17 shows a computer-readable medium comprising a computer program configured to perform, control or enable the methods of Figures 15 and 16.

Description of Specific Examples

Communication between devices is a basic and important technology. In many aspects of the current world, electronic devices are used for communication. The usability of communication systems depends on the efficiency of data transfer between devices. Efficiency is dependent upon how fast the devices in question can start communicating, and how fast the devices are capable of transferring data.

Currently, communication between devices may take place using wired or wireless (e.g. WiFi, Bluetooth, Infrared, etc.) means. Wireless communication systems may provide better efficiency than wired systems, in terms of flexible application design and flexibility of use. A new, cheap and efficient communication means can add significant value to existing communication methods, especially if the new technology results in a reduction of engineering complexity and maintenance costs.

Examples disclosed herein propose using sound waves of different frequencies for cheap and efficient communication between devices. The communication may include use of encryption and decryption algorithms. A system using communication methods disclosed herein may include a sending device (SD), a receiver device (RD), and may use encryption algorithms (EA) and decryption algorithms (DA). Such a system may use handheld devices as well as non-handheld/non-portable devices.

Information transfer processes disclosed herein may be used to transfer any type of information, including still images, video images, text (such as prose, documents, data readings) and other data. One application is for the transfer of data/information efficiently between digital health devices. Certain examples may provide for communication methods which are cheaper, more efficient and have a simpler technological design than existing methods.

An example source encoding apparatus (a sending device) of the present disclosure comprises: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the source encoding apparatus to perform at least the following: one or more of derive or convert a series of data elements of a data element set, the data elements representing respective partial portions of a complete data item to be transmitted, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and provide the series of audio output elements for audio output.

An example decoding apparatus (a receiving device) of the present disclosure comprises: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the decoding apparatus to perform at least the following: convert a series of audio input elements received as audio input into a corresponding series of data elements, the audio input elements being of an audio input element set and the data elements being of a data element set, and the data elements representing respective partial portions of a complete data item, wherein each audio input element has a unique input frequency in the audio input set, the audio input set configured to comprise respective unique audio input elements which correspond to the complete range of discrete values available for use as data elements to represent the complete data item.
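The complementary encode/decode correspondence described in the two paragraphs above can be sketched as a simple lookup between data elements and unique audio frequencies. The table and frequency values below are illustrative assumptions only; the disclosure leaves the actual correspondence to be established during the handshake and encryption stages.

```python
# Illustrative sketch only: one unique frequency (Hz) per discrete
# data-element value. The specific values are assumed, not from the
# disclosure.
FREQ_TABLE = {"A": 21000, "B": 22000, "C": 23000, "D": 24000}
INVERSE_TABLE = {freq: elem for elem, freq in FREQ_TABLE.items()}

def encode(data_item):
    """Convert a series of data elements into a series of audio output
    frequencies (one unique frequency per data element)."""
    return [FREQ_TABLE[element] for element in data_item]

def decode(frequencies):
    """Convert received audio input frequencies back into data elements
    and reassemble the complete data item."""
    return "".join(INVERSE_TABLE[freq] for freq in frequencies)

tsswd = encode("ABCD")          # time-series sound-wave data (frequencies)
assert decode(tsswd) == "ABCD"  # round trip recovers the complete data item
```

Because each audio element has a unique frequency, the decoding side only needs the inverse of the same table to reconstruct the data item.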

In some examples, a wide range of audio frequencies may be used in the audio input/output element set. The range of frequencies may be limited depending on the intended area of use. Factors such as whether the sender and receiver devices are indoors or outdoors may determine what frequencies may be omitted from the audio input/output element set because they are present in the surrounding ambient environment. The distance between the sending device and the receiving device may affect which frequencies are best suited for communication and thus inclusion in the audio input/output element set. If the devices are human-operated, then excluding frequencies from the audio input/output element set which are audible to humans may be desirable. If the devices are fully machine operable away from human (or animal) presence, then no such audible (to humans/animals) frequencies need be excluded from the audio input/output element set. For human-operated devices, or devices operated in audible proximity to humans, infrasound (<20 Hz) and ultrasound (>20 kHz) frequencies may be used in the audio input/output element set, for example. Any flexible range of frequencies may be used in the audio input/output element set for machine-only operated devices.

Also, the complexity of the data item/information to be transferred, which is also dependent on the initial raw-data representation (i.e. hexadecimal, binary etc.), may determine which frequencies are used in the audio input/output element set to represent the data item. For example, a simple text file consisting of a combination of the alphabetical letters A-Z may require a smaller range of frequencies to represent the data item than a high-resolution image, depending on what type of data representation (e.g. hexadecimal, binary) is used before conversion into audio elements and audio transfer.

How quickly data can be transferred depends on factors including the speed of sound, which varies with the medium through which the sound travels (for example, sound travels more quickly through water than through air), as well as the amplitude of the sound waves and the distance between the sending device and the receiving device.
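As a rough worked example of the propagation factor mentioned above, the acoustic transfer latency can be estimated from the distance and the medium. The speeds below are approximate textbook values, assumed for illustration only:

```python
# Approximate speeds of sound (m/s); assumed textbook values, not from
# the disclosure.
SPEED_OF_SOUND = {"air": 343.0, "water": 1480.0}

def propagation_delay(distance_m, medium="air"):
    """Time in seconds for a sound wave to travel from sender to receiver."""
    return distance_m / SPEED_OF_SOUND[medium]

# Sound reaches a receiver 10 m away faster through water than through air.
assert propagation_delay(10, "water") < propagation_delay(10, "air")
```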

Figure 1 shows an apparatus 120 which may form part of a source encoding apparatus, and/or part of a decoding apparatus as disclosed herein. The apparatus 120 comprises a processor 122 and memory 124 (including computer program code) and a transceiver 126, which are electrically connected to one another by a data bus 128. The apparatus 120 of Figure 1 also comprises an audio interface 130 comprising at least one microphone 132 configured to receive audio input, and at least one audio speaker 134 configured to provide audio output.

If the source encoding device also transmits the audio elements as an audio signal, it comprises at least one speaker 134 for audio output. If the source encoding device performs the encoding and passes a signal to a separate sending device which transmits the audio elements as an audio signal, the source encoding device may not comprise a speaker 134, but the sending device comprises at least one speaker 134 for audio output. Similarly, if the decoding device also receives the audio elements as an audio signal, it comprises at least one microphone 132 for audio reception. If the decoding device performs the decoding but does not receive the transmitted audio signal itself, it may receive a communication indicating the received audio elements from a separate receiving device. In this case the decoding device may not comprise a microphone 132, but the receiving device comprises at least one microphone 132 for audio reception.

The processor 122 may be configured for general operation of the apparatus 120 by providing signalling to, and receiving signalling from, the other components to manage their operation. The memory 124 may be configured to store computer code configured to perform, control or enable operation of the apparatus 120. The memory 124 may also be configured to store settings for the other components (including, for example, a data element converter or audio element provider of a source encoding apparatus, or an audio input element converter of a decoding apparatus). The processor 122 may access the memory 124 to retrieve the component settings in order to manage the operation of the other components.

The transceiver 126 may comprise a separate transmitter and receiver and is configured to transmit data to, and receive data from, one or more other devices via a wireless or a wired connection. For example, if the apparatus 120 forms part of a source encoding apparatus, or a decoding apparatus, the transceiver 126 may be configured to receive information from a data-audio output element look-up table for encoding data elements into audio output elements, or decoding received audio input elements into data elements.

Figures 2a and 2b illustrate examples of communicating apparatuses according to the present disclosure. In Figure 2a, a sending device (source encoding apparatus) 200 and a receiving device (decoding apparatus) 250 are schematically shown in communication with each other. In this example a user wearable electronic device such as a smartwatch or wrist-worn fitness monitor, or a user peripheral device 200, is in communication with a user portable electronic device 250 such as a mobile telephone, smartphone, PDA or tablet computer. Of course, in other examples the user portable electronic device 250 may be the sending device and the user wearable/peripheral device 200 may be the receiving device. Figure 2b schematically illustrates a sending device (source encoding apparatus) 200 and a receiving device (decoding apparatus) 250 in communication with each other. In this example, the two devices are submarines underwater. The examples of Figures 2a and 2b may be considered to show a system comprising a source encoding apparatus 200 and a decoding apparatus 250. The source encoding apparatus 200 is configured to: one or more of derive or convert a series of data elements of a data element set, the data elements representing respective portions of a complete data item to be transmitted to the decoding apparatus 250, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item; and provide the series of audio output elements for audio output to the decoding apparatus 250; and the decoding apparatus 250 configured to: convert the series of audio input elements received as audio input into the corresponding series of data elements of the data element set.

The process of transmission of a data item as a series of audio elements may take place in four steps: Device Handshake, Data Encryption, Data Transmission, and Data Decryption and Reconstruction. These steps are discussed in relation to Figures 3-7, Figures 8-12, and Figures 13-14. The following acronyms are used in these figures and description:

SD = Sending Device

RD = Receiver Device

DHS = Device Handshake

DE = Data Encryption

EA = Encryption algorithm

DT = Data Transmission

DC = Data class, to label the type of data to be transmitted (e.g. text, image, video) and which is understood by the sending device and the receiving device

DDR = Data Decryption and Reconstruction

DA = Decryption algorithm

SW = Sound wave (audio element)

FQ = Frequency of a sound wave

FTg = A frequency tag indicating a specific instruction during communication between the sending and receiving devices.

RFTg = A randomly-generated frequency tag

MFTg = A mathematically generated frequency tag, generated by a mathematical operation

CFTg = A completion indication frequency tag, indicating the end of a transmission

TG = Time Gap, present between transmitted sound waves/audio elements, which is used as part of the encryption process.

TSSWD = Time-series-sound wave-data, which is the sound wave/audio element representation of the data item being transmitted from the sending device to the receiving device

Also, note that the source encoding apparatus may be the sending device or may be a sub-module/component of the sending device. In some examples the source encoding device may be separate from and in communication with the sending device (e.g. if the data elements are encoded as audio elements which are communicated from the source encoding apparatus to a separate sending device comprising a transmitter/speaker for audio transmission). Similarly, the decoding apparatus may be the receiving device or may be a sub-module/component of the receiving device. In some examples the decoding apparatus may be separate from and in communication with the receiving device (e.g. if the data elements are encoded as audio elements which are received by the receiving device which comprises a receiver/microphone for audio reception, and the received audio is communicated to the decoding device).

To detect the audio signal at the decoding apparatus, the provided audio signal should have sufficient amplitude. This may be addressed as an optimization problem during device manufacturing (design). If the influence of the amplitude is assumed to be constant during data transfer, initially the amplitude may be optimised based on what is suitable for the distance between the sending device and the receiving device for efficient data transfer, and a calibration curve may be generated. That way, when the sending and receiving devices are placed for practical usage prior to data transfer, the devices may obtain the distance between them (for example, by using one or more additional sensors to detect the distance between them), and the amplitude may be adjusted based on the calibration curve.
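The calibration approach described above might be sketched as follows. The sample calibration points and the use of linear interpolation are assumptions for illustration, not taken from the disclosure:

```python
# Hypothetical calibration curve sampled at manufacture time:
# (distance in metres, required output amplitude). Assumed values.
calibration_curve = [(1.0, 0.2), (5.0, 0.5), (10.0, 0.9)]

def amplitude_for_distance(distance):
    """Linearly interpolate the calibration curve at the measured
    sender-receiver distance; clamp outside the sampled range."""
    points = sorted(calibration_curve)
    if distance <= points[0][0]:
        return points[0][1]
    if distance >= points[-1][0]:
        return points[-1][1]
    for (d0, a0), (d1, a1) in zip(points, points[1:]):
        if d0 <= distance <= d1:
            return a0 + (a1 - a0) * (distance - d0) / (d1 - d0)

# At a sampled distance the curve is returned exactly.
assert amplitude_for_distance(5.0) == 0.5
```

In use, the devices would first measure the distance between them (e.g. via an additional sensor) and then set the output amplitude from the interpolated curve before data transfer begins.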

Figures 3-7 illustrate a full procedure of data encryption into audio output elements, and transmission.

Figure 3 illustrates an example system comprising a source encoding apparatus 300 and decoding apparatus 350 undergoing a handshake procedure. The source encoding apparatus 300 may be configured to, prior to providing the series of audio output elements for audio output, undergo a handshaking operation with the decoding apparatus 350. An aim of a device handshake procedure between the source encoding apparatus 300 and decoding apparatus 350 is to initiate and establish a secure connection between the devices. In this example, the frequency range of audio output elements to be used during the data transfer, a frequency tag FTg indicating the identity of the sending device, a time gap TG between audio element transmissions, and the data class DC indicating the type of data to be transferred during the data transfer process are defined in this step.

Initially, the data item to be transferred (image, text, etc.) can be introduced into the sending device by means of keyboard, audio communication, file opening, file saving, data upload, etc.

To initiate the handshake, both the sending device 300 and receiving device 350 will have a built-in frequency tag (FTg) and time gap (TG). This built-in FTg and TG can be set up by the manufacturer or the operator who will be handling the device 300, 350. To establish the initial connection, the receiving device 350 awaits a sound wave signal 302 with the built-in FTg and TG from the sending device 300. While a repetition of three FTgs is shown 302 in Figure 3, any number may be used. The sound-wave based tags FTg can help the receiving device 350 identify/detect that an incoming sound wave signal with specific information is about to be transmitted from the sending device 300. They may be thought of as the sending device 300 “calling the name” of the receiving device 350 to indicate that data transmission to the receiving device 350 will occur soon.

Once the receiving device 350 recognises the initialization FTg and TG from the sending device 300, it responds back to the sending device with a sound wave signal 304 with a randomly generated frequency tag (RFTg) and the same frequency tag (FTg) as before. The randomly generated frequency tag (RFTg) is sent to be used in the next stage of encryption of the data to be transmitted, to help prevent interception of the transmitted data. The receiving device 350 can then receive the third sound wave signal 306 which includes the same frequency tag (FTg) and the randomly generated frequency tag (RFTg), as an acknowledgement of receipt of the previous sound wave signal 304 and confirmation that the randomly generated frequency tag (RFTg) was correctly received.

These steps complete the initialization handshake between the sending device and the receiving device, and thus both devices will be ready for further communication to establish the encryption algorithm and decryption algorithm parameters. For a more secure connection, the handshake initialization process of Figure 3 can take place more than once during the data transfer process.
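The three-message initialization exchange of Figure 3 can be simulated in outline as below. Modelling the frequency tags as plain numbers, and the specific tag values, are illustrative assumptions only:

```python
import random

# Built-in frequency tag (Hz), set at manufacture. Assumed value.
BUILT_IN_FTG = 19000

def sender_hello():
    # Message 302: sender repeats its built-in FTg (three repeats in Fig. 3).
    return [BUILT_IN_FTG] * 3

def receiver_respond(hello):
    # Message 304: receiver recognises the FTg and replies with the same
    # FTg plus a randomly generated frequency tag (RFTg).
    assert all(tag == BUILT_IN_FTG for tag in hello)
    rftg = random.randint(20000, 30000)
    return (BUILT_IN_FTG, rftg)

def sender_acknowledge(response):
    # Message 306: sender echoes FTg and RFTg, confirming correct receipt.
    ftg, rftg = response
    return (ftg, rftg)

reply = receiver_respond(sender_hello())
assert sender_acknowledge(reply) == reply  # handshake complete
```

After this exchange both sides hold the same RFTg, which feeds the encryption stage that follows.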

It may be said that the handshaking operation may comprise providing, to the decoding apparatus 350, from the source encoding apparatus 300, one or more of: audio data indicating initiation of data element transmission to the decoding apparatus (FTg) and audio data indicating a time gap to be used between transmitted audio output elements (TG). In some examples data representing the correspondence between the audio input elements in the series of audio input elements and the corresponding data elements in the series of data elements may be transmitted in the handshaking operation (for example, the randomly generated frequency tag (RFTg), or a data-audio input element lookup table).

Once the initial connection between the sending device 300 and the receiving device 350 has been established, the following steps can be used to generate a way of encrypting the data item for transfer. At this stage both the sending device and the receiving device recognise the FTg, TG and DC from the procedure shown in Figure 3.

Figure 4 illustrates an example system comprising a source encoding apparatus 300 and decoding apparatus 350 undergoing a handshake encryption procedure. An encryption algorithm is used to establish a set of parameters for mathematical functions that determine the data conversion and encryption technique to be utilized. Figure 4 illustrates the stage in which a framework for the encryption algorithm/encryption strategy is established. This process depends on the data class that is to be transferred and thus determines the complexity and the range of frequencies to be used in the audio element set.

The encryption strategy in this example relies on built-in mathematical functions available at both the sending device and the receiving device. However, the parameters of these functions are determined during the handshake process to reduce the risk of interception of the transferred data (which may occur using, e.g. predetermined fixed values of audio frequency for each data element of the data item).

After initialization (Figure 3) has been completed, the receiving device 350 will proceed by sending a sound wave signal 308 including a frequency tag MFTg1 generated by a mathematical operation. MFTg1 takes as an input the RFTg generated during the initialization step 304. The sending device 300 will receive this signal 308 and proceed to perform a series of mathematical operations to establish parameters for mathematical functions that will determine the time gap (TG) and time series sound wave data (TSSWD) generation strategy. These parameters MFTg2 are then shared 310 with the receiving device 350 by the sending device 300. The receiving device 350 then confirms that the mathematical functions MFTg2 have been received 312 using specific frequency sound waves FTg which are recognisable to both devices. The confirmatory frequency sound tags FTg transmitted 312 following sharing of the mathematical functions information MFTg2 310 may be labelled CFTg (“confirmatory” frequency tag).

Thus, the handshaking operation may comprise providing, to the source encoding apparatus 300, from the decoding apparatus 350, data 308 indicating a mathematical operation MFTg1 to use to derive a data element of the data element set and obtain the corresponding audio output element. The source encoding apparatus 300 may respond to the decoding apparatus 350 with data 310 indicating a further mathematical operation MFTg2 to use to derive a data element of the data element set and obtain the corresponding audio output element.
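One way to picture this parameter agreement is as shared built-in functions evaluated on the exchanged tags: only the tags travel over audio, while the functions reside in both devices. The functions `mftg1` and `mftg2` below are invented stand-ins for illustration only:

```python
# Hypothetical built-in mathematical operations, present on both devices.
def mftg1(rftg):
    # Receiver-side operation taking the random tag RFTg as input.
    return (rftg * 7 + 13) % 10000

def mftg2(m1):
    # Sender-side operation deriving the shared transmission parameters.
    return {"base_freq": 20000 + (m1 % 500) * 10,
            "time_gap_ms": 5 + m1 % 20}

rftg = 12345  # from the initialization handshake (assumed value)
params_sender = mftg2(mftg1(rftg))
params_receiver = mftg2(mftg1(rftg))
assert params_sender == params_receiver  # both devices derive identical parameters
```

Because the parameters are derived rather than transmitted directly, an eavesdropper who captures only the tags cannot reconstruct them without the built-in functions.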

It may be that the encryption method to be used to derive the data elements into audio output elements is changed part way through derivation of the data item, and/or part way through transmission of the audio output elements representing the data item (that is, the source encoding apparatus may be configured to change from using the first conversion correspondence to using a second different conversion correspondence during provision of the series of audio output elements for audio output).

In other words, the source encoding apparatus may be configured to one or more of derive or convert the series of data elements of the data element set into a corresponding series of data elements by being configured to: one or more of derive or convert a first portion of the series of data elements into a first corresponding series of data elements using a first conversion correspondence; change from using the first conversion correspondence to using a second different conversion correspondence; and one or more of derive or convert a second portion of the series of data elements into a second different corresponding series of data elements using a second different conversion correspondence.
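A minimal sketch of such a mid-transmission change of conversion correspondence follows; both frequency tables are assumed values for illustration:

```python
# Two hypothetical conversion correspondences (data element -> frequency, Hz).
TABLE_1 = {"A": 21000, "B": 22000, "C": 23000, "D": 24000}
TABLE_2 = {"A": 25000, "B": 26000, "C": 27000, "D": 28000}

def encode_with_switch(data_item, switch_index):
    """Encode the first portion of the series with the first conversion
    correspondence, then change to the second correspondence for the
    remainder."""
    first = [TABLE_1[e] for e in data_item[:switch_index]]
    second = [TABLE_2[e] for e in data_item[switch_index:]]
    return first + second

# "AB" uses the first table, "CD" the second.
assert encode_with_switch("ABCD", 2) == [21000, 22000, 27000, 28000]
```

The receiving device would apply the matching inverse tables, switching at the agreed point.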

This may be represented in Figure 3 by a further arrow from the receiving device 350 to the sending device 300 to indicate use of a different random frequency tag RFTg2, similar to the previous indication 304, followed by an acknowledgement by the sending device 300 to the receiving device 350 confirming the random frequency tag RFTg2 is correctly received, similar to the previous acknowledgement 306. Then, in Figure 4, the change in encryption may be represented by a further arrow from the sending device 300 to the receiving device 350 to indicate use of a different mathematical operation frequency tag MFTg2 based on the new random frequency tag RFTg2, similar to the previous indication 310, followed by an acknowledgement, similar to the previous acknowledgement 312, by the receiving device 350 to the sending device 300 indicating completion of the encryption initialisation.

A change in encryption method may be used to increase the security of the data transfer from the sending to the receiving device. The more the encryption method is changed, the more difficult it is for the transmitted data to be intercepted and decrypted. In some examples, further coding on the carrier wave may be performed (which may be termed “transport coding”) to provide additional complexity of audio output signal and thereby increased security of data transfer. In some examples, depending on the level of data encryption performed, transport coding may not be required.

Figure 5 illustrates an example of deriving/encoding data elements 502, 504, 506, 508 as audio output elements 522, 524, 526, 528. The encoding may be considered “deriving” because the audio output elements 522, 524, 526, 528 are derived from a series of mathematical operations 512, 514, 516, 518 indicated by mathematical operation frequency tags MFTgx. The data elements 502, 504, 506, 508 represent respective partial portions of a complete data item to be transmitted. In this simplified example, the data item is the text string “ABCD”. The data elements “A”, “B”, “C” and “D” are derived/encoded into a corresponding series of audio output elements 522, 524, 526, 528 from an audio output element set. That is, the input data “ABCD” is translated into time series sound wave data (TSSWD) based on the encryption algorithm established during the handshake phase. This translation process is performed by an algorithm that takes as an input the parameters of the encryption algorithm determined during the device handshake and the digital raw data “ABCD”, giving the TSSWD as an audio output.

Each audio output element 522, 524, 526, 528 has a unique output frequency in the audio output set (indicated in the figure as the later data elements having a shorter wavelength audio output element than earlier audio output elements). The audio output set 522, 524, 526, 528 comprises respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item. To derive the series of data elements into the corresponding series of audio output elements, the source encoding apparatus may be configured to determine the number of unique data elements required to represent the complete data item (four in the example of Figure 5), and allocate a unique audio output element to each identified unique data element. In this example, the character “A” 502 is represented by audio output element 522 which is derived using the mathematical operation frequency tag MFTg 512, the character “B” 504 is represented by audio output element 524 which is derived using the mathematical operation frequency tag MFTg 514, the character “C” 506 is represented by audio output element 526 which is derived using the mathematical operation frequency tag MFTg 516, and the character “D” 508 is represented by audio output element 528 which is derived using the mathematical operation frequency tag MFTg 518. The end of data element transmission is indicated to the receiving device using the completion frequency tag CFTg 520. This deriving process may be done before or after any data compression.

The encryption process can involve one of several combinatorial possibilities. For example, there may be a wide choice for the audio frequency to be used to represent each data element. For example, “A” may be represented by a 10Hz signal, a 20Hz signal, a 50Hz signal etc. The length/duration of each signal may also vary. For example, the letter “A” may be three time gaps (TG) in length, whereas a “B” may be five time gaps (TG) in length. Further still, each data element need not be an individual letter. For example, for transmission of text, common words such as “and”, “the”, “yes” and “no” may have a unique audio output element representing each entire word. More than one mathematical operation may be used to derive one or more of the audio output elements from the corresponding data element in some examples. Further, the input data may be first converted to a representation in a different basis, for example hexadecimal, octal, binary, ASCII, etc., prior to derivation into audio output elements.
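As an illustration of the base-conversion option mentioned above, text can be re-expressed in hexadecimal before frequencies are allocated, so that only sixteen unique data elements (and hence sixteen unique frequencies) suffice for any byte value. The helper below is a hypothetical sketch:

```python
def to_hex_elements(text):
    """Represent each character of the input text as two hexadecimal
    data elements (one element per hex digit)."""
    return [digit for char in text for digit in format(ord(char), "02x")]

# "A" is byte 0x41, "B" is 0x42, so "AB" becomes four hex-digit elements.
elements = to_hex_elements("AB")
assert elements == ["4", "1", "4", "2"]

# The full element alphabet is just the sixteen hex digits.
assert len(set("0123456789abcdef")) == 16
```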

The source encoding device may provide the series of audio output elements for audio output (for example, by providing an electronic signal to a speaker, such as speaker 134 in Figure 1, the electronic signal indicating to the speaker 134 to output the audio elements representing the data elements). This is indicated in Figure 6.

Figure 6 illustrates examples of transmitting a data item as segments of audio output elements. Following derivation/conversion of the data item into a string of audio output elements for transmission (i.e. as time series sound wave data (TSSWD)), the TSSWD can be transmitted from the sending device to the receiving device. Figure 6 illustrates a data item 600 being split into segments 602. Each segment may comprise a combination of one or more of: audio output elements, time gaps, and an “instructive” frequency tag (i.e. not an audio output element representing a data element of the data element set) such as a completion frequency tag.

Figure 6 shows a 1D data item (i.e. a text string) 600 split into segments 602 comprising example segments of: a segment 604 with an audio output element representing a first (“A”) data element, a time gap, and a frequency tag, a segment 606 with an audio output element representing a second (“B”) data element and a time gap, a segment 608 with an audio output element representing a third (“C”) data element, a segment 610 with an audio output element representing a time gap, a segment 612 with an audio output element representing a fourth (“D”) data element and a time gap, and a segment 614 with an audio output element representing the second (“B”) data element again followed by a frequency tag (e.g. a completion frequency tag (CFTg)).

Figure 6 also shows a 3D data item (i.e. a 3D matrix of text representing a photographic image) 620 split into segments 622 comprising example segments of: a segment 624 with an audio output element representing a first (“A”) data element, a time gap, and a frequency tag, a segment 626 with an audio output element representing a second (“B”) data element and a time gap, a segment 628 with an audio output element representing a third (“C”) data element, a segment 630 with an audio output element representing a time gap, a segment 632 with an audio output element representing a fourth (“D”) data element and a time gap, and a segment 634 with an audio output element representing the second (“B”) data element again followed by a frequency tag (e.g. a completion of line frequency tag (CFTg)). Further segments may then follow for the other rows in the 3D array 622.
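The segmentation shown in Figure 6 might be sketched as pairing each audio element with a time gap and closing the transmission with a completion tag. The placeholder symbols below are assumptions for illustration:

```python
# Illustrative placeholder symbols; in practice these are sound waves
# and silences at frequencies agreed during the handshake.
TIME_GAP = "TG"
COMPLETION_TAG = "CFTg"

def segment(data_item):
    """Pair each data element with a time gap, and append a completion
    frequency tag to mark the end of the transmission."""
    segments = [(element, TIME_GAP) for element in data_item]
    segments.append((COMPLETION_TAG,))
    return segments

segs = segment("ABDB")
assert segs[0] == ("A", "TG")
assert segs[-1] == ("CFTg",)
```

For a 3D data item such as the matrix 620, the same segmentation would be applied row by row, with a completion-of-line tag closing each row.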

Figure 7 illustrates an example of a complete procedure of encoding and transmitting a data item as audio output elements according to the present disclosure. While the sending device SD 702, 712, 720, 730 and receiving device RD 709, 716, 724, 734 are shown several times in this figure, the process is a back-and-forth process between a single sending device and a single receiving device, which are provided with different reference numerals to represent different stages of the handshaking and encryption strategy setting steps prior to data transmission 731.

An input data item 700 (an image in this example) is provided 701 to the sending device 702, which converts the image 700 into decimal data format 704, which may be transformed into hexadecimal data format 706 (the choice of data format into which to convert a data item may depend on the initial size of the data item and the number of different data elements in the data set required to represent the data item, for example). A source encoding device (which may be part of, or may be, the sending device 702, or may be separate from and in communication with the sending device 702) may be configured to analyse the complete data item 700 to identify one or more data elements required to represent the complete data item.

The data elements in the data element set 704, 706 represent respective partial portions of the complete data item 700 to be transmitted. In other examples, the sending device 702 may not perform the conversion of the data item into a different data format (e.g. decimal) but the sending device may be provided with the data item represented in a different data format by a different apparatus (not shown).

Once the data item 700 is represented as a series of data elements in a data element set 704, 706, the source encoding device provides the data elements 707 ready for transmission following the handshake initialisation 708 described with reference to Figure 3. Thus, to establish the initial connection, the receiving device 709 awaits a sound wave signal 708 with the built-in FTg and TG from the sending device 702. Once the receiving device 709 recognises the initialization FTg and TG from the sending device 702, it responds 710 back to the sending device 712 with a sound wave signal with a randomly generated frequency tag (RFTg) and the same frequency tag (FTg) as before.

At this stage, the source encoding device (and/or the decoding apparatus 709) may perform a filter generation step to detect the frequencies of one or more interference noises present in the environment (for example, background noises from the room or area in which the source encoding apparatus and/or the decoding apparatus is located), and exclude those ambient audio frequencies from being used to represent data elements to be transmitted. Such a sound wave filtering algorithm may be used to reduce the effect of interference during the data transfer process.

Figure 7 shows this being performed at the receiving device (the decoding apparatus) 709. That is, the source encoding apparatus 702, and/or the decoding apparatus 709, may be configured to exclude, from the audio output set, one or more particular audio frequencies present in ambient audio output. Put otherwise, the handshaking operation may comprise providing 711, to the source encoding apparatus 712, from the decoding apparatus 709, an indication of one or more audio frequencies present in ambient audio output at the decoding apparatus which are not to be used in the audio output element set. In this way, the data transmission is less likely to be corrupted due to detection of ambient audio frequencies creating noise in the signal representing the transmitted data item received from the sending apparatus.
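The ambient-frequency exclusion described above might be sketched as a filter over the candidate frequency set. The guard band, candidate frequencies and ambient tones below are assumed values for illustration:

```python
def usable_frequencies(candidate_set, ambient_frequencies, guard_hz=100):
    """Exclude any candidate frequency within guard_hz of a detected
    ambient tone, leaving only frequencies safe to allocate to data
    elements."""
    return [f for f in candidate_set
            if all(abs(f - amb) > guard_hz for amb in ambient_frequencies)]

candidates = [20000, 21000, 22000, 23000]   # assumed candidate set (Hz)
ambient = [21050]                            # e.g. detected machinery hum
assert usable_frequencies(candidates, ambient) == [20000, 22000, 23000]
```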

In some examples, the source encoding apparatus and/or the decoding apparatus may be configured to exclude, from the audio output set, one or more of: one or more particular audio frequencies audible to humans; and one or more particular audio frequencies audible to one or more animals. In this way, the sending and receiving devices (e.g. the source encoding apparatus and the decoding apparatus) may be used without producing audio output which is audible to humans or animals, which may be annoying or distracting for them.

In this example, the frequencies to be omitted from the audio output element set, as determined at the receiving device 710, are transmitted 711 to the sending device 712, which then performs completion of the handshake initialisation 714 by an acknowledgement signal 714 being transmitted from the sending device 712 to the receiving device 716.

The encryption strategy as detailed in relation to Figure 4 is then performed by the receiving device 716 communicating, to the sending device 720, the mathematical operations 718 to perform to encrypt the data elements and thereby derive the audio output elements representing the data elements. The sending device (source encoding apparatus) 720 now has information about which audio frequencies to filter out from the audio output element set, and the input MFTg1 and output mathematical operations MFTg2 to be used to encrypt/derive the audio output elements from the data elements representing the data item 700. The encryption initialisation stage is completed as described in relation to Figure 4 by providing the output mathematical operation MFTg2 to the receiving device 724, which acknowledges receipt 726.

As discussed in relation to Figure 4, the back and forth communication between the sending device 702, 712, 720, 730 and the receiving device 709, 716, 724, 734 establishes the necessary parameters for the built-in mathematical operations, resulting in both devices recording the necessary parameters for audio transmission, thus making the encryption algorithm and decryption algorithm compatible. Then, the data elements are encrypted/derived into the audio output elements by the sending device (source encoding apparatus) 730. Thus the source encoding device is configured to derive a series of data elements of a data element set into a corresponding series of audio output elements from an audio output element set. Each audio output element has a unique output frequency in the audio output set. The audio output set is configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item 700.

In some examples, to derive the series of data elements into the corresponding series of audio output elements, the source encoding apparatus may be configured to determine the number of unique data elements required to represent the complete data item, and allocate a unique audio output element to each identified unique data element. For example, the source encoding apparatus may determine that an image comprises 46 colours and thereby determine that 46 different audio output frequencies are required to represent the data item as a series of audio output elements.
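The allocation step described above can be sketched as follows. This is an illustrative Python sketch only, not part of the disclosure: the function name and the base/step frequency values are assumptions chosen for the example.

```python
def allocate_frequencies(data_item, base_hz=100.0, step_hz=50.0):
    """Assign a unique audio output frequency to each unique data element.

    base_hz and step_hz are illustrative parameters: the first unique
    element is assigned base_hz, the next base_hz + step_hz, and so on.
    """
    unique_elements = sorted(set(data_item))
    return {elem: base_hz + i * step_hz for i, elem in enumerate(unique_elements)}

# An image palette of 46 colours would yield 46 distinct frequencies:
palette = list(range(46))
table = allocate_frequencies(palette)
assert len(set(table.values())) == 46
```

Because each unique data element receives its own frequency, the size of the audio output element set tracks the number of distinct values in the data item rather than the total length of the data item.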

In the above examples, the source encoding apparatus uses mathematical operations to generate encrypted audio output elements from the data elements. In some examples, the source encoding apparatus may be configured to convert the series of data elements into the corresponding series of audio output elements by identifying, for each data element required to represent the complete data item, a corresponding predetermined audio output element using a data-audio output element look-up table. This method may not be as secure as a method using encryption by performing mathematical operations on the source data, but may be useful in some examples.

For example, if a gene sequence is to be transferred from one device to another, then only four audio output elements would be required, to represent each of the G, T, A and C nucleotides. The data item may be relatively large, but because the number of unique data elements making up the data item is relatively small (four items), unless higher security transmission is required, using a look up table may be a quicker method of data transmission because the encryption stages may be omitted from the data conversion and transmission process.
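The gene sequence case above can be sketched with a fixed look-up table. The four frequency values below are hypothetical, chosen only to illustrate the round trip; the disclosure does not specify particular frequencies for the nucleotides.

```python
# Hypothetical fixed look-up table: four nucleotides -> four frequencies (Hz).
NUCLEOTIDE_TO_HZ = {"G": 100.0, "A": 150.0, "T": 200.0, "C": 250.0}
HZ_TO_NUCLEOTIDE = {hz: n for n, hz in NUCLEOTIDE_TO_HZ.items()}

def encode(sequence):
    """Map each nucleotide to its predetermined audio output frequency."""
    return [NUCLEOTIDE_TO_HZ[n] for n in sequence]

def decode(frequencies):
    """Invert the look-up table to recover the original sequence."""
    return "".join(HZ_TO_NUCLEOTIDE[hz] for hz in frequencies)

seq = "GATTACA"
assert decode(encode(seq)) == seq
```

However long the sequence, only these four entries need to be registered at both devices, which is why the look-up table route can be quicker than negotiating encryption parameters.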

The source encoding apparatus 730 then provides 731 the series of audio output elements 732 for audio output. That is, the audio output elements 732 are transmitted 731 to the receiving device 734 for decoding to obtain the data elements and therefrom the data item. That is, after the handshake has been completed 726, the receiving device 734 awaits an incoming audio element/sound wave that begins with the MFTg established during the handshake. Once an incoming audio signal/sound wave comprising MFTg is detected by the receiving device 734, the receiving device 734 starts recording the incoming signal 732 until a completion frequency tag audio signal CFTg is received.

Once the audio output elements 732 are transmitted 731 , they are received by the receiving device 734 (which may comprise the decoding apparatus or may pass the received audio elements to the decoding apparatus). The decryption algorithm then takes the received and recorded TSSWD (received audio input elements) as an input and deciphers and reconstructs the original data item 700 using the mathematical operations containing parameters that are compatible with the encryption algorithm.

The decoding apparatus 734 comprises at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the decoding apparatus to perform at least the following: convert a series of audio input elements 732 received as audio input into a corresponding series of data elements, the audio input elements being of an audio input element set and the data elements being of a data element set, and the data elements representing respective partial portions of a complete data item 700, wherein each audio input element has a unique input frequency in the audio input set, the audio input set configured to comprise respective unique audio input elements which correspond to the complete range of discrete values available for use as data elements to represent the complete data item 700.

In this example, the decoding apparatus is configured to obtain the data elements representing the data item 700 from the received audio input elements 732 using a decryption process agreed in the earlier handshaking procedure 708, 712, 714, 718, 722, 726. That is, the source encoding apparatus may provide an indication of a mathematical operation (MFTgx) for the decoding apparatus to perform on the received audio input elements to obtain the corresponding data element therefrom. Such a mathematical operation may be considered to be a data-audio input element “look-up table” or conversion method. In examples where the mathematical function used to convert data elements to audio elements (at the source encoding apparatus) and vice versa (at the decoding apparatus) changes during data element conversion and/or audio element transmission, the mathematical operation may be considered to be dynamic as it varies during the data transmission/reception process.

In other examples, the decoding apparatus may be configured to access a data-audio input element look-up table providing, for each audio input element, a corresponding data element, and use the data-audio input element look-up table to convert the series of audio input elements into the corresponding series of data elements representing the complete data item. As described above, using a look-up table method may be less secure than an encryption method but may be suitable for large data items comprising relatively few different data elements for lower security, quicker data transfer.

As described in relation to encryption using different encryption keys during data conversion to audio elements and/or transfer of the audio elements, the decoding apparatus may be configured to convert the series of audio input elements received as audio input into a corresponding series of data elements by being configured to: convert a first portion of the series of audio input elements received as audio input into a first corresponding series of data elements using a first conversion correspondence; receive an indication of a change from using the first conversion correspondence to using a second different conversion correspondence; and convert a second different portion of the series of audio input elements received as audio input into a second corresponding series of data elements using the second different conversion correspondence. If this is the case, then multiple handshaking and encryption strategy stages would be performed (i.e. re-performed with the new encryption parameters each time a new encryption method is to be used).
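The mid-stream change of conversion correspondence can be sketched as follows. The reserved "change marker" frequency and both mapping tables are assumptions for illustration; the disclosure leaves the exact form of the change indication open.

```python
def decode_with_key_change(audio_elements, first_table, second_table, change_marker):
    """Decode a stream whose frequency-to-element mapping changes mid-transmission.

    change_marker is an assumed reserved frequency signalling the switch
    from first_table to second_table.
    """
    table = first_table
    out = []
    for hz in audio_elements:
        if hz == change_marker:
            table = second_table  # switch conversion correspondence
            continue
        out.append(table[hz])
    return out

first = {100.0: "A", 150.0: "B"}
second = {100.0: "B", 150.0: "A"}      # same frequencies, remapped
stream = [100.0, 150.0, 999.0, 100.0]  # 999.0 = assumed change marker
assert decode_with_key_change(stream, first, second, 999.0) == ["A", "B", "B"]
```

Note how the same 100.0 Hz tone decodes to "A" before the marker and "B" after it, which is what makes eavesdropping on a recording of the raw tones harder.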

As an overall summary of the stages which may take place to transfer a data item as a series of audio elements, the sending device (source encoding apparatus) and receiving device (decoding apparatus) may be configured to perform the following steps:

• Data input to the sending device

• Device handshake between the sending device and the receiving device resulting in unique parameters for the encryption and decryption algorithms

• Translation of data item into a TSSWD based on the generated encryption algorithm

• Transmission of the TSSWD to the receiving device

• The receiving device recognises MFTg of transmission and records the incoming audio signal

• The receiving device decodes and reconstructs the recorded TSSWD back to its original form using the decryption algorithm

The process of deriving or converting a series of data elements of a data element set into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, is described in relation to Figures 8 and 13.

This process assigns, for each unique data element (for example, the set of 26 data elements A-Z for each letter of the Roman alphabet, or the set of 256 colours used to provide a 2D image), a corresponding unique audio output frequency (e.g., the letter “A” has a frequency of 10kHz, the letter “B” has a frequency of 15kHz, etc.). Once each data element type has an assigned frequency to represent it as an audio element, then the sending and receiving devices can communicate to transfer the data item represented by the data elements (which can be converted into corresponding audio elements). The audio frequencies assigned to each data element may be set by the encryption procedure described above, or may be assigned using a predetermined look-up table.

Figures 8-12 illustrate a full procedure of data encryption into audio output elements, and transmission.

Figure 8 illustrates an example of converting a data item into a series of text data prior to converting to audio output elements. The data item (an image) 800 may be converted into a 3D decimal matrix of values 802, and/or into a series of Nx1x3 decimal matrices 804. The 3D matrix 802 is a numerical representation of the RGB colour image as it may normally be stored in the memory of a device. To identify a given number in this 3D space 802, a 3D coordinate system may be used to identify which layer, row, and column the number is located at. For example, the number 236 in the top left corner can be represented as “236 [1, 1, 1]”, meaning that the value is ‘236’ and it is found on the first layer, in the first row and in the first column. Concatenating, for example, the columns of one layer as a single long vector may be done, and requires only a single additional item of information indicating after how many intervals the long vector should be cut to rearrange it once again as a 2D matrix. This can be helpful for transferring the data more efficiently without having to use an additional three-digit coordinate for each data point. Each data point may be transferred as a sound frequency with only one additional item of information with two coordinates. For example, if the additional information is [50 40], then “50” may represent how many data points each column holds, and “40” may represent how many columns there are in each layer. That way, it may be easier for the decoding apparatus to reconstruct the transferred data elements back to their original 3D format as data is received.
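The column-wise flattening and two-coordinate metadata described above can be sketched as follows. The function names are illustrative; the metadata format [rows_per_column, columns_per_layer] follows the [50 40] example in the text.

```python
def flatten_layer(layer):
    """Flatten one 2D layer column-by-column into a long vector plus the
    two-item metadata [rows_per_column, columns_per_layer]."""
    rows, cols = len(layer), len(layer[0])
    vector = [layer[r][c] for c in range(cols) for r in range(rows)]
    return vector, [rows, cols]

def rebuild_layer(vector, meta):
    """Cut the long vector at every 'rows' interval to restore the 2D layer."""
    rows, cols = meta
    return [[vector[c * rows + r] for c in range(cols)] for r in range(rows)]

layer = [[236, 12], [7, 99], [3, 4]]   # 3 rows x 2 columns
vec, meta = flatten_layer(layer)
assert vec == [236, 7, 3, 12, 99, 4]
assert meta == [3, 2]
assert rebuild_layer(vec, meta) == layer
```

Only the two metadata values travel alongside the stream, instead of a three-coordinate label per data point, which is the efficiency gain the passage describes.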

In some examples, the source encoding apparatus may be configured to analyse the complete data item to identify one or more data elements required to represent the complete data item. For example, the source encoding apparatus may determine that a text string comprises 17 unique characters, and determine an audio element set comprising 17 respective audio output frequency signals to represent the different data elements making up the data item.

Figure 9 schematically represents the device handshake procedure described in relation to Figure 3. The sending device 900 sends a sound wave signal comprising three audio tones at 10Hz indicating FTg, separated by time gaps TG of 0.01 ms, followed by a data class frequency tag of 15Hz indicating the data item type, to the receiving device 950.

Once the receiving device 950 recognises the initialization FTg, data class and time gap from the sending device 900 (it detects and registers the frequency tag FTg, registers the data class DC, and generates a random frequency tag of 12 Hz), it responds back to the sending device 900 with a sound wave signal 904 with the randomly generated frequency tag of 12 Hz (RFTg) and the same frequency tag of 10 Hz (FTg) as before. The receiving device 950 can then receive the third sound wave signal 906 from the sending device 900 which includes the same frequency tag (FTg) and the randomly generated frequency tag (RFTg), as an acknowledgement of receipt of the previous sound wave signal 904 and confirmation that the randomly generated frequency tag (RFTg) was correctly received.

Figure 10 schematically represents the data encryption definition step procedure described in relation to Figure 4. Once the initial connection between the sending device 900 and the receiving device 950 has been established, the following steps can be used to generate a way of encrypting the data item for transfer.

The receiving device 950 sends a sound wave signal 908 including a frequency tag MFTg1 of 16Hz generated by a mathematical operation. MFTg1 takes in RFTg generated during the initialization step 304 as an input along with the data class DC and performs a mathematical function to obtain MFTg1. The sending device 900 will receive this signal 908 and proceed to perform a mathematical operation taking MFTg1 as input to obtain MFTg2 that will determine the time series sound wave data (TSSWD) generation strategy (i.e. the audio output elements to use to represent the data elements). In this example MFTg2 is an 18Hz signal. These parameters MFTg2 are then shared 910 with the receiving device 950 by the sending device 900. The receiving device 950 then confirms that the mathematical functions MFTg2 have been received 912 using specific frequency sound waves FTg which are recognisable to both devices.

Figure 11 illustrates a data encryption process in which each data element 1100 (e.g. 0, 1, 2 ... F) undergoes the mathematical function MFTg2 1102 to provide the audio output elements 1104. In this example the letter “A” is assigned a frequency of 130Hz in the assignment operation. This assignment process is performed for all the required characters/data elements in the data element set as shown in the generated table 1106. An example of assigning audio output element frequencies to data elements in a 3D array (an Nx1x3 array) is also illustrated 1108. The table 1106 is a generated look-up table indicating how each character in the hexadecimal system may be represented by an audio element of a particular frequency. The Nx1x3 matrix 1108 shows the data in a similar way to the data 804 shown in Figure 8, which may be represented by a frequency shown in the look-up table 1106. Data 1110 illustrates that each row of the Nx1x3 matrix, e.g. ‘EC’ found on the first layer, first row, and first column, has been represented by 170 Hz and 150 Hz frequency audio elements, using the look-up table 1106. One way of transferring an entire data item is by sending the audio sequence “170 Hz - TG1 - 150 Hz” representing ‘EC’ from the first row of the matrix 1108, then TG2, then “180Hz - TG1 - 80Hz” representing ‘F7’ from the second row of the matrix 1108, then TG2, then “170Hz - TG1 - 170Hz” representing ‘EE’ from the third row of the matrix 1108, then TG2, and so on. Of course, TG1 and TG2 need not necessarily be constant throughout transmission of the data item. Keeping constant time gap lengths may be more efficient than periodically/randomly changing the time gap length during encoding and/or transmission.
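The row-by-row transmission scheme above can be sketched as an event schedule. The look-up table fragment and the TG1/TG2 durations below are assumptions chosen to match the worked example, not values fixed by the disclosure.

```python
# Hypothetical look-up table fragment in the style of table 1106 (hex char -> Hz).
HEX_TO_HZ = {"E": 170.0, "C": 150.0, "F": 180.0, "7": 80.0}
TG1, TG2 = 0.01, 0.02  # assumed intra-row and inter-row gap lengths, in seconds

def build_transmission(rows):
    """Interleave element tones with TG1 inside a row and TG2 between rows,
    yielding a flat schedule of ('tone', hz) and ('gap', seconds) events."""
    events = []
    for i, row in enumerate(rows):
        if i:
            events.append(("gap", TG2))
        for j, char in enumerate(row):
            if j:
                events.append(("gap", TG1))
            events.append(("tone", HEX_TO_HZ[char]))
    return events

events = build_transmission(["EC", "F7"])
# Matches "170 Hz - TG1 - 150 Hz - TG2 - 180 Hz - TG1 - 80 Hz":
assert events == [("tone", 170.0), ("gap", TG1), ("tone", 150.0),
                  ("gap", TG2), ("tone", 180.0), ("gap", TG1), ("tone", 80.0)]
```

Because TG1 and TG2 are distinct, the receiver can segment the tone stream back into rows without any extra framing data.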

Figure 12 illustrates the step of data transmission similar to Figure 6. The upper portion shows a first example 1200 in which the frequencies assigned to each data element in the data element set do not change throughout data encoding and transmission. The lower portion shows a second example 1250 in which the frequencies assigned to each data element in the data element set change at different stages during data encoding and transmission. The FTg, TG, and frequencies assigned to data elements FQ1, FQ2, FQ3, etc. are changed periodically for increased encryption and data transfer security. Figures 13-14 illustrate a full procedure of data encryption into audio output elements, and transmission.

Figure 13 illustrates data organisation by converting the data item into a different datatype representation similarly to Figure 8. In Figure 13 a gene sequence 1300 is converted from a series of “G, A, T, C” elements into a binary vector 1302, and into an ASCII vector 1304. Different datatypes may represent the data item in different ways, and some datatypes may provide for a smaller audio element set to be required to represent the data item than other datatypes. Using a smaller audio element set may allow for quicker initialisation of the sending and receiving devices, as there are fewer data element to audio output element assignments to register at the two devices, and it may allow for improved use of the available frequency space if fewer frequencies need to be used to represent the data item. The selected frequencies may also be spread further apart in frequency space (e.g. 10Hz, 60Hz, 110Hz to represent three data elements in a frequency space of 10-110 Hz, compared with 10Hz, 20Hz, 30Hz ... 110Hz to represent 11 data elements in the same space), which may help provide clearer audio signal transmission if the audio output elements are more clearly differentiated from each other in frequency.

Figure 14 illustrates a data encryption process similarly to Figure 11 in which each data element 1400 (groups of 0s and 1s in this example because the data item has been converted into binary) undergoes the mathematical function MFTg2 1402 to provide the audio output elements 1404. In this example the symbol is assigned a frequency of 130Hz in the assignment operation. This assignment process is performed for all the required characters/data elements in the data element set as shown in the generated table 1406.

If built-in mathematical functions are used for encrypting the data elements into audio output elements, then these mathematical functions are built-in to the sending and receiving devices with the input parameters obtained during the device handshake procedure. FTg, CFTg, and TG are then built-in parameters for initiating, completing and encrypting transmission data, respectively. In some examples where no user specification of mathematical/encoding function is required, the source encoding device and decoding device may operate as a fully automated independent system. Lossless compression can be performed if necessary before the data encoding and transmission takes place so as to increase possible throughput. Depending on the trade-off between the compression time for data size reduction and the time required for transferring the compressed data (compared to the time required to transfer the non-compressed data) from the sending device to the receiving device, data compression may or may not be desirable (e.g. to reduce data transfer time). In some examples the source encoding apparatus may be configured to automatically determine whether data compression may be performed prior to data element conversion to audio elements, by determining the time required to compress the data item and convert the data elements of the compressed data item to audio elements, and determining the time required to convert the data elements of the non-compressed data item to audio elements, and comparing the two times to determine which data conversion process (with compression or without compression) allows for faster data transfer.
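The compression trade-off decision described above can be sketched as a comparison of the two total times. The channel-throughput model and all parameter values below are illustrative assumptions, not figures from the disclosure.

```python
def should_compress(raw_bytes, compressed_bytes, compress_seconds,
                    bytes_per_second):
    """Compare total times for the two paths: compress-then-transmit
    versus transmit-as-is. bytes_per_second models the audio channel's
    effective throughput after data-to-tone conversion."""
    time_with = compress_seconds + compressed_bytes / bytes_per_second
    time_without = raw_bytes / bytes_per_second
    return time_with < time_without

# Compressing 10 kB down to 2 kB in 0.5 s pays off on a slow 1 kB/s channel:
assert should_compress(10_000, 2_000, 0.5, 1_000) is True
# ...but not when compression barely shrinks the data:
assert should_compress(10_000, 9_800, 0.5, 1_000) is False
```

Audio channels of this kind are slow compared with typical compression speeds, so on this model compression wins whenever it meaningfully reduces the data size.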

If the mathematical functions used for encrypting the data elements into audio output elements are user defined, then the mathematical functions are defined by the end user (operator) with input parameters obtained from the device handshake. In this case, FTg, CFTg and TG are user defined parameters for initiating, completing and encrypting transmission data, respectively. Again, lossless compression can be performed if necessary before the data encoding and transmission takes place so as to increase possible throughput.

Figure 15 shows the main steps of a method of using a source encoding apparatus. The method comprises: one or more of deriving or converting a series of data elements of a data element set, the data elements representing respective portions of a complete data item to be transmitted, into a corresponding series of audio output elements from an audio output element set, wherein each audio output element has a unique output frequency in the audio output set, the audio output set configured to comprise respective unique audio output elements which correspond to the complete range of discrete values available for use to represent the complete data item 1502; and providing the series of audio output elements for audio output 1504.

Figure 16 shows the main steps of a method of using a decoding apparatus. The method comprises: converting a series of audio input elements received as audio input into a corresponding series of data elements, the audio input elements being of an audio input element set and the data elements being of a data element set, and the data elements representing respective partial portions of a complete data item 1602, wherein each audio input element has a unique input frequency in the audio input set, the audio input set configured to comprise respective unique audio input elements which correspond to the complete range of discrete values available for use as data elements to represent the complete data item.

Figure 17 shows an example computer-readable medium comprising a computer program configured to perform, control or enable the methods of Figures 15 or 16, or any other method described herein. The computer program may comprise computer code configured to perform the method(s). In this example, the computer/processor readable medium 1700 is a disc such as a digital versatile disc (DVD) or a compact disc (CD). In other examples, the computer/processor readable medium 1700 may be any medium that has been programmed in such a way as to carry out an inventive function. The computer/processor readable medium 1700 may be a removable memory device such as a memory stick or memory card (SD, mini SD, micro SD or nano SD card).

It will be appreciated by the skilled reader that any mentioned apparatus/device and/or other features of particular mentioned apparatus/device may be provided by apparatus arranged such that they become configured to carry out the desired operations only when enabled, e.g. switched on, or the like. In such cases, they may not necessarily have the appropriate software loaded into the active memory in the non-enabled (e.g. switched-off) state and may only load the appropriate software in the enabled (e.g. switched-on) state. The apparatus may comprise hardware circuitry and/or firmware. The apparatus may comprise software loaded onto memory. Such software/computer programs may be recorded on the same memory/processor/functional units and/or on one or more memories/processors/functional units.

In some examples, a particular mentioned apparatus/device may be pre-programmed with the appropriate software to carry out desired operations, and wherein the appropriate software can be enabled for use by a user downloading a “key”, for example, to unlock/enable the software and its associated functionality. Advantages associated with such examples can include a reduced requirement to download data when further functionality is required for a device, and this can be useful in examples where a device is perceived to have sufficient capacity to store such pre-programmed software for functionality that may not be enabled by a user.

It will be appreciated that any mentioned apparatus/circuitry/elements/processor may have other functions in addition to the mentioned functions, and that these functions may be performed by the same apparatus/circuitry/elements/processor. One or more disclosed aspects may encompass the electronic distribution of associated computer programs and computer programs (which may be source/transport encoded) recorded on an appropriate carrier (e.g. memory, signal).

It will be appreciated that any“computer” described herein can comprise a collection of one or more individual processors/processing elements that may or may not be located on the same circuit board, or the same region/position of a circuit board or even the same device. In some examples one or more of any mentioned processors may be distributed over a plurality of devices. The same or different processor/processing elements may perform one or more functions described herein.

It will be appreciated that the term “signalling” may refer to one or more signals transmitted as a series of transmitted and/or received signals. The series of signals may comprise one, two, three, four or even more individual signal components or distinct signals to make up said signalling. Some or all of these individual signals may be transmitted/received simultaneously, in sequence, and/or such that they temporally overlap one another.

With reference to any discussion of any mentioned computer and/or processor and memory (e.g. including ROM, CD-ROM etc.), these may comprise a computer processor, Application Specific Integrated Circuit (ASIC), field-programmable gate array (FPGA), and/or other hardware components that have been programmed in such a way to carry out the inventive function.

The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole, in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that the disclosed examples may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the disclosure.

While there have been shown and described and pointed out fundamental novel features as applied to different examples thereof, it will be understood that various omissions and substitutions and changes in the form and details of the devices and methods described may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Moreover, it should be recognised that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or example may be incorporated in any other disclosed or described or suggested form or example as a general matter of design choice. Furthermore, in the claims means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures. Thus, although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface, in the environment of fastening wooden parts, a nail and a screw may be equivalent structures.