


Title:
METHOD AND ARRANGEMENT FOR CAPTURING OF VOICE DURING A TELEPHONE CONFERENCE
Document Type and Number:
WIPO Patent Application WO/2007/053003
Kind Code:
A1
Abstract:
Method of and arrangement for associating voice clips with a document, the document being reviewed by at least one participant during a telephone conference set up between a convener communication set (T5, C5) and one or more participant communication sets (T1-T4, C1-C4), where the voice clips contain spoken comments by the participants. The voice clips are associated with locations in a section of the document, synchronized in time such that each voice clip is associated with a certain location in the section and contains speech that was spoken by the participant at the time that the image of the section was shown to the participant.

Inventors:
NOLDUS ROGIER AUGUST CASPAR JO (NL)
DEN HARTOG JOS (NL)
HU YUN CHAO (NL)
MOERDIJK ARD-JAN (NL)
VAN DER MEER JAN (NL)
Application Number:
PCT/NL2005/050027
Publication Date:
May 10, 2007
Filing Date:
October 31, 2005
Assignee:
ERICSSON TELEFON AB L M (SE)
NOLDUS ROGIER AUGUST CASPAR JO (NL)
DEN HARTOG JOS (NL)
HU YUN CHAO (NL)
MOERDIJK ARD-JAN (NL)
VAN DER MEER JAN (NL)
International Classes:
H04M3/56; H04M7/00
Domestic Patent References:
WO2004014054A1 (2004-02-12)
Foreign References:
US5915091A (1999-06-22)
Attorney, Agent or Firm:
VAN WESTENBRUGGE, Andries (Postbus 29720, LS Den Haag, NL)
Claims:


1. Communication system comprising a convener communication set for a convener of a telephone conference and at least one participant communication set, at least one processor (29; 30; 31) connected to at least one database (DB1; DB2; DB3) and at least one recorder (R1-R4) arranged to receive and record speech from at least one participant to said telephone conference, the convener communication set comprising a convener telephone (T5) and a convener computer arrangement (C5), each participant communication set comprising a participant telephone (T1-T4) and a participant computer arrangement (C1-C4), the at least one processor (29; 30; 31) being arranged to:

- present an image of at least one section of a document to said convener computer arrangement (C5) and said participant computer arrangement (C1-C4);

- instruct said at least one recorder (R1-R4) to record speech of said at least one participant and store recorded speech in said at least one database (DB1; DB2; DB3);

- transform said recorded speech into one or more voice clips, and

- associate these voice clips with locations in said at least one section synchronized in time such that each voice clip is associated with a certain location in said at least one section and contains speech that was spoken by said at least one participant at the time that the image of said section was shown to the at least one participant.

2. Communication system according to claim 1, wherein the communication system comprises a phone conference system (PCS) connected to a voice database (DB1), a post processor (PP) connected to a post processor database (DB2) and a net meeting system (NMS) connected to a net meeting system database (DB3), the phone conference system (PCS) comprising a phone conference processor (31) and the net meeting system comprising a net meeting processor (29), said net meeting processor (29) being arranged to:

- present an image of at least one section of a document to said convener computer arrangement (C5) and said participant computer arrangement (C1-C5);

said phone conference processor (31) being arranged to:

- instruct said at least one recorder (R1-R4) to record speech of said at least one participant and store recorded speech in said voice database (DB1); said post processor being arranged to:

- receive said at least one section of said document and said recorded speech;

- transform said recorded speech into one or more voice clips, and

- associate these voice clips with locations in said at least one section synchronized in time such that each voice clip is associated with a certain location in said at least one section and contains speech that was spoken by said at least one participant at the time that the image of said section was shown to the at least one participant.

3. Communication system according to claim 1, wherein the communication system comprises a phone conference system (PCS) connected to a voice database (DB1) and a net meeting system (NMS) connected to a net meeting system database (DB3), the phone conference system (PCS) comprising a phone conference processor (31) and the net meeting system comprising a net meeting processor (29), said net meeting processor (29) being arranged to:

- present an image of at least one section of a document to said convener computer arrangement (C5) and said participant computer arrangement (C1-C5); said phone conference processor (31) being arranged to:

- instruct said at least one recorder (R1-R4) to record speech of said at least one participant and store recorded speech in said voice database (DB1); said net meeting processor (29) being also arranged to:

- receive said at least one section of said document and said recorded speech;

- transform said recorded speech into one or more voice clips, and

- associate these voice clips with locations in said at least one section synchronized in time such that each voice clip is associated with a certain location in said at least one section and contains speech that was spoken by said at least one participant at the time that the image of the section was shown to the at least one participant.

4. Communication system according to claim 2 or 3, wherein said at least one recorder (R1-R4) is arranged to start recording automatically when it receives a human voice signal.

5. Communication system according to any of the claims 1, 2 or 3, wherein said at least one recorder (R1-R4) is arranged to record speech from each participant independent from recording speech from any other participant.

6. Communication system according to claim 2, wherein said net meeting processor is also arranged to register information as to a location of a cursor in said at least one section as a function of time and to send said information to said post processor, said post processor being arranged to use said information to associate said voice clips with locations in said at least one section.

7. Communication system according to claim 3, wherein said at least one recorder (R1-R4) is arranged to record speech of said at least one participant in the form of a series of speech blocks (SP(n)) per participant, and said phone conference system is arranged to send individual speech blocks (SP(n)) to said net meeting processor (29) each time a speech block (SP(n)) is recorded, said net meeting processor being arranged to include one speech block in each voice clip upon receipt of said one speech block (SP(n)).

8. Communication system according to claim 6, wherein said phone conference system (PCS) is arranged to send the following information to said net meeting processor (29) regarding each speech block (SP(n)): an identification of a participant associated with said speech block (SP(n)), a number identifying said speech block (SP(n)), and the time at which recording of said speech block (SP(n)) started.

9. Communication system according to claim 7 or 8, wherein said net meeting processor (29) is also arranged to register information as to a location of a cursor in said at least one section as a function of time and to use said information to associate said voice clips with locations in said at least one section.

10. Communication system according to any of the preceding claims, wherein said at least one processor (29; 30; 31) is arranged to generate a graphical presentation of attendance of each of the participants to the telephone conference.

11. Communication system according to any of the preceding claims, wherein said at least one processor (29; 30; 31) is arranged to produce data as to at least one of:

- average total speaking time per participant;

- total speaking time for each participant;

- average duration of said voice clips;

- an interruption rate indication per participant.

12. Processor system comprising at least one processor (29; 30; 31) connected to at least one database (DB1; DB2; DB3) and at least one recorder (R1-R4) arranged to receive and record speech from at least one participant to a telephone conference set up between a convener communication set and at least one participant communication set, said convener communication set comprising a convener telephone (T5) and a convener computer arrangement (C5), each participant communication set comprising a participant telephone (T1-T4) and a participant computer arrangement (C1-C4), the at least one processor (29; 30; 31) being arranged to:

- present an image of at least one section of a document to said convener computer arrangement (C5) and said participant computer arrangement (C1-C4);

- instruct said at least one recorder (R1-R4) to record speech of said at least one participant and store recorded speech in said at least one database (DB1; DB2; DB3);

- transform said recorded speech into one or more voice clips, and

- associate these voice clips with locations in said at least one section synchronized in time such that each voice clip is associated with a certain location in said at least one section and contains speech that was spoken by said at least one participant at the time that the image of said section was shown to the at least one participant.

13. Method of associating voice clips with a document, said document being reviewed by at least one participant during a telephone conference set up between a convener communication set and at least one participant communication set, said convener communication set comprising a convener telephone (T5) and a convener computer arrangement (C5), each participant communication set comprising a participant telephone (T1-T4) and a participant computer arrangement (C1-C4), the method comprising:

- presenting an image of at least one section of said document to said convener computer arrangement (C5) and said participant computer arrangement (C1-C4);

- recording speech of said at least one participant and storing recorded speech in at least one database (DB1; DB2; DB3);

- transforming said recorded speech into one or more voice clips, and

- associating these voice clips with locations in said at least one section synchronized in time such that each voice clip is associated with a certain location in said at least one section and contains speech that was spoken by said at least one participant at the time that the image of the section was shown to the at least one participant.

14. A digital document having voice clips associated therewith in accordance with the method as claimed in claim 13.

15. Computer program product comprising data and instructions arranged to provide a processor with the capacity to perform the method according to claim 13.

16. Data carrier storing a computer program product as claimed in claim 15.

*****

Description:

Method and arrangement for capturing of voice during a telephone conference.

Field of the invention

The invention relates to the field of telephone conferences set up in a communication system.

Background of the invention

Telephone conferences may be set up in communication systems between two or more telephones of participants to the telephone conference. The set up of such telephone conferences may be done in several different ways. E.g., there may be a convener of the telephone conference who sets up the conference by operating suitable key combinations on his telephone. Alternatively, a telephone company may have a service to set up such a conference where the initiative is with an operator of the telephone company. As a further alternative, a telephone conference may be set up by allowing a number of participants whose telephone numbers are known to call in at a certain telephone number while using an access code, after which the conference starts automatically. Such known ways of setting up a telephone conference may also be used in the present invention.

During such a telephone conference, the participants and, if present, also the convener may be shown an image of a section of a document on a monitor or display, which document is to be reviewed by all participants. Comments made by the participants can be noted by, e.g., the convener or another participant and put into the document at a later moment in time.

Summary of the invention

It is an object of the present invention to improve the way comments made by participants regarding a document shown to the participants during a telephone conference are associated with the document.

To that end, the invention provides a communication system comprising a convener communication set for a convener of a telephone conference and at least one participant communication set, at least one processor connected to at least one database and at least one recorder arranged to receive and record speech from at least one participant to said telephone conference, the convener communication set comprising a convener telephone and a convener computer arrangement, each participant communication set comprising a participant telephone and a participant computer arrangement, the at least one processor being arranged to:

- present an image of at least one section of a document to said convener computer arrangement and said participant computer arrangement;

- instruct said at least one recorder to record speech of said at least one participant and store recorded speech in said at least one database;

- transform said recorded speech into one or more voice clips, and

- associate these voice clips with locations in said at least one section synchronized in time such that each voice clip is associated with a certain location in said at least one section and contains speech that was spoken by said at least one participant at the time that the image of said section was shown to the at least one participant.

In an embodiment, the invention is directed to a processor system comprising at least one processor connected to at least one database and at least one recorder arranged to receive and record speech from at least one participant to a telephone conference set up between a convener communication set and at least one participant communication set, said convener communication set comprising a convener telephone and a convener computer arrangement, each participant communication set comprising a participant telephone and a participant computer arrangement, the at least one processor being arranged to:

- present an image of at least one section of a document to said convener computer arrangement and said participant computer arrangement;

- instruct said at least one recorder to record speech of said at least one participant and store recorded speech in said at least one database;

- transform said recorded speech into one or more voice clips, and

- associate these voice clips with locations in said at least one section synchronized in time such that each voice clip is associated with a certain location in said at least one section and contains speech that was spoken by said at least one participant at the time that the image of said section was shown to the at least one participant.

In a further embodiment, the invention is directed to a method of associating voice clips with a document, the document being reviewed by at least one participant during a telephone conference set up between a convener communication set and at least one participant communication set, said convener communication set comprising a convener telephone and a convener computer arrangement, each participant communication set comprising a participant telephone and a participant computer arrangement, the method comprising:

- presenting an image of at least one section of said document to said convener computer arrangement and said participant computer arrangement;

- recording speech of said at least one participant and storing recorded speech in at least one database;

- transforming said recorded speech into one or more voice clips, and

- associating these voice clips with locations in said at least one section synchronized in time such that each voice clip is associated with a certain location in said at least one section and contains speech that was spoken by said at least one participant at the time that the image of the section was shown to the at least one participant.

In a still further embodiment the invention is directed to a document produced by such a method.

In a still further embodiment the invention is directed to a computer program product comprising data and instructions arranged to provide a processor with the capacity to perform such a method.

Finally, in an embodiment, the invention is directed to a data carrier comprising such a computer program product.

An advantage of the present invention is that voice clips containing spoken comments made by the participants during the telephone conference are integrated in the document. Such voice clips may then be shown on a monitor together with an image of the document and can be played back. Such playing back can, e.g., be done by placing a cursor on the voice clip on the monitor and then activating it by double-clicking with a mouse or the like.

Brief description of the drawings

The invention will be explained in detail with reference to some drawings that are only intended to show embodiments of the invention and not to limit the scope. The scope of the invention is defined in the annexed claims and by its technical equivalents.

The drawings show:

Figure 1 shows a communication network that can be used in the present invention;

Figure 2 shows a setup of a computer arrangement;

Figure 3 shows some functional details of a phone conference system, a post processor and a net meeting system that can be used in the present invention;

Figure 4 shows a timing diagram of recording of speech;

Figure 5 shows a diagram of voice capturing of a plurality of participants;

Figures 6A-6D show flow charts of the functionality of the communication system shown in figure 1 in several different embodiments;

Figure 7 shows an example of a text document having voice clips added to it.

Detailed description of embodiments

In figure 1, an overview is given of a communication system in which the present invention can be used. Figure 1 shows a plurality of telephones T1, T2, T3, T4, which are connected to a communication network N2. It is observed that the number of telephones T1-T4 is shown to be equal to four, but that is only intended as an example. Any arbitrary number of telephones may be used.

Figure 1 also shows a number of computer arrangements C1, C2, C3, C4. Again, the number of computer arrangements shown is not intended to limit the invention. Any arbitrary number of computer arrangements may be used.

In general, there will be one communication set of a telephone and computer arrangement T1/C1, T2/C2, T3/C3, T4/C4 for each participant to a telephone conference. However, one such communication set may be used by two or more people at the same time.

It is observed that figure 1 shows non-mobile telephones T1-T4 with fixed lines to the communication network N2. However, as will be understood by a person skilled in the art, the telephones T1-T4 may be implemented as mobile telephones or, alternatively, as telephones with a wireless receiver. Moreover, any kind of local switch may be applied between the telephones T1-T4 and the communication network N2. In a further alternative, the telephones T1-T4 and the computer arrangements C1-C4 may be combined in one single apparatus. I.e., each participant may have one apparatus that has both the functionality of a telephone and of a computer arrangement.

The computer arrangements C1-C4 are shown as stand alone devices, e.g., personal desktop computers or personal laptop computers. However, they may be implemented as terminals connected to a server, or the like, where the server provides the main functionalities required.

As shown in figure 1, the computer arrangements C1-C4 are connected to a further communication network N1. The communication network N1 may be any communication network arranged to transmit digital signals, like the Internet. However, it is emphasized that, in an embodiment, the communication networks N1, N2 are one and the same communication network that is arranged to transmit both analog and digital signals. Moreover, the communication networks N1, N2 may be one single digital communication network arranged to transmit both the voice signals received from the telephones T1-T4 and the signals received from the computer arrangements C1-C4. Of course, then, the analog signals as received by the telephones T1-T4 from the participants should be converted from the analog to the digital domain before being transmitted via the digital communication network. Voice over IP (VoIP) is an example of how voice signals and digital data can be transmitted through one single digital communication network.

Figure 1 also shows a communication set of a telephone T5 and a computer arrangement C5. The telephone T5 is connected to the communication network N2, whereas the computer arrangement C5 is connected to the communication network N1. For the communication set comprising the telephone T5 and the computer arrangement C5 the same remarks can be made as for the other combinations of telephones T1-T4 and computer arrangements C1-C4.

A phone conference system PCS, a post processor PP and a net meeting system NMS are also shown in figure 1. The phone conference system PCS is, in the embodiment of figure 1, connected both to the communication network N2 and the communication network N1. Also, the post processor PP is connected to both the communication network N2 and the communication network N1. The net meeting system NMS is shown to be connected to the communication network N1. The phone conference system PCS has a voice database DB1. The post processor has a post processor database DB2. The net meeting system NMS has a net meeting database DB3.

It is observed that the phone conference system PCS, the post processor PP and the net meeting system NMS may be partly or in whole implemented in one single server having only one single processor. Similarly, the databases DB1, DB2, DB3 may be implemented as a single database. Figure 1 shows but one way of implementing the functionality of the present invention, as further explained hereinafter with reference to figures 6A-6D.

In figure 2, an overview is given of a computer arrangement that can be used to implement any of the computer arrangements C1-C5. The computer arrangement comprises a processor 1 for carrying out arithmetic operations. The processor 1 is connected to a plurality of memory components, including a hard disk 5, Read Only Memory (ROM) 7, Electrically Erasable Programmable Read Only Memory (EEPROM) 9, and Random Access Memory (RAM) 11. Not all of these memory types need necessarily be provided. Moreover, these memory components need not be located physically close to the processor 1 but may be located remote from the processor 1.

The processor 1 is also connected to means for inputting instructions, data etc. by a user, like a keyboard 13, and a mouse 15. Other input means, such as a touch screen, a track ball and/or a voice converter, known to persons skilled in the art may be provided too.

A reading unit 17 connected to the processor 1 is provided. The reading unit 17 is arranged to read data from and possibly write data on a data carrier like a floppy disk 19 or a CD-ROM 21. Other data carriers may be tapes, DVD, etc. as is known to persons skilled in the art.

The processor 1 is also connected to a printer 23 for printing output data on paper, as well as to a display 3, for instance, a monitor or LCD (Liquid Crystal Display) screen, or any other type of display known to persons skilled in the art.

The processor 1 is connected to communication network N1 by means of I/O (input/output) means 25. The processor 1 may be arranged to communicate with other communication arrangements through communication network N1.

The processor 1 may be implemented as a stand alone system, or as a plurality of parallel operating processors each arranged to carry out subtasks of a larger computer program, or as one or more main processors with several sub processors. Parts of the functionality of the invention may even be carried out by remote processors communicating with processor 1 through the communication network N1.

It is observed that the setup of the computer arrangement shown in figure 2 may, essentially, also be used to implement the phone conference system PCS, the post processor PP and the net meeting system NMS. However, it is also observed that the phone conference system PCS, the post processor PP and the net meeting system NMS need not necessarily comprise all of the components shown in figure 2. It is only essential that they have some kind of processor and some kind of memory where the memory stores instructions and data, such that the processor can perform its task in accordance with the present invention. These tasks will be explained in detail hereinafter.

However, before explaining these tasks in detail, first of all reference is made to figure 3. Figure 3 shows examples of the phone conference system PCS, the post processor PP and the net meeting system NMS.

As shown, the phone conference system PCS may comprise a plurality of recorders R1, R2, R3, R4. Again, the number of recorders shown (i.e., four) is not intended to limit the present invention. This number is shown only by way of example. Each of the recorders R1-R4 is shown to be connected to a line L1, L2, L3, L4, which is connected to the communication network N2. These lines L1-L4 may be implemented as four separate physical lines. However, as persons skilled in the art will know, there may be just one physical line arranged to transport a plurality of signals, one signal per recorder R1-R4, e.g., in the form of any known multiplexing technique. In the same way, the recorders R1-R4 can be implemented as one single recorder arranged to receive such multiplexed speech signals and to demultiplex them before recording.

The lines L1-L4 need not be physical lines: they may be implemented as one or more wireless connections. This statement also holds for any other communication connection shown in any of the figures.

Each one of the recorders R1-R4 is connected to a phone conference processor 31. The phone conference processor 31 is arranged to perform its tasks as instructed by a computer program stored in the form of instructions and data in a memory (not shown).

The recorders R1-R4 are also connected to the voice database DB1.

Together, the recorders R1-R4 may be located before a conference bridge. A conference bridge is established in an exchange of a telephone network for the duration of a telephone conference and is known from the prior art. Such conference bridges can be easily implemented in circuit switched networks. Logging in to such a conference bridge may be done by calling a predetermined telephone number followed by an identification code that identifies that conference bridge. Implementing recorders R1-R4 before such a conference bridge may use techniques such as those used when tapping telephone lines, e.g. for lawful interception. Requirements specifications and implementations for lawful interception may be found in: for GSM, the 3GPP standards TS 41.033, 42.033 and 43.033, and for UMTS, the 3GPP standards TS 33.106 and 33.107.

Figure 3 shows that the post processor PP comprises a processor 30. This processor 30 is arranged to perform specific tasks as instructed by a computer program that is stored in the form of instructions and data in a memory (not shown in figure 3).

Figure 3 shows that the net meeting system NMS comprises a net meeting processor 29. This net meeting processor 29 is arranged to perform specific tasks as instructed by a computer program that is stored in the form of instructions and data in a memory (not shown in figure 3). The processors 29, 30 and 31 are arranged to communicate with one another via the communication network N1. Of course, other arrangements for communicating between the processors 29, 30 and 31 can be made without departing from the scope of the present invention.

The setup of the system shown in figures 1, 2 and 3 may advantageously be used for telephone conferences. During such a telephone conference one or more participants use a communication set of one telephone T1-T4 and one computer arrangement C1-C4. These participants may be located within one and the same building. However, the participants may also be located a long distance away from one another.

In accordance with the present invention, the system is used to review one document with a plurality of participants and store comments in the form of voice contributions of the different participants as to the content of the document, e.g., in voice database DB1.

It is assumed that there is one convener of the telephone conference who is operating the communication set of the telephone T5 and the computer arrangement C5. Of course, the convener may be a participant to the telephone conference himself. Thus, the term "convener" is only used to identify a specific role to be played by one of the participants. By operating the input devices of his computer arrangement, like a keyboard or a mouse, the convener arranges for images of (sections of) the document to be shown on any of the monitors of the computer arrangements C1-C4. The convener decides which section of the document is shown on the monitors of the computer arrangements C1-C5 at which moment in time.

In general terms, the concept of the present invention is as follows.

The participants review the document as shown on their monitors section by section. At the same time that the participants are viewing these sections of the document on their respective monitors, their telephones T1-T4 have established a voice connection with the phone conference system PCS. This connection to the phone conference system PCS during a telephone conference may be made by any method known in the art. For instance, many telephone companies offer the option of setting up such a telephone conference by an operator of the telephone company. Alternatively, the convener may use his telephone T5 to set up a telephone conference in a way known from the art.

In a preferred embodiment, the participants are registered beforehand with their respective telephone numbers such that the system stores their Calling Line Identification (CLI), e.g., in database DB1. The CLI is used to link each participant with a telephone line.
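Purely by way of a non-limiting sketch (the table layout, telephone numbers and names below are assumptions made for illustration, not part of the application), such a pre-registration could be held as a simple mapping from CLI to participant name:

```python
# Hypothetical sketch of the pre-registration described above.
# CLI (calling line identification) -> registered participant name.
REGISTERED_PARTICIPANTS = {
    "+31701234001": "Participant 1",
    "+31701234002": "Participant 2",
    "+31701234003": "Participant 3",
    "+31701234004": "Participant 4",
    "+31701234005": "Convener",
}

def participant_for_line(cli: str) -> str:
    """Link an incoming call to a registered participant via its CLI."""
    return REGISTERED_PARTICIPANTS.get(cli, "unknown participant")
```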

To each section of the document shown on the monitors of the computer arrangements C1-C5, the participants of the meeting may provide comments. Each participant's comment on a particular section is captured in audio format and is stored such that it can be added to the document as an audio component (e.g., as a voice clip or audio object). The audio component is placed in the document as an audio comment to a particular section, paragraph, figure, etc.

In an embodiment, the meeting convener decides when a particular participant's speech should be captured and embedded in the document. Hence, whenever one of the meeting participants is "given the floor" (i.e., starts talking), in this embodiment, the convener activates the voice capture mode. When the convener wants to capture the voice of a certain participant associated with line L1, his computer arrangement C5 translates that into an instruction towards the phone conference system PCS to record the voice received on line L1.

The phone conference system PCS receives the speech from all participants. Figure 4 shows an example of a speech contribution of one of the participants as a function of time t, in the form of a plurality of speech blocks SB(1), SB(2), ..., SB(n), ..., SB(N). So, the speech of the participant shown in figure 4 is characterized by silent periods between speech blocks SB(n). The phone conference system PCS receives these speech blocks SB(n) and stores them in the voice database DB1. In order to allow the proper association between the stored speech blocks SB(n) and the document, each of the speech blocks SB(n) may be stored together with a time stamp, as well as an identification of the participant. This time stamp is synchronized with the registration of where the cursor was located in the section of the document as shown to all participants on their respective monitors. Instead of a cursor, any other kind of identification of the section shown on the monitors may be used. In an embodiment, that identification of the section shown is included in the speech files as stored in the voice database DB1. Thus, each speech file of any of the participants contains the speech of the participant, the time stamp and the section identification. In an embodiment, after the meeting, the speech files may be embedded in the document by the post processor PP. The post processor PP can produce a "presence graph". The post processor PP then uses the pre-registered information (participant name and CLI) to add a name to the presence graphs. The post processor PP uses the output produced by the phone conference system PCS to produce the graphs. It uses the names to give meaning to the graph. When generating the marked-up document, the post processor PP also needs the names to put these in "text balloons" in the document.
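By way of a non-limiting sketch (the class and field names are assumptions chosen for illustration, not prescribed by the application), a stored speech file as described above could carry the recorded speech together with the time stamp, the participant identification and the section identification:

```python
from dataclasses import dataclass

@dataclass
class SpeechBlockRecord:
    """One stored speech file, as described above (illustrative field names)."""
    participant_id: str   # e.g. the CLI or registered name of the speaker
    block_number: int     # n in SB(n)
    start_time: float     # time stamp at which recording of the block started
    section_id: str       # identification of the document section shown at that time
    audio: bytes          # the recorded speech itself (e.g. an encoded audio payload)
```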

Now, a detailed description of the functionality of the phone conference system PCS, the post processor PP and the net meeting system NMS will be given for four different embodiments.

In a first embodiment that will be explained with reference to figure 6A, the convener who is operating the telephone T5 and the computer arrangement C5 is in full control over the voice capturing process.

In the second embodiment the convener is still in full control of the voice capturing process. However, in this embodiment the functionality of the post processor PP is implemented in the net meeting system NMS.

In the third embodiment, the convener is no longer in full control of the voice capturing process. The voice capturing is performed more or less fully automatically, as will be explained with reference to figure 6C. In the fourth embodiment, which is explained with reference to figure 6D, the voice capturing process is again done more or less automatically. Moreover, the functionality of the post processor PP is implemented in the net meeting system NMS.

Embodiment 1

In this first embodiment, the convener is operating the communication set comprising the telephone T5 and the computer arrangement C5. The convener is in full control of the document review process. The convener determines the section in the document that is to be reviewed by all of the participants. The participants have access to the system by means of the plurality of communication sets comprising a telephone T1-T4 and a computer arrangement C1-C4.

As shown in figure 6A, a first action 6A1 is that the net meeting system NMS receives instructions from the computer arrangement C5 of the convener to present an image of the document to the computer arrangements C1-C4 of the participants. This will be done by showing such an image of the document on the monitors of the computer arrangements C1-C4. The convener, by means of his computer arrangement C5, is in control of which section of the document is shown to the participants on their respective monitors. The convener also decides about the position of the cursor in the document.

All participants and the convener may speak to one another by means of their telephones T1-T5. E.g., the convener invites one of the participants to provide a comment on the section of the document as shown on the monitor. The net meeting system NMS provides the convener with a menu on his monitor (e.g. in the form of selection buttons that should be clicked or double-clicked with a mouse), which allows the convener to select one or more voice contributions of one or more of the participants to be captured by the phone conference system PCS. To that end, the net meeting system NMS receives a voice capturing instruction from the computer arrangement C5 of the convener for one or more participants, as shown in action 6A3. In action 6A5, the net meeting system NMS sends an instruction to the phone conference system PCS to start capturing the voice of one or more of the participants. Such an instruction comprises one or more of the following items of information (a sketch of such an instruction message is given after this list):

- the identity of the telephone T1-T4 from which the voice should be captured. For this purpose, the calling line identification (CLI) could be used. This will be used by the phone conference system PCS to capture the appropriate participant's voice.

- the current location of the cursor in the document as shown on the monitors of the computer arrangements C1-C5. This will be used by the post processor PP, at a later moment in time, to combine the captured voice with the appropriate location in the document.

- the time of capturing the voice: this can be used by the post processor PP so that the moment in time at which a comment was made can later be read from the system.

- the name of the participant whose voice should be captured. This can be used by the post processor PP, so that, after a post processing action, the name of the person whose voice is captured can be placed in the document, together with the comment given. It is observed that providing the name of a participant is not strictly necessary. Sometimes, one can rely on the CLI to derive the name of the participant. The post-processing system can add the name when generating the marked-up document. However, if one phone connection is shared by two or more persons (e.g. a loudspeaker phone), then it may indeed be needed to provide the name of the speaker, as the CLI will in that case not be enough.
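Purely as an illustrative sketch (the message layout and field names are assumptions, not prescribed by the application), the voice capturing instruction sent in action 6A5 could bundle the items listed above as follows:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StartCaptureInstruction:
    """Instruction from the net meeting system NMS to the phone conference system PCS
    (illustrative sketch of the items listed above)."""
    cli: str                      # identity of the telephone T1-T4 whose voice is captured
    cursor_location: str          # current cursor location in the document section
    capture_time: float           # time at which voice capturing is requested to start
    participant_name: Optional[str] = None  # optional; may be derived from the CLI instead
```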

In action 6A7, the phone conference system PCS receives this instruction from the net meeting system NMS to start capturing the voice from one or more of the participants.

After having received that instruction, in action 6A9, the phone conference system PCS starts capturing the voice from the identified one or more participants and stores the captured voice from the one or more participants in the form of audio files in the voice database DB1.

At a certain moment in time, the convener decides to stop the telephone conference or to change the voice capturing from one or more of the participants to one or more other participants. In action 6A13, the net meeting system NMS receives an instruction to that effect from the computer arrangement C5 of the convener. The net meeting system NMS converts such an instruction to an instruction for the phone conference system PCS to stop capturing the voice entirely or to change to one or more other participants. This instruction is received, in action 6A11, by the phone conference system PCS. After having received that instruction, the phone conference system PCS stops capturing the voice entirely or changes to voice capturing of other participants.

After having stopped the voice capturing entirely, the phone conference system PCS reads the stored audio files from the voice database DB1 and sends them to the post processor PP, as indicated in action 6A17.

After the telephone conference has ended, the net meeting system NMS sends the original document to the post processor PP. Together with the document, the net meeting system NMS sends to the post processor PP time data indicating at which location in the document the cursor was as a function of time. This is indicated in action 6A15.

Action 6A19 indicates that the post processor PP receives the original document from the net meeting system NMS and the audio files from the phone conference system PCS.

After having received the original document and the audio files, the post processor PP transforms the audio files into voice clips and associates them with locations in the document to render a processed document. This is done in action 6A21. In action 6A21, it is indicated that this association is based on the time data indicating where the cursor was in the document at which moment in time, and on when the respective audio files were captured by the phone conference system PCS. The post processor PP stores the document, together with the audio files associated therewith, in the post processor database DB2. Of course, a copy of that processed document can be sent back to the computer arrangement C5 of the convener. However, it can also remain in the post processor database DB2 only. As a further alternative, it can be stored in any other memory.

Instead of using the time data as indicated in action 6A21, the post processor PP can also use other information to associate the audio files to the correct location in the original document, e.g., the information as to the place where the cursor was located in the document at the time that the voice capturing process started as instructed by the convener.
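A minimal sketch of the association performed in action 6A21 is given below. It assumes, for illustration only, that the cursor trace is available as a time-sorted list of (time, location) pairs and that each audio file carries the time at which its recording started; neither data format is prescribed by the application:

```python
import bisect

def associate_clips(cursor_trace, audio_files):
    """Associate each captured audio file with the cursor location that was current
    when its recording started (illustrative sketch of action 6A21).

    cursor_trace: list of (time, location) pairs, sorted by time
    audio_files:  list of dicts with at least 'start_time', 'participant' and 'audio'
    """
    times = [t for t, _ in cursor_trace]
    clips = []
    for f in audio_files:
        # Find the last cursor position registered at or before the recording start.
        i = bisect.bisect_right(times, f["start_time"]) - 1
        location = cursor_trace[i][1] if i >= 0 else cursor_trace[0][1]
        clips.append({
            "location": location,
            "participant": f["participant"],
            "start_time": f["start_time"],
            "audio": f["audio"],
        })
    return clips
```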

Preferably, arrangements may be made for the participants to get access to the processed document as stored in the post processor database DB2.

Embodiment 2

In the second embodiment, the convener is still in full control of the document review process. However, in the second embodiment, the post processor PP is not used. Figure 6B shows a flow chart that illustrates the second embodiment.

The actions 6B1, 6B3 and 6B5, respectively, are the same as the actions 6A1, 6A3 and 6A5, respectively.

The actions 6B7, 6B9 and 6B11, respectively, are the same as the actions 6A7, 6A9 and 6A11, respectively.

Action 6B13 is the same as action 6A13.

The actions from figure 6B that are equal to actions in figure 6A are not explained again.

Action 6B17 differs from action 6A17 in the sense that the phone conference system PCS does not send the audio files as captured and stored in the voice database DB1 to the post processor PP but to the net meeting system NMS. In action 6B15, the net meeting system NMS receives the audio files from the phone conference system PCS.

After having received these audio files from the phone conference system PCS, the net meeting system NMS transforms the received audio files into voice clips and associates them with locations in the document based on the time data, as shown in action 6B19. Action 6B19 is equal to action 6A21, except that this action is now performed by the net meeting system NMS instead of by the post processor PP. Action 6B19 will, thus, not be explained in detail again, but it should be understood that it comprises alternatives as explained with reference to action 6A21. After the transformation, the net meeting system NMS produces a processed document in which voice clips are added to the original document. This processed document is stored in the net meeting system database DB3 (or any other database).

In an embodiment of figure 6B, the phone conference system PCS may be programmed to send an audio file to the net meeting system NMS each time that the phone conference system PCS has recorded a speech block as shown in figure 4. Then, the net meeting system NMS is programmed to associate the received speech block, as a voice clip, with the location where the cursor is in the document at the time of receiving the speech block. This has the advantage that, while the voice of the participant is captured by the phone conference system PCS, the convener may edit the document without running the risk that, during later processing, a voice clip can no longer be tied to a particular place in the document because the document has been modified during the meeting.
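As a sketch only (the handler name and data layout are assumptions made for this example), the per-block variant described above could be handled by tagging each received speech block with the cursor location that is current at the moment the block arrives:

```python
def on_speech_block_received(block, current_cursor_location, document_annotations):
    """Illustrative handler for the per-block variant described above: each received
    speech block is turned into a voice clip at the cursor location valid on receipt."""
    document_annotations.append({
        "location": current_cursor_location,   # where the cursor is right now
        "participant": block["participant"],
        "block_number": block["number"],
        "audio": block["audio"],
    })
```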

Embodiment 3

As an enhancement to the method described above, the following method may be considered. This is explained with reference to the embodiments 3 and 4. In the embodiments 3 and 4, instead of having the convener (or one of the participants) control when a participant's speech shall be captured by the phone conference system PCS, automatic voice recognition may be applied for starting the voice capturing process. This may be implemented by the embodiment shown in figure 3, where the recorders R1-R4 are, then, arranged to automatically detect voice on the respective input lines L1-L4. Then, the respective recorder R1-R4 will automatically start recording when it detects human speech. In this way, neither the convener, nor any of the participants needs to be concerned about the voice capturing process. This is done fully automatically, whenever one or more of the participants speak. For such speech detection, existing technology may be used. Speech detection is, e.g., used in the GSM TDMA Radio Access Network for the purpose of Discontinuous Transmission (DTX). Reference is made to the following standards: 3GPP TS 46.031 and 3GPP TS 46.032. However, other known (or still to be developed) speech detection mechanisms may be used within the scope of the present invention.
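The application leaves the speech detection mechanism open. Purely as an illustrative sketch of one conventional approach, a simple short-term energy threshold rather than the DTX detectors referenced above, automatic start and stop of recording could look as follows (the threshold value and data structures are assumptions made for the sketch):

```python
def frame_energy(samples):
    """Mean squared amplitude of one audio frame (a list of PCM sample values)."""
    return sum(s * s for s in samples) / max(len(samples), 1)

class AutoRecorder:
    """Sketch of a recorder R1-R4 that opens a new speech block SB(n) when speech is
    detected on its input line and closes the block again when silence returns.
    The energy threshold is an arbitrary value chosen for this sketch only."""

    def __init__(self, threshold=1.0e6):
        self.threshold = threshold
        self.blocks = []          # completed speech blocks SB(1), SB(2), ...
        self._current = None      # block currently being recorded, if any

    def feed(self, frame):
        """Process one incoming audio frame from the input line."""
        if frame_energy(frame) > self.threshold:
            if self._current is None:
                self._current = []                # speech detected: start a new block
            self._current.append(frame)
        elif self._current is not None:
            self.blocks.append(self._current)     # silence: close the current block
            self._current = None
```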

The recorders R1-R4 may send their respective outputs via individual lines to the voice database DB1, where the captured voices are stored independently from one another. Alternatively, the voices recorded by the recorders R1-R4 at the same time may be stored as a combined audio file in the voice database DB1. However, if this is done, it may be difficult to distinguish between the different contributions of the different participants at a certain moment in time.

Adjacent to each one of the lines L1-L4, there is shown an arrow with a solid line. These arrows with solid lines refer to the participant's speech that is transferred to the respective recorder R1-R4. Moreover, adjacent to each one of the lines L1-L4, there is shown an arrow with an interrupted line. The arrows with an interrupted line refer to all speech of all participants, excluding the speech of the participant connected to the line concerned, that is fed back to the receiver of the respective telephones T1-T5.

Figure 6C shows the functions of the different systems in this third embodiment.

In action 6C1, the net meeting system NMS receives instructions from the computer arrangement C5 of the convener to present an image of a document to the computer arrangements C1-C4 of the participants. Moreover, this document will be shown on the monitor of the computer arrangement C5 of the convener.

After that, the net meeting system NMS, as shown in action 6C3, will send an instruction to the phone conference system PCS to start an automatic voice capturing process.

Action 6C5 shows that the phone conference system PCS receives the instruction to start the voice capturing process and starts that process.

In action 6C7, the phone conference system PCS detects whether one or more participants start talking. This is done by the recorders R1-R4, as explained above. If one or more of the recorders R1-R4 detect the presence of voice on the respective input line L1-L4, the recorder R1-R4 starts the voice capturing. The captured voice from the one or more participants is stored by the phone conference system PCS in the voice database DB1 in the form of audio files.

The voice as detected and stored will typically have the form as shown in figure 4. I.e., the registered voice will be stored in the form of speech blocks SB(n) as a function of time. The phone conference system PCS (i.e., the phone conference processor 31) informs the net meeting system NMS that voice capturing for one or more lines L1-L4 has started. To that end the phone conference system PCS sends the following information to the net meeting system NMS: the line numbers of the lines L1-L4 from which speech is recorded; the number n of the speech block SB(n) concerned; and the starting time of the recording. This is indicated with reference 6C9 in figure 6C.

In action 6C11, the net meeting system NMS receives this information, i.e., the line number, the speech block number and the time data, from the phone conference system PCS. The net meeting system NMS uses this information to generate a so-called "correlation file". This correlation file contains this information, as well as an indication of the location of the cursor in the document under review at the time that the voice capturing was started. This correlation file is stored in the net meeting system database DB3.
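As an illustration only (the record layout is an assumption made for this sketch), a correlation file entry could combine the information received in action 6C11 with the cursor location registered by the net meeting system:

```python
def build_correlation_entry(line_number, block_number, start_time, cursor_trace):
    """Build one correlation-file entry (action 6C11): the information received from
    the phone conference system plus the cursor location current at the recording start.
    cursor_trace is assumed to be a time-sorted list of (time, location) pairs."""
    location = None
    for t, loc in cursor_trace:
        if t <= start_time:
            location = loc
        else:
            break
    return {
        "line": line_number,       # which of the lines L1-L4 the speech came from
        "block": block_number,     # n of the speech block SB(n)
        "start_time": start_time,
        "cursor_location": location,
    }
```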

In action 6C13, the phone conference system PCS stores the captured voice from the one or more participants in the form of audio files in the voice database DB1.

In action 6C15, the net meeting system NMS, as instructed by the convener of the telephone conference, sends an instruction to the phone conference system PCS to stop the voice capturing process.

In action 6C17, the phone conference system PCS receives the instruction to stop the voice capturing process and stops the process.

In action 6C19, the phone conference system PCS sends the audio files as registered in the voice database DB1 to the post processor PP.

In action 6C21, the net meeting system NMS sends to the post processor PP the original document, the time data as to where the cursor was in the document at which moment in time, as well as the correlation file generated in action 6C11.

In action 6C23, the post processor PP receives the audio files from the phone conference system PCS, as well as the original document, the time data and the correlation file from the net meeting system NMS.

In action 6C25, finally, the post processor PP transforms the audio files into voice clips and associates them with appropriate locations in the document based on the time data and the correlation file. The result is a post processed document that is stored in the post processor database DB2. The post processed document may be sent to, e.g., the computer arrangement C5 of the convener, or to any of the other computer arrangements C1-C4. Alternatively, or in addition to that, any of the participants and/or the convener may have access to the post processed document as stored in the post processor database DB2.

Embodiment 4

The fourth embodiment of the present invention is explained with reference to figure 6D. Like in the setup of the embodiment of figure 6C, the convener does not have full control over the capturing process. Moreover, in the embodiment of figure 6D, the post processor PP has no function. The functionality of the post processor PP as shown in the embodiment of figure 6C is taken care of by the net meeting system NMS.

The actions 6D1, 6D3, 6D11 and 6D15, respectively, are the same as the actions 6C1, 6C3, 6C11 and 6C15, respectively, as explained with reference to figure 6C.

Moreover, the actions 6D5, 6D7, 6D9, 6D13 and 6D17, respectively, are the same as the actions 6C5, 6C7, 6C9, 6C13 and 6C17, respectively, of the embodiment according to figure 6C.

Action 6D19 differs from action 6C19 in the sense that the audio files as stored in the voice database DB1 are not sent to the post processor PP, but to the net meeting system NMS. In action 6D21, the net meeting system NMS receives these audio files from the phone conference system PCS.

In action 6D23, the net meeting system transforms these received audio files into voice clips and associates these voice clips with the appropriate locations in the document based on the time data and the correlation file. This results in a processed document that is stored in the net meeting system database DB3. This processed document may be accessible to the convener and the other participants via the computer arrangements C1-C5 through one or more of the communication networks N1, N2.

The advantages of the embodiment according to figure 6D are similar to the advantages of the second embodiment, as explained with reference to figure 6B. These advantages will not be repeated here.

As in the embodiment according to figure 6B, the phone conference system PCS does not need to postpone sending audio files to the net meeting system NMS until the telephone conference has completely ended. On the contrary, it may be advantageous to send each speech block SB(n), or some of the speech blocks together, to the net meeting system NMS before the telephone conference has ended. That provides the net meeting system NMS with the option to associate the audio files, in the form of a voice clip, with the appropriate locations in the document while the cursor is still at the appropriate location in the document. This allows for changes to be made in the document by, e.g., the convener at the time of the telephone conference.

Output

After the telephone conference, a processed document will be available comprising the original text (and/or other information in the document) and audio information in the form of voice clips. An example of a section of such a document is shown in figure 7, where the voice clips are shown in the form of so-called call-outs. Besides the actual voice components, these call-outs may comprise an indication of the time a comment was made by one of the participants, the name of the participant who made the comment, etc. When such a section of a document is shown on a monitor of one of the computer arrangements C1-C5, the user of such computer arrangement C1-C5 is able to listen to the voice clip, e.g., by clicking or double-clicking with, e.g., his mouse when the cursor is located on such a call-out. After such clicking or double-clicking (or any other suitable action), the processor 1 of the computer arrangement will play back the content of the voice clip via the loudspeaker 2 (cf. figure 2).
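Purely as an illustrative sketch (the textual call-out format shown is an assumption; the application does not prescribe how a call-out is encoded or embedded), a processed section could be rendered with one call-out per associated voice clip:

```python
def insert_call_outs(section_text, clips):
    """Append a simple textual call-out for each voice clip associated with this
    section (illustrative only; a real system would embed playable audio objects)."""
    call_outs = [
        "[Voice clip - {p} at {t}: {ref}]".format(
            p=c["participant"], t=c["start_time"], ref=c["audio_ref"])
        for c in clips
    ]
    return section_text + "\n" + "\n".join(call_outs)
```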

The content of such a call-out may relate to the voice captured from one of the participants only. However, the content of such a call-out may also relate to a plurality of participants, as explained above. As a further alternative, a user of one of the computer arrangements C1-C5 may be provided with the option to select one or more of the voice clips of one or more of the participants, excluding his own voice clips.

In a further alternative, the users of the computer arrangements C1-C5 may be presented with the option to play back all contributions of all participants and the convener during a time span to be specified by the user.

Presenting these options to the users of the computer arrangements C1-C5 may be controlled by the post processor PP (embodiments according to figures 6A and 6C) or by the net meeting system NMS (embodiments according to figures 6B and 6D).

A further output of the system as explained above may be a graphical presentation of the attendance of the participants to the meeting. Such a graphical presentation is shown by way of example in figure 5. Figure 5 shows that several lines L1-L5 were connected to the phone conference system PCS during the telephone conference. It is observed that figure 5 shows five lines whereas figure 3 showed four lines. The fifth line L5 may, e.g., be the line connected to the telephone T5 of the convener, whose voice contributions can also be captured.

Figure 5 does not so much show when the participants or the convener were speaking; rather, it shows when there was a live connection between the phone conference system PCS and the respective telephones T1-T5. This is indicated with a solid line in figure 5. The time span chosen in figure 5 is ninety minutes. However, any other time span may be used instead. The diagram shown in figure 5 is, in one embodiment, produced by the phone conference system PCS, which stores data to that effect in a memory, which may be a portion of the voice database DB1. The data of that diagram may be accessed by all of the participants and the convener.

As can be seen in figure 5, participant 2 joined the telephone conference approximately five minutes after the start of the telephone conference and left the telephone conference approximately eight minutes prior to the closing of the telephone conference. Similar indications apply for the participants 3 and 5. This method requires that the meeting participants register for the telephone conference with a particular calling line, i.e., a telephone number used to call in to the telephone conference. The phone conference system PCS can then link the respective recorders R1-R4 to the respective telephones T1-T5 using the Calling Line Identification (CLI) associated with these telephones T1-T5.
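As a sketch under assumed data structures (a time-sorted list of connect and disconnect events per line is an assumption made for this example), the presence information of figure 5 could be derived from connection events logged by the phone conference system PCS:

```python
def presence_intervals(connection_events):
    """Derive per-line presence intervals from logged connect/disconnect events
    (illustrative sketch; the event format is an assumption).

    connection_events: time-sorted list of (time, line, 'connect' | 'disconnect') tuples.
    Returns {line: [(start, end), ...]} giving when a live connection existed.
    """
    open_since = {}
    intervals = {}
    for time, line, kind in connection_events:
        if kind == "connect":
            open_since[line] = time
        elif kind == "disconnect" and line in open_since:
            intervals.setdefault(line, []).append((open_since.pop(line), time))
    return intervals
```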

Based on the speech blocks SB(n) as stored by the phone conference system PCS in the voice database DB1 per participant, as well as for the convener, the phone conference system PCS, or any other system using the data, may produce the following statistics (a computational sketch is given after this list):

- average total speaking time per participant during the telephone conference;

- total speaking time per participant during the telephone conference;

- average duration of a speech block SB(n) per participant;

- an interruption rate, i.e., an indication of either how often and how persistently a particular participant interrupted other participants of the telephone conference, or how often a particular participant was interrupted by other participants.
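Purely as a computational sketch of the statistics listed above (the speech block format and the simple interruption criterion are assumptions made for illustration, not prescribed by the application):

```python
def speaking_statistics(speech_blocks):
    """Compute the per-participant statistics listed above from stored speech blocks.

    speech_blocks: list of dicts with 'participant', 'start' and 'end' times in seconds.
    """
    totals, counts = {}, {}
    for b in speech_blocks:
        d = b["end"] - b["start"]
        totals[b["participant"]] = totals.get(b["participant"], 0.0) + d
        counts[b["participant"]] = counts.get(b["participant"], 0) + 1

    # A simple interruption indication: a block that starts while a block of
    # another participant is still running counts as one interruption.
    interruptions = {}
    ordered = sorted(speech_blocks, key=lambda b: b["start"])
    for i, b in enumerate(ordered):
        if any(o["participant"] != b["participant"] and o["end"] > b["start"]
               for o in ordered[:i]):
            interruptions[b["participant"]] = interruptions.get(b["participant"], 0) + 1

    return {
        "total_speaking_time_per_participant": totals,
        "average_total_speaking_time": sum(totals.values()) / max(len(totals), 1),
        "average_voice_clip_duration": {p: totals[p] / counts[p] for p in totals},
        "interruption_rate": interruptions,
    }
```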

Of course, other statistical information may be derived from the information stored in the voice database DB1. These statistics may be used for research projects, e.g., to study effectiveness and human behaviour when using web conferencing tools.

The method of the present invention may be implemented by one or more processors of a processor system as controlled by a computer program. That computer program may be stored on a data carrier like a DVD, a CD-ROM, or the like.