Title:
VIRTUAL LINE-IN
Document Type and Number:
WIPO Patent Application WO/2019/222418
Kind Code:
A2
Abstract:
Example systems and techniques disclosed herein facilitate interoperability between different media playback systems referred to herein as a virtual line-in (VLI) media playback system and a native playback system. When a VLI session is created by a VLI sender, a first native playback device can join a VLI group as a VLI receiver. As a VLI receiver, the first native playback device receives audio content and playback commands from the VLI sender to facilitate synchronous playback with other VLI receivers. At the same time, this native playback device can concurrently operate as a native domain group coordinator of a native domain synchrony group. As the native domain group coordinator, the native playback device translates VLI domain audio, control, and timing signals into the native domain and distributes such signals to native domain group members. In this way, the native domain group members can synchronize their playback with the VLI group.

Inventors:
COBURN IV (US)
KOTOWSKI CHRIS (US)
FARUQUE ARVIN (US)
Application Number:
PCT/US2019/032514
Publication Date:
November 21, 2019
Filing Date:
May 15, 2019
Assignee:
SONOS INC (US)
International Classes:
H04N21/436
Foreign References:
US8234395B2 (2012-07-31)
Attorney, Agent or Firm:
URBAN, Benjamin, M. (US)
Claims:
CLAIMS

We claim:

1. A method, comprising:

receiving, by a first playback device (102) of a first playback system via a network interface from a first device (104) of a second playback system, a second playback system audio stream encoded in a second encoding format for playback by the second media playback system;

generating, by the first playback device (102), based on the received second playback system audio stream, a first playback system audio stream encoded in a first encoding format for playback by the first playback system;

decoding, by the first playback device (102), the generated first playback system audio stream; and

playing back, by the first playback device (102), the decoded first playback system audio stream.

2. The method of claim 1, further comprising:

receiving, by the first playback device (102) from a timing server (869) of the second playback system, a second playback system clock signal in a second format suitable for processing by devices of the second playback system, wherein playback devices (102) of the first playback system are configured for exchanging control and timing information in a first format that is different from the second format;

deriving, by the first playback device (102), a virtual clock signal synchronized to the received second playback system clock signal based on:

a device clock of the first playback device (102); and

the received second playback system clock signal; and

providing, by the first playback device (102) to a first playback system timing server (865e), the derived virtual clock signal for use by the first playback system timing server (865) as a first playback system clock signal;

wherein the first playback device (102) decodes the first playback system audio stream at a rate set by the first playback system clock signal.

3. The method of claim 2, wherein at least one of:

the first playback system timing server (865) is comprised by the first playback device (102); and

the timing server (869) of the second playback system is comprised by one of the first playback device (102) of the first playback system and the first device (104) of the second playback system.

4. The method of any preceding claim, wherein the first encoding format and the second encoding format are the same.

5. The method of any preceding claim, wherein the first encoding format and the second encoding format are different.

6. The method of any of claims 2 to 5, further comprising:

distributing, by the first playback device (102) to one or more members of a first playback system synchrony group that exchanges timing and control information according to the first format:

the first playback system clock signal; and

the first playback system audio stream; and

receiving, from the first playback device (102) by a second playback device (102) of the first playback system that is a member of the first playback system synchrony group:

the first playback system clock signal; and

the first playback system audio stream; and

decoding, by the second playback device (102), the first playback system audio stream at a rate set by the first playback system clock signal; and

playing back the first playback system audio stream in synchrony with the first playback device (102).

7. The method of any of claims 2 to 6, wherein the virtual clock signal is derived by the first playback device (102) by:

generating a skew signal representing error between the device clock of the first playback device (102) and the second playback system clock signal; and

generating the virtual clock signal based on the generated skew signal and the second playback system clock signal.

8. The method of any of claims 2 to 7, wherein decoding the first playback system audio stream at a rate set by the first playback system clock signal comprises at least one of:

disciplining a digital-to-audio converter clock (860) to the first playback system clock signal; and

inserting or removing samples from the first playback system audio stream using the first playback system clock signal to cause a playback rate of the first playback device (102) to match the second playback system clock signal.

9. The method of any of claims 2 to 8, further comprising, in response to the second playback system initiating playback, switching from providing, by the first playback device (102) to the first playback system timing server (865), the device clock of the first playback device (102) to providing the derived virtual clock signal.

10. The method of claim 9 in combination with claim 5, further comprising switching from providing, to the one or more members of the first playback system synchrony group, the device clock of the first playback device (102) to providing the derived virtual clock signal.

11. The method of any of claims 2 to 10, further comprising:

receiving, by a second playback device (102) of the first playback system, via a network interface from a second device (104) of the second playback system, a second playback system audio stream encoded in a second encoding format for playback by the second media playback system;

generating, by the second playback device (102), based on the received second playback system audio stream, a first playback system audio stream encoded in a first encoding format for playback by the first playback system;

receiving, by the second playback device (102) from a timing server (869) of the second playback system, a second playback system clock signal;

deriving, by the second playback device (102), a virtual clock signal synchronized to the received second playback system clock signal based on:

the device clock of the second playback device (102); and

the received second playback system clock signal;

providing, by the second playback device (102) to a first playback system timing server (865e), the derived virtual clock signal for use by the first playback system timing server (865) as a first playback system clock signal;

decoding, by the second playback device (102), the generated first playback system audio stream at a rate set by the first playback system clock signal;

playing back, by the second playback device (102), the decoded first playback system audio stream in synchrony with the first playback device (102) and optionally the first device (104) of the second playback system.

12. The method of any of claims 2 to 11, further comprising:

receiving, by a second device of the second playback system via a network interface:

the second media playback system audio stream; and

the second playback system clock signal;

decoding, by the second device, the second media playback system audio stream at a rate set by the second playback system clock signal; and

playing back, by the second device, the decoded audio stream in synchrony with the first playback device (102).

13. The method of any of claims 2 to 12, further comprising:

receiving, by the first device (104) of the second playback system from the timing server (869) of the second playback system, the second playback system clock signal;

decoding, by the first device (104), the second media playback system audio stream at a rate set by the second playback system clock signal; and

playing back, by the first device, the decoded audio stream in synchrony with the first playback device (102).

14. The method of any preceding claim, wherein the first media playback system comprises one or more playback devices which are independently clocked.

15. The method of any preceding claim, wherein the playback devices of the first media playback system are configured to perform synchronous playback by:

sending, by a particular playback device that is a group coordinator of a synchrony group of one or more playback devices of the first playback system to other playback devices in the group, information regarding a device clock of the particular playback device;

determining, by the other playback devices, respective timing differentials between the particular playback device clock and their respective clocks; and

adjusting, by the other playback devices based on the determined timing differentials, one of:

respective clocks of the other playback devices; and

sampling rates of audio playback by the other playback devices.

16. The method of claim 1, further comprising:

receiving, by the first playback device (102), metadata associated with the audio content from one of:

the second playback system audio stream; and

a second playback system data stream;

sending, by the first playback device (102) via the network interface to one or more first playback system control devices, first playback system signaling representing the received metadata in a first format, wherein playback devices (102) of the first playback system are configured for exchanging control and timing information in the first format;

receiving, by the one or more first playback system control devices via respective network interfaces, the first playback system signaling representing the received metadata; and

displaying, by the one or more first playback system control devices, in respective first playback system control applications, the received metadata.

17. The method of any preceding claim, further comprising:

while playing back, by the first playback device (102), a first media item of the decoded first playback system audio stream, receiving a first playback system playback command via one of:

a network interface of the first playback device (102) from a first playback system control device; and

a user interface of the first playback device (102); and

converting, by the first playback device (102), the first playback system playback command to a corresponding second playback system playback command in a second format suitable for processing by devices of the second playback system; and

sending, by the first playback device (102) via a network interface to the first device (104), the converted second playback system playback command; and

receiving, by the first device (104), the converted second playback system playback command; and

in response to receiving the converted second playback system playback command, causing, by the first device (104) via a network interface, a synchrony group comprising at least the first device (104) and the first playback device (102) to perform the converted second playback system playback command.

18. The method of claim 17, wherein:

the first playback system playback command is a particular command that causes the synchrony group to play back a second media item; and

causing the synchrony group to perform the converted second playback system playback command comprises:

streaming, via the network interface of the first device (104), the second media item from the one or more servers; and

sending, to the first playback device (102) via the network interface of the first device (104), a second playback system audio stream corresponding to the second media item;

receiving, by a network interface of the first playback device (102), the second playback system audio stream;

converting, by the first playback device (102), the second playback system audio stream to a first playback system audio stream;

decoding, by the first playback device (102), the first playback system audio stream; and

playing back, by the first playback device (102), the decoded first playback system audio stream.

19. The method of claim 17, wherein:

the first playback system playback command is a particular command that causes the synchrony group to adjust volume; and

causing the synchrony group to carry out the converted second playback system playback command comprises:

sending, by the first device (104) via a network interface to one or more members of the second playback system synchrony group, the second playback system playback command;

receiving, by a network interface of the first playback device (102), the second playback system playback command; and

adjusting, by the first playback device (102), volume according to the received second playback system playback command.

20. The method of any preceding claim, further comprising:

converting, by the first playback device (102), the second playback system playback command to a first playback system playback command in the first format; and

distributing, by the first playback device (102), the converted first playback system playback command to at least the second playback device (102) of the first playback system, wherein the first playback device (102) is a group coordinator of a synchrony group comprising the first and second playback devices (102).

21. The method of any preceding claim, further comprising:

receiving, by the second playback device (102) of the first playback system via a network interface, the second playback system audio stream;

generating, by the second playback device (102), based on the second playback system audio stream, a first playback system audio stream;

decoding, by the second playback device (102), the first playback system audio stream; and

playing back, by the second playback device (102), the decoded first playback system audio stream in synchrony with the first playback device (102).

22. The method of claim 20, further comprising:

sending, by the first playback device (102) via a network interface to one or more other playback devices (102) of the first playback system that are members of a synchrony group comprising the first playback device (102), the first playback system audio stream;

receiving, from the first playback device (102) by at least a second playback device (102) of the first playback system via a network interface, the first playback system audio stream;

decoding, by the second playback device (102), the first playback system audio stream; and

playing back, by the second playback device (102), the decoded first playback system audio stream in synchrony with the first playback device (102).

23. The method of one of claims 1 to 19, further comprising:

distributing, by the first playback device (102), the generated first playback system audio stream to at least a second playback device (102) of the first playback system;

receiving, by a network interface of a second playback device (102) of the first playback system, the first playback system audio stream;

distributing, by the second playback device (102), the received first playback system audio stream to at least a third playback device (102) of the first playback system, wherein the third playback device (102) and the second playback device (102) are members of a synchrony group of which the second playback device (102) is the group coordinator;

receiving, via a network interface of the third playback device, the first playback system audio stream from the second playback device (102);

decoding, by the second playback device (102), the first playback system audio stream received from the first playback device (102); and

decoding, by the third playback device (102), the first playback system audio stream received from the second playback device (102); and

playing back, by the second and third playback devices (102), the respective decoded first playback system audio stream in synchrony.

24. The method of any preceding claim, further comprising:

receiving, by a second device of the second playback system via a network interface, the second media playback system audio stream; and

decoding, by the second device, the second media playback system audio stream; and

playing back, by the second device, the decoded audio stream in synchrony with the first playback device (102).

25. The method of any preceding claim, further comprising:

decoding, by the first device, the second playback system audio stream; and

playing back, by the first device (104), the decoded second playback system audio stream in synchrony with the first playback device (102).

26. The method of claim 17 alone or in combination with any of claims 1 to 25, wherein: the input interface of the first device (104) comprises a touch-sensitive graphical display, and receiving the command to play back audio content comprises:

displaying, on the touch-sensitive graphical display, a user interface of a second playback system control application including controls to select audio content for playback and controls to select a playback device; and

receiving, via the user interface of the second playback system control application, input data representing selection of the audio content for playback and selection of the first playback device,

wherein receiving the selection of the audio content for playback on the first device configures the first device as the second playback system source of the second playback system group; and

wherein receiving selection of the first playback device causes the first device to configure the first playback device to receive the audio stream from the first device.

27. The method of claim 25 alone or in combination with any of claims 1 to 25, wherein: the input interface of the first device (104) comprises a microphone array, wherein receiving the command to play back audio content comprises:

receiving, via the microphone array of the first device (104), a voice command to play back the audio content on the first device (104) and the first playback device (102); and

in response to receiving the voice command:

configuring the first device (104) to receive the audio content from a remote server; and

configuring the first playback device (102) to receive the audio content from the first device (104).

28. The method of any preceding claim, further comprising:

detecting, by the first playback device (102), that a second playback system device group ID of the first playback device (102) matches a second playback system group ID of a second playback device (102) of the first playback system; and

in response, automatically forming a new first playback system synchrony group with the second playback device (102) by updating one or more state variables stored on the first playback device (102) and the second playback device (102) to indicate that:

one of the first and second playback devices (102) is a group coordinator of the new first playback system synchrony group, and the other of the first and second playback devices (102) is a group member of the new first playback system synchrony group.

29. The method of any preceding claim, further comprising:

receiving, by the first playback device (102), a second playback system mute command from the first device (104);

translating, by the first playback device (102), the second playback system mute command to one or more corresponding first playback system mute commands comprising one or more of:

enabling a first playback system mute state on the first playback device (102);

storing a current first playback system volume level of the first playback device (102); and

setting the first playback system volume level to zero; and

in response to receiving the second playback system mute command, performing the one or more first playback system mute commands by the first playback device (102).

30. The method of claim 1, further comprising:

receiving, via an input interface of a first device (104) of a second playback system, a user input to perform a mute operation corresponding to a second playback system mute command to set volume level to a pre-defined inaudible volume level; and

sending, by the first device (104) via the network interface to the first playback device (102), the second playback system mute command.

31. The method of claim 30, further comprising:

while a second playback system synchrony group is muted, receiving, by the first playback device (102) via an input interface of the first playback device (102), a first playback system un-mute command; and

in response to receiving the first playback system un-mute command:

translating the first playback system un-mute command to a second playback system volume adjust command indicating the stored first playback system volume level of the first playback device (102),

sending the second playback system volume adjust command to the first device (104) of the second playback system synchrony group,

disabling the first playback system mute state, and adjusting a volume level of the first playback device to the stored first playback system volume level.

32. The method of claim 31, further comprising:

in response to receiving the first playback system un-mute command, sending, by the first playback device to one or more second playback devices of the first playback system synchrony group, one or more commands to:

disable the first playback system mute state on the one or more second playback devices; and

adjust volume level of the one or more second playback devices to the stored first playback system volume level.

33. The method of claim 31 or 32, wherein receiving, via the input interface of the first playback device, the first playback system un-mute command comprises one of:

receiving, via the network interface of the first playback device from a first playback system control application, one or more messages representing the first playback system un mute command; and

receiving, via physical controls on the first playback device, user input representing the first playback system un-mute command.

34. The method of any of claims 29 to 33, further comprising:

while the second playback system synchrony group is muted, receiving, via the network interface of the first playback device from the first device (104) of the second playback system, a second playback system volume adjust command; and

in response:

disabling, by the first playback device (102), the first playback system mute state, and

adjusting, by the first playback device (102), a volume level of the first playback device (102) to a volume level indicated by the second playback system volume adjust command.

35. The method of claim 34, further comprising:

in response to receiving the second playback system volume adjust command, the first playback device (102): sends, to one or more second playback devices of the first playback system synchrony group, one or more commands to:

disable the first playback system mute state on the one or more second playback devices and

adjust volume level of the one or more second playback devices to the volume level indicated by the second playback system volume adjust command.

36. The method of any of claims 29 to 35, further comprising:

while the second playback system group is muted, receiving, via an input interface of the first playback device, a first playback system volume adjust command; and

in response to receiving the first playback system volume adjust command, the first playback device (102):

translates the first playback system volume adjust command to a second playback system volume adjust command,

sends the second playback system volume adjust command to the first device (104) of the second playback system group,

disables the first playback system mute state, and

adjusts volume level of the first playback device to a volume level indicated by the first playback system volume adjust command.

37. The method of claim 36, further comprising:

in response to receiving the first playback system volume adjust command, sending, by the first playback device (102) to one or more second playback devices of the first playback system synchrony group, one or more commands to:

disable the first playback system mute state on the one or more second playback devices and

adjust volume level of the one or more second playback devices to the volume level indicated by the first playback system volume adjust command.

38. The method of claim 36 or 37, wherein receiving, via the input interface of the first playback device (102), the first playback system volume adjust command comprises one of:

receiving, via the network interface of the first playback device (102) from a first playback system control application, one or more messages representing the first playback system volume adjust command; and

receiving, via physical controls on the first playback device (102), user input representing selection of a volume up or volume down control.

39. The method of any preceding claim, further comprising:

while the first playback system mute state is enabled on the first playback device (102), receiving, by the first playback device (102), a command to start a playback session with the first device (104) of the second playback system, wherein a volume level of the first device (104) is other than the pre-defined inaudible volume level; and

in response to receiving the command, the first playback device (102):

joins a second playback system group with the first device (104);

disables the first playback system mute state; and

sets the first playback system volume level to the volume level of the first device of the second playback system.

40. The method of any preceding claim, further comprising:

sending, via the network interface of the first playback device (102) to one or more first playback system control applications on one or more first playback system control devices, status information indicating the first playback system volume level,

wherein the status information causes respective volume sliders of the one or more first playback system control applications to indicate a zero volume level.

41. The method of any preceding claim, wherein the first device (104) comprises the one or more servers.

42. A system comprising the first device (104) and the first playback device (102) configured to perform the method of any preceding claim.

43. A first playback device configured to perform the method of any of claims 1 to 5, 7 to 10, and 14 to 15.

44. A system comprising:

a first playback device according to claim 43; and

the first device (104) of the second playback system configured for: receiving, via an input interface of the first device (104), a command to play back audio content; and

sending, by the first device (104) to one or more members of a second media playback system playback group via a network interface of the first device (104), the second playback system audio stream representing the audio content, the second media playback system playback group comprising the first playback device of the first playback system.

45. The system of claim 44, wherein the first device of the second playback system is further configured for streaming the audio content from one or more servers.

Description:
Virtual Line-in

CROSS REFERENCE TO RELATED APPLICATIONS

[1] This application claims priority to U.S. Application No. 16/119,642 filed on August 31, 2018, which is a non-provisional of U.S. Application No. 62/672,020 filed on May 15, 2018, the disclosures of which are incorporated herein by reference in their entirety.

FIELD OF THE DISCLOSURE

[2] The disclosure is related to consumer goods and, more particularly, to methods, systems, products, features, services, and other elements directed to media playback or some aspect thereof.

BACKGROUND

[3] Options for accessing and listening to digital audio in an out-loud setting were limited until 2003, when SONOS, Inc. filed for one of its first patent applications, entitled "Method for Synchronizing Audio Playback between Multiple Networked Devices," and began offering a media playback system for sale in 2005. The Sonos Wireless HiFi System enables people to experience music from many sources via one or more networked playback devices. Through a software control application installed on a smartphone, tablet, or computer, one can play what he or she wants in any room that has a networked playback device. Additionally, using the controller, for example, different songs can be streamed to each room with a playback device, rooms can be grouped together for synchronous playback, or the same song can be heard in all rooms synchronously.

[4] Given the ever growing interest in digital media, there continues to be a need to develop consumer-accessible technologies to further enhance the listening experience.

BRIEF DESCRIPTION OF THE DRAWINGS

[5] Features, aspects, and advantages of the presently disclosed technology may be better understood with regard to the following description, appended claims, and accompanying drawings where:

[6] Figure 1 shows an example media playback system configuration in which certain embodiments may be practiced;

[7] Figure 2 shows a functional block diagram of an example playback device in accordance with aspects of the disclosure;

[8] Figure 3 shows a functional block diagram of an example control device in accordance with aspects of the disclosure;

[9] Figure 4 shows a functional block diagram of an example virtual line-in device in accordance with aspects of the disclosure.

[10] Figures 5A and 5B are example native control interfaces in accordance with aspects of the disclosure;

[11] Figure 6 is an example virtual line-in (VLI) control interface in accordance with aspects of the disclosure;

[12] Figures 7A, 7B, 7C, and 7D are example functional block diagrams corresponding to example virtual line-in configurations in accordance with aspects of the disclosure;

[13] Figures 8A, 8B, and 8C are example functional block diagrams corresponding to example virtual line-in timing configurations in accordance with aspects of the disclosure; and

[14] Figures 9A and 9B are example functional block diagrams corresponding to example virtual line-in timing configurations in accordance with aspects of the disclosure.

[15] Figure 10 shows a technique to distribute control and audio signals with VLI and native domain groups, according to example implementations.

[16] Figure 11 shows a technique to distribute timing signals with VLI and native domain groups, according to example implementations.

[17] Figure 12 shows a technique to facilitate interoperability between different implementations of mute in the native and VLI domains.

[18] Figure 13 shows a technique to facilitate interoperability between different implementations of playback operations in the native and VLI domains.

[19] The drawings are for the purpose of illustrating example embodiments, but it is understood that the inventions are not limited to the arrangements and instrumentality shown in the drawings.

DETAILED DESCRIPTION

I. Overview

[20] Example systems and techniques disclosed herein facilitate interoperability between different media playback systems. While playback devices of different media playback systems may implement some similar features, such playback devices might not be compatible with one another. For instance, while playback devices of respective media playback systems may each support forming playback groups with playback devices of the same media playback system, playback devices of different media playback systems may be unable to form playback groups with one another. Such incompatibility may be inherent as different media playback systems may implement similar features in different ways. Yet, some users may benefit from interoperability between devices of different media playback systems.

[21] Within examples, to enable interoperability between a first media playback system and a second media playback system, the first media playback system may implement a virtual line-in to interface with the second media playback system. The second media playback system, which may be a third-party media playback system (i.e., a media playback system produced by a different manufacturer than the first media playback system, which may be referred to as a first-party media playback system), may make certain information available via an application programming interface. Such information may include control and timing signals, as well as the content itself. However, in various examples disclosed herein, such control and timing signals are provided by the second media playback system in forms used by the second media playback system, which are different from forms utilized by the first media playback system. Forms used by the second media playback system are referred to herein as VLI domain (e.g., VLI timing domain, VLI control domain, etc.), while forms used by the first media playback system are referred to herein as native domain (e.g., native timing domain, native control domain, etc.).

[22] Yet, in examples in which information in this VLI domain is the only information made available by the second media playback system, the first media playback system is dependent upon this VLI domain information to enable interoperability. That is, the first media playback system cannot force the second media playback system to operate in the native domain of the first media playback system, but must instead adapt to the VLI domain of the second media playback system. In one aspect, the first media playback system may treat the VLI domain information provided by the second media playback system as an audio source, or virtual line-in. In various embodiments described below, the VLI domain information may include information from an analog or digital audio signal similar to that received via a more traditional line-in interface, but further includes, e.g., timing information for synchronizing playback and/or other operations between the first and the second playback systems and/or control information.

[23] For example, to synchronize playback among multiple playback devices in a synchrony group, the first media playback system may designate a particular playback device of the synchrony group as a group coordinator. The group coordinator distributes native domain playback and timing information to the other playback devices of the synchrony group (i.e., the group members). The group members match their respective native domain clocks (either physically or logically) to the clock of the group coordinator to facilitate synchronous playback.
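
By way of illustration only, the following sketch (in Python, with hypothetical names; it is not an implementation of any particular native timing domain) shows one way a group member might logically match its clock to the group coordinator's clock using a round-trip offset estimate:

    import time

    class GroupMemberClock:
        """Logical clock of a native-domain group member (illustrative only)."""

        def __init__(self):
            self.offset = 0.0  # estimated (coordinator - local) offset, in seconds

        def update_offset(self, t_request, t_response, coordinator_time):
            # Assume the coordinator read its clock midway through the round trip,
            # as in SNTP-style offset estimation; a real system would also filter
            # successive estimates rather than trust a single raw sample.
            midpoint = (t_request + t_response) / 2.0
            self.offset = coordinator_time - midpoint

        def now(self):
            # Local device clock corrected by the estimated offset ("logical" matching).
            return time.monotonic() + self.offset

    # Example exchange: the member timestamps a request and the coordinator
    # replies with its current clock value (example number shown here).
    member = GroupMemberClock()
    t_request = time.monotonic()
    coordinator_time = 1234.567
    t_response = time.monotonic()
    member.update_offset(t_request, t_response, coordinator_time)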

[24] The second media playback system may also have a feature of synchronous playback among multiple playback devices in a synchrony group, but implement this feature in a different way. For instance, the VLI timing domain of the second media playback system might use a different protocol (such as real-time transport protocol (RTP) or precision time protocol (PTP)) than the native timing domain of the first media playback system (which might use simplified network time protocol (SNTP), among others). As such, the second media playback system might only make available timing information in a domain that is different from the timing domain used by the first media playback system. These distinct timing domains are inherently not synchronized with one another.

[25] In accordance with embodiments of the disclosure, to facilitate interoperability, one or more playback devices of the first media playback system may adopt the VLI timing domain of the second media playback system in a way that enables the creation of a VLI synchrony group. For instance, if the second media playback system uses PTP, these playback devices of the first media playback system may join the hierarchical master-slave architecture of that protocol as implemented within the second media playback system by implementing PTP servers and/or clients (i.e., VLI domain timing servers/clients). For example, in some cases, these playback devices of the first media playback system may each implement a PTP server and receive timing information from a PTP server on a PTP master within the second media playback system. Alternatively, a particular playback device of the first media playback system may operate as the PTP master (or more generally, "a timing master") of a VLI group. Yet, to retain compatibility with other playback devices of the first media playback system, these playback devices concurrently maintain one or more clocks in the native timing domain, which may be matched (either physically or logically) to the VLI time domain.
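
For illustration, the sketch below (Python, hypothetical names; not an actual PTP or product implementation) shows how a playback device of the first media playback system might derive a virtual native-domain clock from its own device clock and a clock signal received in the VLI timing domain, by tracking the skew between the two:

    class VirtualClock:
        """Derives a native-domain clock value from the device clock plus the
        estimated skew against the VLI-domain clock (illustrative only)."""

        def __init__(self, smoothing=0.1):
            self.skew = 0.0          # estimated error: VLI clock minus device clock (s)
            self.smoothing = smoothing

        def observe(self, device_time, vli_time):
            # Skew signal: error between the local device clock and the VLI clock,
            # low-pass filtered so jitter in individual observations is damped.
            error = vli_time - device_time
            self.skew += self.smoothing * (error - self.skew)

        def read(self, device_time):
            # Virtual clock value; a value like this could be provided to a
            # native-domain timing server as the native-domain clock signal.
            return device_time + self.skew

The point of the sketch is only that the device keeps its own free-running clock and layers a continuously updated correction on top of it, so the native timing domain can track the VLI timing domain without either domain forcing the other to change.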

[26] Within examples, each media playback system may utilize one or more control interfaces for controlling playback and other functions of the playback devices in the first and second respective systems. For instance, the first media playback system may implement playback control via a native controller application or "native controller" (i.e., a software controller) running on a smartphone, tablet, or PC, which exchanges native domain control signaling with playback devices of the first media playback system via a network. Likewise, a VLI controller application or "VLI controller" (i.e., a software controller) on one or more devices of the second media playback system may control playback by the devices of the second media playback system. Each controller may present a graphical user interface on one or more displays with various playback controls such as transport controls (e.g., play, pause, and skip forward and backward), volume controls, and media library controls, among other examples to facilitate control of playback by the various devices of each system. Moreover, example playback devices of the first and/or second media playback systems may include their own respective control interfaces (e.g., a playback device may include physical transport controls on its housing).

[27] To create a VLI group, a user may select one or more playback devices of the first and/or second media playback systems using a VLI control interface on a given VLI device. Example VLI control interfaces include a graphical user interface (GUI) of a VLI controller displayed on a VLI device or a voice user interface (VUI) implemented using microphones on a VLI device. Upon selecting one or more playback devices of the first and/or second media playback systems, a VLI session is created. The given VLI device that created the VLI group and initiated the VLI session becomes the "VLI sender," which provides audio and control information to playback devices of the first media playback system which are in the VLI group. These playback devices are referred to as VLI receivers. While operating as the "source of truth" within the VLI group for audio and control information, the VLI sender is not necessarily the "source of truth" for timing information within the VLI group.

[28] Generally, during a VLI session (when the VLI receiver(s) are treating the VLI sender as a virtual line-in), VLI domain control commands received via a control interface of the VLI sender are passed through the virtual line-in and transmitted to each VLI receiver of the VLI group. In addition, audio is sourced by the VLI sender (either from storage on the device or from a network location) and distributed to each of the VLI receivers.

[29] The playback and control devices of the first media playback system retain their native domain audio and control signaling concurrently with the VLI signaling. In various embodiments, maintaining native domain signaling enables devices of the first media playback system that are not within the VLI group to interact with devices of the first media playback system that are within the VLI group. In some instances, concurrent VLI and native domain signaling enables compatibility of VLI receivers with certain devices of the first media playback system that might not support becoming VLI receivers or handling VLI signaling, such as legacy devices, which might have hardware incompatibilities.
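
As a rough, non-authoritative sketch (Python, with hypothetical interfaces), a VLI receiver that also serves as a native-domain group coordinator might forward VLI-domain audio and control to its native group members along these lines:

    class VliReceiver:
        """VLI receiver doubling as a native-domain group coordinator (sketch only)."""

        def __init__(self, native_members, translate_audio, translate_command):
            self.native_members = native_members        # native-domain group members
            self.translate_audio = translate_audio      # VLI frame -> native frame
            self.translate_command = translate_command  # VLI command -> native command

        def on_vli_audio(self, vli_frame):
            native_frame = self.translate_audio(vli_frame)
            self.play_locally(native_frame)
            for member in self.native_members:
                member.send_audio(native_frame)          # native-domain distribution

        def on_vli_command(self, vli_command):
            native_command = self.translate_command(vli_command)
            self.apply_locally(native_command)
            for member in self.native_members:
                member.send_command(native_command)

        def play_locally(self, frame):
            pass  # hand the frame to the local audio pipeline

        def apply_locally(self, command):
            pass  # apply the control command (volume, transport, etc.) locally

In this sketch the VLI-domain signals terminate at the receiver, and only native-domain signals travel onward, which is what allows legacy or otherwise VLI-incapable native devices to participate.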

[30] In some examples, limited two-way control originating within the native control domain, which may be referred to as "backchannel control," is possible. The extent of this backchannel control may be generally restricted or limited by the control functions available over the API of the second media playback system. For instance, the second media playback system may publish API functions corresponding to transport controls, such as play, pause, skip forward, skip backward, etc. When a native domain transport control command is received via a native domain control interface on the VLI receiver, the VLI receiver converts the native domain transport control command to a VLI domain transport control and transmits the VLI domain transport control back to the VLI sender. If a native domain transport control command is received via a native domain control interface on a native domain control device or group member, the native domain transport control command is transmitted to the VLI receiver, which converts the native domain transport control command to a VLI domain transport control and transmits the VLI domain transport control back to the VLI sender. In this manner, native domain control may be at least partially retained during a VLI session.
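
The backchannel translation can be pictured as a lookup from native-domain commands to whatever command set the second media playback system actually publishes over its API; anything without a published equivalent simply cannot be relayed. The sketch below (Python, hypothetical command names) is illustrative only:

    # Hypothetical mapping from native-domain transport commands to VLI-domain
    # API calls; the real set is limited to the functions the second media
    # playback system exposes over its API.
    NATIVE_TO_VLI = {
        "play": "vli.play",
        "pause": "vli.pause",
        "skip_forward": "vli.skip_forward",
        "skip_backward": "vli.skip_backward",
    }

    def relay_backchannel(native_command, send_to_vli_sender):
        """Convert a native-domain command and transmit it to the VLI sender."""
        vli_command = NATIVE_TO_VLI.get(native_command)
        if vli_command is None:
            return False   # no published VLI equivalent; command cannot be relayed
        send_to_vli_sender(vli_command)
        return True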

[31] As noted above, example techniques may involve a virtual line-in. Each of these example implementations may be embodied as a method, a device configured to carry out the implementation, a system of devices configured to carry out the implementation, or a non-transitory computer-readable medium containing instructions that are executable by one or more processors to carry out the implementation, among other examples. It will be understood by one of ordinary skill in the art that this disclosure includes numerous other embodiments, including combinations of the example features described herein. Further, any example operation described as being performed by a given device to illustrate a technique may be performed by any suitable devices, including the devices described herein. Yet further, any device may cause another device to perform any of the operations described herein.

[32] While some examples described herein may refer to functions performed by given actors such as "users" and/or other entities, it should be understood that this description is for purposes of explanation only. The claims should not be interpreted to require action by any such example actor unless explicitly required by the language of the claims themselves.

[33] Moreover, some functions are described herein as being performed "based on" or "in response to" another element or function. "Based on" should be understood to mean that one element or function is related to another function or element. "In response to" should be understood to mean that one element or function is a necessary result of another function or element. For the sake of brevity, functions are generally described as being based on another function when a functional link exists; however, such disclosure should be understood as disclosing either type of functional relationship.

II. Example Operating Environment

[34] Figure 1 illustrates an example configuration of a system 100 in which one or more embodiments disclosed herein may be implemented. The system 100 as shown is associated with an example home environment having several rooms and spaces, such as for example, an office, a dining room, and a living room. Within these rooms and spaces, the system 100 includes two media playback systems, referred to herein as a native media playback system and a VLI media playback system. The native media playback system includes playback devices 102 (identified individually as playback devices 102a-102i), and control devices 103a and 103b (collectively "control devices 103"). The VLI media playback system includes VLI devices 104 (identified individually as VLI devices 104a-104e), each of which may include control and/or playback functionality. In some embodiments, one or more of the VLI devices, such as mobile and laptop devices 104d and 104e, can be configured to concurrently run software of a VLI controller and software for a native controller, which enables users to select between different control interfaces using the same device.

[35] Referring now to the native media playback system, the various playback devices 102 and other network devices, such as one or more of the control devices 103 configured to run the native API software, of the native media playback system may be coupled to one another via point-to-point connections and/or over other connections, which may be wired and/or wireless, via a LAN including a network router 106. For example, the playback device 102g (designated as "Left") may have a point-to-point connection with the playback device 102a (designated as "Right"). In one embodiment, the Left playback device 102g may communicate over the point-to-point connection with the Right playback device 102a. In a related embodiment, the Left playback device 102g may communicate with other network devices via the point-to-point connection and/or other connections via the LAN.

[36] The network router 106 may be coupled to one or more remote computing device(s) 105 via a wide area network (WAN) 107. In some embodiments, the remote computing device(s) 105 may be cloud servers. The remote computing device(s) 105 may be configured to interact with the system 100 in various ways. For example, the remote computing device(s) may be configured to facilitate streaming and controlling playback of media content, such as audio, in the home environment, perhaps as part of providing a streaming audio service via WAN 107. In some examples, the remote computing device(s) 105 may be representative of cloud servers from multiple services, perhaps operated by different entities.

[37] Further aspects relating to the different components of the example system 100 and how the different components may interact to provide a user with a media experience may be found in the following sections. While discussions herein may generally refer to the example system 100, technologies described herein are not limited to applications within, among other things, the home environment as shown in Figure 1. For instance, the technologies described herein may be useful in other home environment configurations comprising more or fewer of any of the playback and/or control devices of the native media playback system. In addition, the technologies described herein may be useful in other home environment configurations comprising more or fewer of any of the VLI devices 104 of the VLI media playback system. Additionally, the technologies described herein may be useful in environments where multi- zone audio may be desired, such as, for example, a commercial setting like a restaurant, mall or airport, a vehicle like a sports utility vehicle (SUV), bus or car, a ship or boat, an airplane, and so on.

a. Example Playback Devices

[38] Figure 2 is a functional block diagram illustrating certain aspects of a selected one of the playback devices 102 shown in Figure 1. As shown, such a playback device may include a processor 212, software components 214, memory 216, audio processing components 218, audio amplifier(s) 220, speaker(s) 222, microphone(s) 224, and a network interface 230 including wireless interface(s) 232 and wired interface(s) 234. In some embodiments, a playback device might not include the speaker(s) 222, but rather a speaker interface for connecting the playback device to external speakers. In certain embodiments, the playback device may include neither the speaker(s) 222 nor the audio amplifier(s) 220, but rather a line-out interface for connecting a playback device to an external audio amplifier or audio-visual receiver. The playback device includes a housing 238 carrying its constituent components.

[39] A playback device may further include a user interface 226. The user interface 226 may facilitate user interactions independent of or in conjunction with one or more of the control devices 103 (Figure 1). In various embodiments, the user interface 226 includes one or more of physical buttons and/or graphical user interfaces provided on touch sensitive screen(s) and/or surface(s), among other possibilities, for a user to directly provide input. The user interface 226 may further include one or more of lights and the speaker(s) to provide visual and/or audio feedback to a user.

[40] In some embodiments, the processor 212 may be a clock-driven computing component configured to process input data according to instructions stored in the memory 216. The memory 216 may be a tangible computer-readable medium configured to store instructions executable by the processor 212. For example, the memory 216 may be data storage that can be loaded with one or more of the software components 214 executable by the processor 212 to achieve certain functions. In one example, the functions may involve a playback device retrieving audio data from an audio source or another playback device. In another example, the functions may involve a playback device sending audio data to another device on a network. In yet another example, the functions may involve pairing of a playback device with one or more other playback devices to create a multi-channel audio environment.

[41] Certain functions may involve a playback device synchronizing playback of audio content with one or more other playback devices. During synchronous playback, a listener may not perceive time-delay differences between playback of the audio content by the synchronized playback devices. U.S. Patent No. 8,234,395 filed April 4, 2004, and titled "System and method for synchronizing operations among a plurality of independently clocked digital data processing devices," which is hereby incorporated by reference in its entirety, provides in more detail some examples for audio playback synchronization among playback devices in the native domain.

[42] The audio processing components 218 may include one or more digital-to-analog converters (DAC), an audio preprocessing component, an audio enhancement component or a digital signal processor (DSP), and so on. In some embodiments, one or more of the audio processing components 218 may be a subcomponent of the processor 212. In one example, audio content may be processed and/or intentionally altered by the audio processing components 218 to produce audio signals. The produced audio signals may then be provided to the audio amplifier(s) 220 for amplification and playback through speaker(s) 222. Particularly, the audio amplifier(s) 220 may include devices configured to amplify audio signals to a level for driving one or more of the speakers 222. The speaker(s) 222 may include an individual transducer (e.g., a "driver") or a complete speaker system involving an enclosure with one or more drivers. A particular driver of the speaker(s) 222 may include, for example, a subwoofer (e.g., for low frequencies), a mid-range driver (e.g., for middle frequencies), and/or a tweeter (e.g., for high frequencies). In some cases, each transducer in the one or more speakers 222 may be driven by an individual corresponding audio amplifier of the audio amplifier(s) 220. In addition to producing analog signals for playback, the audio processing components 218 may be configured to process audio content to be sent to one or more other playback devices for playback.

[43] Audio content to be processed and/or played back by the playback device may be received from an external source, such as via an audio line-in input connection (e.g., an auto-detecting 3.5mm audio line-in connection) or the network interface 230. During a VLI session, audio is transmitted from a VLI sender of the VLI system to the VLI receiver using the network interface 230. Such a VLI sender may be, for example, one of the VLI devices 104 (Figure 1) or one of the control devices 103 (Figure 1) running the VLI software.

[44] The network interface 230 may be configured to facilitate a data flow between a playback device and one or more other devices on a data network. As such, a playback device may be configured to receive audio content over the data network from one or more other playback devices in communication with the playback device, network devices within a local area network, or audio content sources over a wide area network such as the Internet. In one example, the audio content and other signals transmitted and received by a playback device may be transmitted in the form of digital packet data containing an Internet Protocol (IP)-based source address and IP-based destination addresses. In such a case, the network interface 230 may be configured to parse the digital packet data such that the data destined for a playback device is properly received and processed by the playback device.

[45] As shown, the network interface 230 may include wireless interface(s) 232 and wired interface(s) 234. The wireless interface(s) 232 may provide network interface functions for a playback device to wirelessly communicate with other devices (e.g., other playback device(s), speaker(s), receiver(s), network device(s), control device(s) within a data network the playback device is associated with) in accordance with a communication protocol (e.g., any wireless standard including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G mobile communication standard, and so on). The wired interface(s) 234 may provide network interface functions for a playback device to communicate over a wired connection with other devices in accordance with a communication protocol (e.g., IEEE 802.3). While the network interface 230 shown in Figure 2 includes both wireless interface(s) 232 and wired interface(s) 234, the network interface 230 might include only wireless interface(s) or only wired interface(s) in various examples.

[46] By way of illustration, SONOS, Inc. presently offers (or has offered) for sale certain playback devices including a "PLAY:1," "PLAY:3," "PLAY:5," "PLAYBAR," "PLAYBASE," "BEAM," "CONNECT:AMP," "CONNECT," and "SUB." Any other past, present, and/or future playback devices may additionally or alternatively be used to implement the playback devices of example embodiments disclosed herein. Additionally, it is understood that a playback device is not limited to the example illustrated in Figure 2 or to the SONOS product offerings. For example, a playback device may include a wired or wireless headphone. In another example, a playback device may include or interact with a docking station for personal mobile media playback devices. In yet another example, a playback device may be integral to another device or component such as a television, a lighting fixture, or some other device for indoor or outdoor use.

b. Example Playback Zone Configurations

[47] Referring back to the system 100 of Figure 1, the environment may have one or more playback zones, each with one or more playback devices 102. The native media playback system may be established with one or more playback zones, after which one or more zones may be added or removed to arrive at the example configuration shown in Figure 1. Each zone may be given a name according to a different room or space such as an office, bathroom, master bedroom, bedroom, kitchen, dining room, living room, and/or balcony. In one case, a single playback zone may include multiple rooms or spaces. In another case, a single room or space may include multiple playback zones.

[48] In some examples, a VLI media playback system may also be established with one or more playback zones. In some cases, such playback zones are established in different physical rooms or areas than the playback zones of the native media playback system. Such a configuration may avoid duplicating some functionality by having both native playback devices 102 and VLI devices 104 in the same physical area. Alternatively, some playback zones of the VLI media playback system may partially or fully overlap with the same physical room or area as the playback zones of the native media playback system. Such a configuration may be useful as the native playback devices 102 and VLI devices 104 may have some different features.

[49] Accordingly, some rooms might not have a playback device 102, but might instead include a VLI device 104 (or no device at all). Other rooms might include both playback device(s) 102 and VLI device(s) 104. For instance, the living room may include playback devices 102a, 102b, 102g, and 102i as the Living Room zone, while also including the VLI device 104e. Some types of playback devices 102 and/or VLI devices 104 may be portable (e.g., battery-powered) while others may draw current from a wall outlet and be intended to be more or less stationary in operation.

[50] In some embodiments, the playback zones in the environment may be presented to a user via a native controller. However, in the native controller, the VLI devices 104 are not presented because they are not native to the native media playback system of playback devices 102. In additional or alternate embodiments, some or all of the zones along with the VLI devices 104 in the environment may be presented to a user via a VLI controller. For example, the native system may identify for the VLI system zones and/or playback devices 102 associated with various zones that are available to be controlled via the VLI controller.

[51] As shown in Figure 1, the Balcony, Dining Room, Kitchen, Bathroom, and Office zones each have one playback device 102, while the Living Room zone has multiple playback devices 102. In the Living Room zone, playback devices 102a, 102b, 102j, and 102k may be configured to play audio content in synchrony as individual playback devices, as one or more bonded playback devices, as one or more consolidated playback devices, or any combination thereof. Moreover, any playback devices 102 of the native media playback system may form such grouping configurations. VLI devices 104a-e may have similar features, albeit implemented in a different manner from the native playback devices 102. For example, the VLI devices 104b and 104c may together form a stereo pair that is represented as a single device by the VLI controller (via, e.g., a GUI presented on the display by the VLI controller) of the VLI system.

[52] In one example, one or more playback zones in the environment of Figure 1 may each be playing different audio content. For instance, the user may be grilling in the balcony zone and listening to hip hop music being played by the playback device 102c while another user may be preparing food in the kitchen zone and listening to classical music being played by the playback device 102h. In another example, a playback zone may play the same audio content in synchrony with another playback zone. For instance, the user may be in the office zone where the playback device 102d is playing the same rock music that is being played by playback device 102c in the balcony zone. In such a case, playback devices 102c and 102d may be playing the rock music in synchrony such that the user may seamlessly (or at least substantially seamlessly) enjoy the audio content that is being played out loud while moving between different playback zones. Synchronization among playback zones may be achieved in a manner similar to that of synchronization among playback devices, as described in previously referenced U.S. Patent No. 8,234,395.

[53] As suggested above, the zone configurations of the media playback system 100 may be dynamically modified, and in some embodiments, the media playback system 100 supports numerous configurations. For instance, if a user physically moves one or more playback devices to or from a zone, the media playback system 100 may be reconfigured to accommodate the change(s). For instance, if the user physically moves the playback device 102c from the balcony zone to the office zone, the office zone may now include both the playback device 102d and the playback device 102c. The playback device 102c may be paired or grouped with the office zone and/or renamed if so desired via a control device such as the control devices 103a and 103b. On the other hand, if the one or more playback devices are moved to a particular area in the home environment that is not already a playback zone, a new playback zone may be created for the particular area.

[54] Further, in various examples, one or more of the playback devices 102 may synchronously play back audio with one or more of the VLI devices 104 even though the VLI devices 104 do not form part of the native system of playback devices 102. For instance, in the above example in which the playback device 102c is playing the rock music in the office, the VLI device 104d in the office may also be selected (using, e.g., the VLI controller) to play back the same rock music with the playback device 102c (and/or other playback devices 102). Additionally, the VLI device 104d or another VLI device may initiate playback of the rock music, and other VLI devices 104 and/or playback devices 102 may be grouped and ungrouped for playback of the rock music in the various rooms or spaces in the environment. As such, the VLI devices 104 may seamlessly play back audio in a manner that the user may experience as substantially similar to that of a native media playback system comprising only native domain playback devices 102.

[55] In various implementations, different playback zones of the media playback system 100 may be dynamically combined into zone groups or split up into individual playback zones. For instance, the dining room zone and the kitchen zone may be combined into a zone group for a dinner party such that playback devices 102i and 102l may render audio content in synchrony. On the other hand, the living room zone may be split into a television zone including playback device 102b, and a listening zone including playback devices 102a, 102j, and 102k, if the user wishes to listen to music in the living room space while another user wishes to watch television.

c. Example Control Devices

[56] Figure 3 is a functional block diagram illustrating certain aspects of a selected one of the control devices 103 of the media playback system 100 of Figure 1. Such a control device may also be referred to as a controller device. The control device shown in Figure 3 may include components that are generally similar to certain components of the network devices described above, such as a processor 312, memory 316, and a network interface 330. The control device 103 includes a housing 338 carrying its constituent components.

[57] The memory 316 of the control device 103 may be configured to store controller application software and other data associated with the native media playback system and a user of the native media playback system. The memory 316 may be loaded with one or more software components 314 executable by the processor 312 to achieve certain functions, such as facilitating user access, control, and configuration of the native media playback system. The control device 103 communicates with other network devices over the network interface 330, such as a wireless interface, as described above.

[58] The control device 103 may include a plurality of microphones 324 arranged to detect sound in the environment of the control device 103. The microphones may detect voice inputs to the native media playback system, process those inputs, perhaps using one or more voice assistant services (VAS), and carry out the commands on one or more playback devices 102.

[59] The control device 103 may implement various user interfaces 326, such as control interfaces for controlling the playback devices 102 of the native media playback system. Within examples, the control device 103 may be implemented on a general purpose network device via installation of software for a native controller 327 (i.e., a native control application). Example general purpose network devices include, for example, an iPhone, iPad, or any other smart phone, tablet, or network device (e.g., a networked computer such as a PC or Mac). The native controller 327 may be implemented via a control application configured to control the native media playback system or as features of another application, such as a control application of a streaming media service. In another example, a native controller may form part of a control device that is dedicated to the native media playback system, rather than being implemented in, e.g., an application or "app" on a general purpose network device. In addition or alternately, the control device 103 may be implemented in a like manner as a VLI device 104 via installation of a VLI controller 428, which is described in greater detail below with reference to Figure 4. In some embodiments, the control device 103 may be configured to control one or both of the native and the VLI media playback systems via separate access over the respective native and VLI controllers.

[60] Playback device control commands such as volume control and audio playback control may also be communicated from a control device to a playback device via the network interface 330. As suggested above, changes to configurations of the native media playback system may also be performed by a user using the control device. The configuration changes may include adding/removing one or more playback devices to/from a zone, adding/removing one or more zones to/from a zone group, forming a bonded or merged player, separating one or more playback devices from a bonded or merged player, among others. As described above, this control signaling may be referred to as native domain control signaling.

[61] In one example, data and information (e.g., a state variable) may be communicated between a control device and other devices via the network interface 330. For instance, playback zone and zone group configurations in the native media playback system may be received by a control device from a playback device, another control device, or another network device, or transmitted by the control device to another playback device or control device via the network interface 330. In some cases, the other network device may be another control device.

d. Example VLI Devices

[62] Figure 4 is a functional block diagram illustrating certain aspects of a selected one of the VLI devices 104 of the media playback system 100 of Figure 1. As noted above, a VLI device 104 may implement a timing domain different than that of a native domain playback device 102. In various aspects, a VLI device may include certain playback and/or control functionality that is similar to that of the native media playback system, while other functionality may be different than and/or unavailable to the VLI media playback system. In other words, some VLI devices may be configured to perform specific functions such as control or playback, while others are general purpose and capable of performing either. As such, the VLI device shown in Figure 4 may include components that are generally similar to certain components of the playback devices and control devices shown above, such as a processor 412, software components 414, memory 416, audio processing components 418, audio amplifiers 420, microphones 424, and a network interface 430, contained within a housing 438.

[63] Example general purpose VLI devices include, for example, an iPhone, iPad, or any other smart phone, tablet, or network device (e.g., a networked computer such as a PC or Mac). Generally, a general purpose VLI device is capable of operating as a VLI sender to a VLI receiver. A general purpose network device may become a general purpose VLI device via installation of the VLI controller 428, which may provide controls for playback and/or control functionality in the VLI domain. A control device 103 that operates as a VLI sender may be referred to herein as a VLI device 104 when referring to its VLI domain functionality. A control device 103 that is not configured to operate as a VLI sender, such as a control device that does not have a VLI controller installed, is not referred to as a VLI device 104.

e. Example User Interfaces

[64] Figures 5A and 5B are example user interfaces in the form of native control interfaces 526a and 526b (collectively "control interface 526") in accordance with aspects of the disclosure. As noted above, one or more additional or alternate user interfaces may be provided via a controller app, such as a VUI. In any case, native control interface(s) of, e.g., a control device 103 may be configured to facilitate user access and control of the native media playback system. As an illustrative example, the control interfaces 526a and 526b shown in Figures 5A and 5B may be presented via the native controller 327 (Figure 3). Referring to Figures 5A and 5B together, the control interface 526 includes a playback control region 542, a playback zone region 543, a playback status region 544, a playback queue region 546, and a sources region 548. The control interface 526 and the respective functions of the native controller 327 as shown and described below are just one example of user controls that may be provided via a network device such as the control device shown in Figure 3 and accessed by users to control a media playback system such as the native media playback system of system 100. Other control interfaces of varying formats, styles, and interactive sequences may alternatively be implemented on one or more network devices to provide comparable control access to a media playback system.

[65] The playback control region 542 (Figure 5A) may include selectable (e.g., by way of touch or by using a cursor) icons to cause playback devices in a selected playback zone or zone group to play or pause, fast forward, rewind, skip to next, skip to previous, enter/exit shuffle mode, enter/exit repeat mode, and enter/exit cross fade mode. The playback control region 542 may also include selectable icons to modify equalization settings and playback volume, among other possibilities.

[66] The playback zone region 543 (Figure 5B) may include representations of playback zones within the native media playback system. The playback zone region 543 may also include representations of zone groups, such as the Dining Room + Kitchen zone group, as shown. In some embodiments, the graphical representations of playback zones may be selectable to bring up additional selectable icons to manage or configure the playback zones in the media playback system, such as a creation of bonded zones, creation of zone groups, separation of zone groups, and renaming of zone groups, among other possibilities.

[67] For example, as shown, a "group" icon may be provided within each of the graphical representations of playback zones. The "group" icon provided within a graphical representation of a particular zone may be selectable to bring up options to select one or more other zones in the media playback system to be grouped with the particular zone. Once grouped, playback devices in the zones that have been grouped with the particular zone will be configured to play audio content in synchrony with the playback device(s) in the particular zone. Analogously, a "group" icon may be provided within a graphical representation of a zone group. In this case, the "group" icon may be selectable to bring up options to deselect one or more zones in the zone group to be removed from the zone group. Other interactions and implementations for grouping and ungrouping zones via a user interface such as the native control interface 526 are also possible. The representations of playback zones in the playback zone region 543 (Figure 5B) may be dynamically updated as playback zone or zone group configurations are modified.

[68] The playback status region 544 (Figure 5A) may include graphical representations of audio content that is presently being played, previously played, or scheduled to play next in the selected playback zone or zone group. The selected playback zone or zone group may be visually distinguished on the control interface, such as within the playback zone region 543 and/or the playback status region 544. The graphical representations may include track title, artist name, album name, album year, track length, and other relevant information that may be useful for the user to know when controlling the media playback system via the native control interface 526.

[69] The playback queue region 546 may include graphical representations of audio content in a playback queue associated with the selected playback zone or zone group. In some embodiments, each playback zone or zone group may be associated with a playback queue containing information corresponding to zero or more audio items for playback by the playback zone or zone group. For instance, each audio item in the playback queue may comprise a uniform resource identifier (URI), a uniform resource locator (URL) or some other identifier that may be used by a playback device in the playback zone or zone group to find and/or retrieve the audio item from a local audio content source or a networked audio content source, possibly for playback by the playback device.

[70] In one example, a playlist may be added to a playback queue, in which case information corresponding to each audio item in the playlist may be added to the playback queue. In another example, audio items in a playback queue may be saved as a playlist. In a further example, a playback queue may be empty, or populated but "not in use" when the playback zone or zone group is playing continuously streaming audio content, such as Internet radio that may continue to play until otherwise stopped, rather than discrete audio items that have playback durations. In an alternative embodiment, a playback queue can include Internet radio and/or other streaming audio content items and be "in use" when the playback zone or zone group is playing those items. Other examples are also possible.

[71] When playback zones or zone groups are "grouped" or "ungrouped," playback queues associated with the affected playback zones or zone groups may be cleared or re-associated. For example, if a first playback zone including a first playback queue is grouped with a second playback zone including a second playback queue, the established zone group may have an associated playback queue that is initially empty, that contains audio items from the first playback queue (such as if the second playback zone was added to the first playback zone), that contains audio items from the second playback queue (such as if the first playback zone was added to the second playback zone), or a combination of audio items from both the first and second playback queues. Subsequently, if the established zone group is ungrouped, the resulting first playback zone may be re-associated with the previous first playback queue, or be associated with a new playback queue that is empty or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped. Similarly, the resulting second playback zone may be re-associated with the previous second playback queue, or be associated with a new playback queue that is empty, or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped. Other examples are also possible.
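The queue re-association behavior described above can be illustrated with a short sketch. The following Python snippet shows only one possible policy; the class and function names (Zone, PlaybackQueue, group_zones, ungroup_zone) and the example URIs are hypothetical and not part of the disclosed implementation.

```python
# Hypothetical sketch of queue handling when zones are grouped/ungrouped.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PlaybackQueue:
    items: List[str] = field(default_factory=list)  # URIs/URLs of audio items


@dataclass
class Zone:
    name: str
    queue: PlaybackQueue = field(default_factory=PlaybackQueue)


def group_zones(target: Zone, joining: Zone) -> PlaybackQueue:
    """Form a zone group whose queue initially carries the target zone's items
    (i.e., the zone that was joined keeps its queue for the group)."""
    return PlaybackQueue(items=list(target.queue.items))


def ungroup_zone(zone: Zone, group_queue: PlaybackQueue,
                 keep_group_items: bool) -> None:
    """Re-associate an ungrouped zone either with a copy of the group queue
    or leave it with its previous per-zone queue, per the chosen policy."""
    if keep_group_items:
        zone.queue = PlaybackQueue(items=list(group_queue.items))


# Usage: Kitchen joins Dining Room; the group plays Dining Room's queue.
dining, kitchen = Zone("Dining Room"), Zone("Kitchen")
dining.queue.items = ["http://example.com/track1.mp3", "http://example.com/track2.mp3"]
group_queue = group_zones(dining, kitchen)
ungroup_zone(kitchen, group_queue, keep_group_items=False)
```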

[72] With reference still to Figures 5A and 5B, the graphical representations of audio content in the playback queue region 546 (Figure 5B) may include track titles, artist names, track lengths, and other relevant information associated with the audio content in the playback queue. In one example, graphical representations of audio content may be selectable to bring up additional selectable icons to manage and/or manipulate the playback queue and/or audio content represented in the playback queue. For instance, a represented audio content may be removed from the playback queue, moved to a different position within the playback queue, or selected to be played immediately, or after any currently playing audio content, among other possibilities. A playback queue associated with a playback zone or zone group may be stored in a memory on one or more playback devices in the playback zone or zone group, on a playback device that is not in the playback zone or zone group, and/or some other designated device. Playback of such a playback queue may involve one or more playback devices playing back media items of the queue, perhaps in sequential or random order.

[73] Figure 6 is an example VLI control interface 626 in accordance with aspects of the disclosure. Similar to the control interface presented by a native control application, such as the control interface 526 shown in Figures 5A and 5B, the control interface(s) of a VLI device 104 may be configured to facilitate user access, control, and/or playback of the native media playback system. For example, as shown in Figure 6, the control interface 626 of the VLI controller 428 (Figure 4) may include several regions similar to those of the control interface 526 (Figures 5A and 5B) presented via the native controller 327 (Figure 3). For instance, the control interface 626 of the VLI controller 428 includes a playback control region 642 and playback status region 644, which may be generally similar to the playback control region 542 (Figure 5A) and the playback status region 544 (Figure 5A), respectively.

[74] The control interface 626 presented by the VLI controller 428 also includes a VLI group region 443. As shown, the VLI group region 443 includes a list of available VLI compatible devices within the native media playback system and the VLI media playback system. In the illustrated example, the VLI devices are identified as "Bedroom," "Master Bedroom," "Phone," and "Laptop," which correspond to the respective VLI devices 104a-e shown in Figure 1. The VLI compatible devices within the native media playback system (each identified with an asterisk) are associated with certain native playback devices 102 or zones. For example, the Living Room may be associated with playback devices 102a, 102b, 102g, and 102i; the Balcony may be associated with playback device 102c; the Dining Room may be associated with playback device 102e; the Kitchen may be associated with playback device 102h; and the Office may be associated with playback device 102d.

[75] Notably, the playback device 102f shown in the Bathroom in Figure 1 is not presented on the VLI control interface 626. In some implementations, certain playback devices 102 (such as the playback device 102f) might not support becoming VLI receivers or handling VLI signaling, such as legacy devices, which might have hardware incompatibilities. Within examples, playback device(s) 102 and VLI device(s) 104 may advertise their availability using discovery packets via the LAN provided by router 106.

[76] Upon selection of one or more playback device(s) 102 and/or VLI device(s) 104 in the VLI group region on a particular VLI device 104, the particular VLI device 104 creates a VLI session and forms a VLI group of the selected playback device(s) 102 and/or VLI device(s) 104. The particular VLI device 104 becomes the VLI sender for the new VLI session. Other playback device(s) 102 and/or VLI device(s) 104 are VLI receivers, which receive playback and control information from the VLI sender.

f. Example Audio Content Sources

[77] Turning back to Figure 5A, the sources region 548 (Figure 5A) may include graphical representations of selectable audio content sources and voice assistant services. The audio sources in the sources region 548 may be audio content sources from which audio content may be retrieved and played by the selected playback zone or zone group. One or more playback devices in a zone or zone group may be configured to retrieve audio content for playback (e.g., according to a corresponding uniform resource identifier (URI) or uniform resource locator (URL) for the audio content) from a variety of available audio content sources. In one example, audio content may be retrieved by a playback device directly from a corresponding audio content source (e.g., a line-in connection). In another example, audio content may be provided to a playback device over a network via one or more other playback devices or network devices.

[78] Example audio content sources may include a memory of one or more playback devices 102 such as the native or VLI media playback systems of Figure 1, local music libraries on one or more network devices (such as a control device, a network-enabled personal computer, or a network-attached storage (NAS), for example), streaming audio services providing audio content via the Internet (e.g., the cloud), or audio sources connected to the native or VLI media playback systems via a line-in input connection on a playback device or network device, among other possibilities.

[79] In some embodiments, audio content sources may be regularly added or removed from a media playback system such as the media playback system 100 of Figure 1. In one example, an indexing of audio items may be performed whenever one or more audio content sources are added, removed, or updated. Indexing of audio items may involve scanning for identifiable audio items in all folders/directories shared over a network accessible by playback devices in the media playback system, and generating or updating an audio content database containing metadata (e.g., title, artist, album, track length, among others) and other associated information, such as a URI or URL for each identifiable audio item found. Other examples for managing and maintaining audio content sources may also be possible.

[80] Example techniques described herein involve implementing the virtual line-in as another possible audio source for the native media playback system. In contrast to other audio sources noted above, the virtual line-in is not selectable within a native domain user interface (e.g., the sources region 548 of the native control interface 526). Instead, the virtual line-in is selected as an audio source for one or more playback device(s) 102 by selecting those playback devices 102 using a VLI domain control interface.

[81] Within examples, the virtual line-in source is implemented as a UPnP service. In such an implementation, each operation has a parameter (e.g., a first parameter) that identifies the type of virtual line-in. This allows the native media playback system to support multiple virtual line-in types, such as multiple media playback systems each using their own domains or other types of input that the native media playback system may handle using the virtual line-in implementation described herein, such as a Bluetooth® audio stream.
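By way of a non-authoritative sketch, the following Python code shows how each operation of such a service might carry a first parameter identifying the virtual line-in type, allowing one service to multiplex several input domains. The action names, VLIType values, and handler structure are assumptions made for illustration and do not reflect the actual UPnP service definition.

```python
# Minimal sketch (not the actual service definition) of a virtual line-in
# service whose every operation takes a parameter identifying the VLI type.
from enum import Enum


class VLIType(Enum):
    VLI_MEDIA_SYSTEM = "vli"        # another media playback system's domain
    BLUETOOTH_STREAM = "bluetooth"  # e.g., a Bluetooth audio stream


class _SessionHandler:
    """Placeholder translation logic for one VLI type."""

    def start(self, session_id: str) -> None:
        print(f"session {session_id} started")

    def stop(self, session_id: str) -> None:
        print(f"session {session_id} stopped")


class VirtualLineInService:
    """Hypothetical UPnP-style service; action and parameter names are illustrative."""

    def __init__(self) -> None:
        # One handler per supported virtual line-in type.
        self._handlers = {t: _SessionHandler() for t in VLIType}

    def start_session(self, vli_type: VLIType, session_id: str) -> None:
        # The first parameter identifies which kind of virtual line-in
        # the operation applies to.
        self._handlers[vli_type].start(session_id)

    def stop_session(self, vli_type: VLIType, session_id: str) -> None:
        self._handlers[vli_type].stop(session_id)
```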

III. Example VLI Group Control and Audio Distribution

a. Example Group Configurations

[82] Figure 7A is a functional block diagram illustrating example control and audio distribution within a VLI group that includes the native domain playback device 102e. In this example, the VLI device 104 is the VLI sender, which functions to provide audio to the VLI receiver(s), which in this example is the playback device 102e. The VLI device 104 that creates the VLI group becomes the VLI sender. The VLI device 104 may create the group using any suitable control interface such as the example control interface of the VLI controller 428 shown above or a VUI (e.g., via a voice command such as "Play David Bowie on Dining Room," where the Dining Room zone corresponds to playback device 102e as shown in Figure 1).

[83] As shown in Figure 7A, the VLI sender provides an audio stream to the playback device 102e in the VLI domain. This VLI domain signaling is received at a virtual line-in block 750e, which operates as an interface between the VLI domain of the VLI device 104 and the native domain of the playback device 102. Generally, as noted above, the VLI and native domain signaling is transmitted via a network, such as the LAN provided by router 106 (Figure 1).

[84] The virtual line-in block 750e converts the VLI domain audio stream from the VLI sender to a native domain audio stream and passes the native domain audio stream to the channel sink 754e. The channel sink 754e decodes the native domain audio stream and passes it to the audio stage (not shown in the Figure 7A example) for output. In one aspect, since the channel sink 754e receives the audio stream using native domain signaling, the channel sink 754e can process the audio stream in the same way it would process an audio stream that originated in the native domain. In a related aspect, the conversion performed by the virtual line-in 750e facilitates interoperability between the VLI domain audio stream and the native domain channel sink 754e.

[85] Depending on the encoding format of the VLI domain audio stream, the VLI receiver may transcode the VLI domain audio stream into an uncompressed format. For example, if the VLI domain audio stream is encoded in an encoding format that is not supported by the native domain decoder of the channel sink 754e, the virtual line-in 750e decodes the audio stream to an uncompressed format (e.g., PCM). This configuration may use more bandwidth in distributing audio to downstream group members (not shown in the Figure 7A example), but allows for greater compatibility. Conversely, if the VLI domain audio stream is encoded in an encoding format supported by the native domain, then decoding is not performed until the channel sink 754e. This configuration may have the advantage of being more bandwidth efficient, as the audio stream uses less bandwidth when distributed in compressed format, especially in configurations that include multiple downstream group members.
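The bandwidth-versus-compatibility trade-off described above can be sketched as a simple routing decision. This Python snippet is illustrative only; the codec set, function names, and the decode_to_pcm callback are assumptions, not the actual virtual line-in code.

```python
# Illustrative routing decision: forward compressed audio when the native
# decoder supports the codec, otherwise transcode to PCM at the virtual line-in.
NATIVE_SUPPORTED_CODECS = {"aac", "flac", "mp3", "pcm"}  # assumed for illustration


def route_vli_audio(frame: bytes, codec: str, decode_to_pcm) -> tuple:
    """Return (payload, codec) to hand to the channel sink / group members."""
    if codec in NATIVE_SUPPORTED_CODECS:
        # Bandwidth-efficient path: keep the stream compressed; the channel
        # sink decodes it just like a native-domain stream.
        return frame, codec
    # Compatibility path: decode to uncompressed PCM at the virtual line-in,
    # at the cost of higher bandwidth to downstream group members.
    return decode_to_pcm(frame, codec), "pcm"
```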

[86] The virtual line-in 750e also handles metadata from the VLI sender. In some implementations, the virtual line-in 750e extracts metadata from the audio stream and passes the metadata in the native domain to the AVTransport 752e. Alternatively, the VLI sender may send the metadata in a separate data stream using VLI domain control signaling. The playback device 102e in Figure 7A may be designated a native domain group coordinator as described below. Native domain functions of the native domain group coordinator include distribution of native domain audio and control information (including metadata) to native group members (not shown in Figure 7A), as well as receiving native domain control information from native domain group members and control devices 103. These functions are carried out in the native domain by the AVTransport 752e.

[87] In the example of Figure 7A, since it is the only native playback device 102 in the current VLI group, the playback device 102e is necessarily the native domain group coordinator. In effect, the playback device 102e is a native domain group coordinator for a native domain synchrony group that includes only the playback device 102e. The playback device 102e is effectively distributing audio and control information to itself as the native domain group coordinator. In this case, the playback device 102e need not distribute native domain audio and control information (including metadata) to native group members until such group members are later added. As such, in alternative implementations, a native domain group coordinator might not necessarily be designated when a single native domain playback device 102 is in a VLI group.

[88] Since the AVTransport 752e receives the metadata using native domain signaling, the AVTransport 752e can process the metadata in the same way it would process metadata from an audio stream that originated in the native domain. That is, the AVTransport 752e distributes metadata of the currently playing audio stream (which is received from the VLI sender during a VLI session) to native domain group members and control devices 103. The native controller 327 on the control device 103 receives the metadata and updates the playback status region 544 (Figure 5A) to reflect the audio content currently being played by the VLI group.

[89] The native controller 327 on the control device 103 is also a source of backchannel control. That is, control commands received via the native controller 327 on the control device 103 (e.g., via the playback control region 542) are passed back to the AVTransport 752e of the group coordinator using native domain control signaling as shown in Figure 7A. The AVTransport 752e passes these control commands to the virtual line-in 750e, which converts the native domain control commands to VLI domain control commands that the VLI sender can understand. For instance, the virtual line-in 750e may convert the native domain control commands to equivalent VLI domain API function calls.

[90] The virtual line-in 750e transmits these VLI domain commands back to the VLI sender. The VLI sender then responds to the commands. For instance, in response to a skip forward command, the VLI sender starts streaming the next song. As another example, in response to a volume increase or decrease command, the VLI sender sends a VLI domain command to increase or decrease volume back to the VLI receiver(s). This VLI domain command is carried out by the virtual line-in 750e and also converted to a native domain command for distribution to native domain group members via the AVTransport 752e. Notably, even though the command may have originated via the native controller 327, the command is not carried out until it is received back from the VLI sender, which is the audio and control master of the VLI group.
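The round-trip nature of the backchannel can be sketched as follows. In this hedged Python illustration, the command names and the VLI sender methods (skip_forward, set_group_volume) are hypothetical stand-ins for whatever VLI domain API calls the virtual line-in actually issues; the key point shown is that a native-domain command is only carried out once the VLI sender echoes it back.

```python
# Illustrative backchannel round trip for one VLI receiver (names are hypothetical).
def apply_volume_locally(value: int) -> None:
    """Stub: set this player's output volume."""
    print(f"local volume -> {value}")


def distribute_to_native_group(command: str, value=None) -> None:
    """Stub: forward the (now native-domain) command to group members."""
    print(f"native group <- {command} {value}")


class VirtualLineIn:
    """Sketch of the virtual line-in's command translation."""

    def __init__(self, vli_sender) -> None:
        self.vli_sender = vli_sender

    def on_native_command(self, command: str, value=None) -> None:
        # Native domain -> VLI domain (backchannel toward the sender).
        if command == "next":
            self.vli_sender.skip_forward()
        elif command == "set_volume":
            self.vli_sender.set_group_volume(value)
        # Nothing is applied locally yet; the VLI sender remains the
        # audio and control master of the VLI group.

    def on_vli_command(self, command: str, value=None) -> None:
        # VLI domain -> native domain (command coming back from the sender).
        if command == "set_volume":
            apply_volume_locally(value)
            distribute_to_native_group("set_volume", value)
```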

[91] Figure 7B is a functional block diagram illustrating example control and audio distribution within a VLI group that includes the playback device 102e and the playback device 102h. Again, in this example, the VLI device 104 is the VLI sender, which, as noted above, functions to provide audio to the VLI receiver(s), which in this example is the playback device 102e. The VLI device 104 may create this VLI group using any suitable control interface such as the example control interface of the VLI controller 428 shown above (e.g., by selecting Living Room and Dining Room in the VLI group region 443) or a VUI (e.g., via a voice command such as "Play David Bowie on Dining Room and Living Room").

[92] As shown in Figure 7B, as audio and control master of the VLI group, the VLI sender provides audio and control signaling to another VLI receiver that has been added as a VLI domain group member of the VLI group (which is playback device 102h in this example). The virtual line-in 750e operates in the same manner as described in the example shown in Figure 7A. The virtual line-in 750h of the playback device 102h operates in a similar manner.

[93] Notably, in addition to being VLI domain group members (VLI receivers), the playback device 102e and the playback device 102h are also native domain group members. Within example implementations, playback devices 102 that are joined into a VLI domain group may automatically form a native domain group. Such auto-grouping may facilitate interoperability with native domain control devices 103 and playback devices 102, as the native domain grouping will be recognizable to native domain devices. As noted above, under native domain grouping, one playback device 102 will function as a group coordinator (which is playback device 102e in this example) while the other playback devices will function as group members.

[94] Due to the native domain grouping, native domain group members (e.g., playback device 102h) may implement a control backchannel to the VLI sender through the native domain group coordinator (e.g., playback device 102e) as shown in Figure 7B. This control backchannel may be implemented in addition to, or as an alternative to, the backchannel implemented through the virtual line-in of each VLI group member. Under this configuration, native domain control signaling from a group member is processed by the group coordinator in the same way as native domain control signaling from a control device 103, as described above in connection with Figure 7A.

[95] Figure 7C is a functional block diagram illustrating example control and audio distribution within a VLI group that includes the playback device 102h. This example is intended to demonstrate the flexibility of the virtual line-in implementation. In this example, the VLI sender created a VLI group that included playback device 102e and then playback was transferred to playback devices 102h and 102d using native domain control. For instance, such a transfer may involve the playback devices 102d and 102h forming a native domain group with playback device 102e using the native controller 327. Then, using native domain control, the playback device 102e is deselected from the native domain group, thereby creating the configuration shown in Figure 7C.

[96] To continue playback from the VLI sender, the playback device 102e continues to operate as a VLI receiver, as it was the only VLI receiver in the VLI group. However, instead of operating as a VLI receiver for itself, the playback device 102e operates as a VLI receiver on behalf of the playback device 102h, which becomes the group coordinator of the native domain group when the playback device 102e leaves the native domain group. Since the playback device 102e is no longer in the native domain group (and no longer functioning as the group coordinator or playing back audio), the AVTransport 752e and Channel Sink 754e are de-activated, as shown. Then, the playback devices 102h and 102d carry out synchronous playback using native domain signaling, also as shown.

[97] Figure 7D is a simplified functional block diagram illustrating example control and audio distribution within a VLI group that includes both VLI devices 104 and playback devices 102 of the native media playback system. In particular, the VLI group includes the playback devices 102e and 102h, as well as any number of VLI devices 104. In this example, the VLI device 104a is the VLI sender. The VLI device 104a may create this VLI group using any suitable control interface such as the example control interface of the VLI controller 428 shown above (e.g., by selecting Kitchen, Dining Room, and one or more of the VLI devices such as Bedroom or Laptop in the VLI group region 643) or a VUI (e.g., via a voice command such as "Play The Modern Lovers on Kitchen, Dining Room, Laptop, and Bedroom").

[98] In addition to the playback devices 102e and 102h, which are VLI receivers in the VLI group, the example configuration shown in Figure 7D also shows the playback device 102f in a native domain grouping with the playback devices 102e and 102h. Notably, the playback device 102f is not in the VLI group. As such, the playback device 102f is not a VLI receiver and instead exchanges native domain audio and control signaling with the group coordinator of the native domain group. As noted above, the playback device 102f may be an example of a native playback device that is a legacy or other device with incompatibilities that do not allow it to become a VLI receiver. Alternately, the playback device may be a native playback device (e.g., playback device 102d) that is capable of, but not designated as, a VLI receiver in the example illustrated in Figure 7D.

b. Example Native Group Interactions

[99] As noted above, selection of one or more playback devices 102 and/or VLI devices 104 using a VLI domain user interface creates a VLI session and forms a VLI domain synchrony group among the selected devices. For instance, if Dining Room is selected using the VLI controller, the VLI sender creates a VLI session with a VLI group that includes the playback device 102e as a VLI receiver, as shown in Figure 7A. If the playback device 102e is in any native domain group, the playback device 102e is removed from this group when joined into the VLI domain group.

[100] Continuing the example above, if another playback device 102 is selected using a VLI domain user interface, this playback device 102 is added to the VLI group as a VLI receiver. For instance, as shown in Figure 7B, the playback device 102h is a second VLI receiver. If the playback device 102h is in any native domain group, the playback device 102h is removed from this group when joined into the VLI domain group. Both VLI receivers receive audio and control signaling in the VLI domain from the VLI sender.

[101] Moreover, when two or more playback devices 102 are joined into the same VLI group, these playback devices automatically form a native domain group. Automatic grouping may be implemented using a state variable indicating the group ID of the VLI group. This group ID may represent the VLI sender, perhaps by way of the audio stream originating from the VLI sender. As described above, the devices of the native media playback system may share status information by sharing state variables in a distributed fashion with each player maintaining the current values of state variables of each playback device 102 within the native media playback system. When the group IDs of two or more playback devices 102 match (which occurs when they are part of the same VLI group and are receiving the same audio stream), the native media playback system automatically groups these playback devices into a native domain group using native domain signaling.

[102] In some implementations, automatic native domain grouping is performed by native group coordinators. That is, native group coordinators monitor the group IDs for the native players in the native media playback system. When the group ID of a native group coordinator matches the group ID of another native group coordinator, the native group coordinator may initiate grouping with the other native group coordinator. If these native group coordinators are in synchrony groups with group members, these group members are moved into the automatically formed group by their respective group coordinators. In alternative implementations, the automatic grouping can be performed by group members or native controllers, among other examples.
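As a rough illustration of the group-ID matching just described, the sketch below shows a coordinator checking the shared state variables and merging with any other coordinator reporting the same VLI group ID. The state-variable layout and the form_group callback are assumptions made for the example only.

```python
# Illustrative auto-grouping check run by a native group coordinator.
from typing import Dict, Optional


def check_auto_group(my_id: str, my_group_id: Optional[str],
                     state: Dict[str, Optional[str]], form_group) -> None:
    """state maps coordinator device ID -> its current VLI group ID (or None)."""
    if my_group_id is None:
        return  # this coordinator is not in a VLI session
    for other_id, other_group_id in state.items():
        if other_id != my_id and other_group_id == my_group_id:
            # Same VLI sender / audio stream: merge the two synchrony groups
            # into one native domain group (members follow their coordinators).
            form_group(my_id, other_id)
```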

[103] When automatically forming the native domain group, the native media playback system assigns one playback device 102 to be the group coordinator and the other playback devices to be group members. As noted above, the native domain group coordinator distributes audio and control signaling to the group members. If the native domain group is a stereo pair or surround sound configuration, the preferred group coordinator may be, e.g., the left channel of the stereo pair or the center channel of the surround sound configuration. Other examples are possible as well.

[104] When a playback device 102 is de-selected, the playback device 102 is removed from the VLI group. Since that playback device 102 is no longer receiving the same audio stream from the VLI sender as the other VLI group members, its group ID will no longer match the group IDs of the other VLI group members, which causes the native media playback system to remove it from the native domain group as well. If the VLI receiver that is also the group coordinator of the native domain group is removed from the VLI group, the native media playback system assigns another VLI receiver to be the group coordinator. When the last remaining playback device 102 in a VLI group is de-selected, the VLI session ends.

[105] In the event that all playback device(s) 102 in a VLI group are de-selected at the same time or the VLI session otherwise ends, the native media playback system may automatically maintain the native domain group. The VLI device 104 may de-select all playback device(s) 102 in a VLI group at the same time by selecting another playback target using the VLI group region 443, perhaps by selecting "Play on this device" or the like. Alternatively, the VLI device 104 may de-select all playback device(s) 102 using a VUI (e.g., via a voice command such as "Play David Bowie on Bedroom," where the Bedroom zone corresponds to the VLI device 104a).

[106] In some examples, the VLI sender may transmit a signal to a group coordinator of the native media playback system to indicate that all of the native playback devices have been deselected at the same time. In additional or alternate embodiments, a native domain group is maintained for any playback devices 102 that were de-selected from a VLI group via the VLI controller 428 within a given window of time from when the first playback device was removed from the VLI group. For example, if two or more playback devices 102 are de-selected from a VLI group within a given window of time (e.g., a 3-second window of time), the group of de-selected devices may be maintained in a native group.

[107] In contrast, devices that are de-selected outside of this window of time are not automatically joined to the native domain group. For instance, if a VLI group comprising a VLI device 104 and the playback devices 102 corresponding to the Living Room, Kitchen, and Dining Room zones has the Living Room and Kitchen zones de-selected within a three second window of time from when the first zone was de-selected, the Living Room and the Kitchen zones would together form a native domain group. If the Dining Room zone is later removed from the VLI group (i.e., outside of the window of time), the Dining Room zone would not become part of the native domain Living Room and Kitchen group.
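One minimal way to express the de-selection window is sketched below in Python. The three-second constant mirrors the example above; the tracker class, its method names, and the timing source are hypothetical.

```python
# Illustrative tracker for the de-selection window described above.
import time

WINDOW_SECONDS = 3.0  # mirrors the 3-second example window


class DeselectionTracker:
    def __init__(self) -> None:
        self.first_deselect_at = None
        self.native_group = []

    def on_deselected(self, player_id: str) -> bool:
        """Return True if the player should remain in the post-VLI native group."""
        now = time.monotonic()
        if self.first_deselect_at is None:
            self.first_deselect_at = now  # first removal starts the window
        within_window = (now - self.first_deselect_at) <= WINDOW_SECONDS
        if within_window:
            self.native_group.append(player_id)
        return within_window
```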

[108] In some cases, a phone call or other event involving audio on the VLI sender may interrupt the VLI session. Such events may take control of the audio pipeline on the VLI sender. These events cause the VLI session to be terminated and then automatically restarted when the event ends.

[109] As noted above, a native domain user interface (such as a GUI or VUI) may interact with the playback device(s) 102 in the VLI group, as demonstrated with the native controller 327 on the control device 103 in Figures 7A-7D. Example operations involve adding and removing one or more playback device(s) 102 from the native domain group (which was automatically formed when the VLI group was formed). Removing the group coordinator from the native domain group causes the native media playback system to assign a new VLI receiver (if available) as the group coordinator.

[110] Other example operations include changing the audio source of the native domain group to a non-VLI source. When the audio source of the native domain group is changed to a non-VLI source, the playback device(s) 102 in the VLI group are removed from the VLI group. However, the native media playback system maintains the native domain group and starts playing back audio from the new audio source.

IV. Example VLI Timing Distribution

[111] Figure 8A is a functional block diagram illustrating example timing distribution within a native domain synchrony group that includes playback devices 102e and 102h. In this example, the playback devices 102e and 102h are not members of a VLI group. The playback device 102e is assigned as the group coordinator, and the playback device 102h is a group member. As the group coordinator, the playback device 102e provides audio and timing information to the group member (i.e., playback device 102h), which enables the group member (playback device 102h) to synchronize its playback to that of the group coordinator. Notably, the playback devices 102e and 102h are independently clocked via a DAC clock 860e and a DAC clock 860h, respectively.

[112] In operation, the sound input device (SID) of the group coordinator (here, the SID/ADC 861e of the playback device 102e) receives a digital audio stream or analog signal. If the received audio is analog, the SID/ADC 861e digitizes the analog signal into a digital audio stream. In some examples, the SID and ADC may be separate components. In practice, they are often implemented on the same physical CODEC chip and so are shown by way of example as a single component.

[113] The SID/ADC 861e processes the audio content at a rate set by the DAC clock 860e. Ultimately, the DAC clock 860e sets the clock for the native domain group. That is, group members (e.g., the playback device 102h) match their clocks (either physically or logically) to the DAC clock of the group coordinator (i.e., the DAC clock 860e).

[114] The audio input of the group coordinator (i.e., the audio input 862e) distributes the audio to each channel sink 863 of the native domain synchrony group (i.e., the channel sinks 863e and 863h). Each channel sink 863 of the native domain synchrony group decodes the audio content, which is then converted to analog and outputted by the signal output device and digital-to-analog converter (SOD/DAC 864), as shown in Figure 8A.

[115] To facilitate synchronous playback, a native domain clock such as the DAC clock of the group coordinator provides a reference or basis for the timing information distributed by the native domain timing server 865 (i.e., the native domain timing server 865e). The native domain timing server 865 implements a network protocol for clock synchronization such as network time protocol (NTP), simplified network time protocol (SNTP), real-time transport protocol (RTP), or precision time protocol (PTP), among others. The native domain timing server 865e distributes timing information according to one of these protocols to the native domain client 866h, thereby providing the playback device 102h with a signal representing the current clock time of the group coordinator.

[116] Each native group member can assess a timing differential between the master clock and a given group member's DAC clock to periodically adjust its respective DAC clock or adjust a sampling rate (depending on the selected corrective mechanism), with the adjustment ultimately achieving synchrony when audio is played back among devices with independently clocked DACs. In particular, this timing signal representing the time domain of the group coordinator is used by the sync 867 of each group member of the native domain synchrony group to keep audio playback of the group member in synchronization with audio playback by the group coordinator. In one implementation, the VCXO of the sync 867h uses the timing signal to "discipline" the DAC clock 860h of the playback device 102h by speeding it up or slowing it down to match the DAC clock 860e. In an alternative implementation, the AsyncSRC of the sync 867 uses the timing signal to cause the channel sink 863h to logically insert or remove samples from the audio stream in order for its playback rate to match that of the group coordinator. As noted above, this playback rate is set by the DAC clock 860e.
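To make the AsyncSRC-style correction concrete, the sketch below converts a measured clock offset into a bounded number of samples to insert or drop over the next correction interval. The sample rate, step bound, and sign convention are illustrative assumptions rather than the actual algorithm.

```python
# Illustrative sample-based correction derived from a measured clock offset.
SAMPLE_RATE_HZ = 44_100  # assumed output sample rate for this illustration


def samples_to_correct(offset_seconds: float, max_step: int = 4) -> int:
    """Map a clock offset to a small per-interval sample correction.

    Under the sign convention assumed here, a positive result means the
    channel sink should logically insert samples and a negative result means
    it should remove them; the bound keeps each adjustment inaudible.
    """
    raw = round(offset_seconds * SAMPLE_RATE_HZ)
    return max(-max_step, min(max_step, raw))
```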

[117] Certain components that exist in both devices but are not in use in this example are omitted from the block diagram. Generally, each playback device 102 can be assumed to have the same or similar components. Different playback devices 102 may implement similar functions using different hardware (e.g., newer and/or different chips).

[118] Turning now to Figure 8B, shown is a functional block diagram illustrating example timing distribution within a VLI group that includes the playback device 102e. In this example, the playback device 102h is in a native domain group with the playback device 102e. In this example, the VLI device 104 is the timing master. As such, the playback device 102e uses VLI domain synchronization to synchronize with the VLI device 104. However, the playback device 102h uses native domain synchronization to synchronize with the playback device 102e. At the same time, the native domain synchronization by the group coordinator (the playback device 102e) is derived from the VLI domain synchronization. As such, the playback devices 102e and 102h are ultimately synchronized to the VLI time domain of the VLI device 104.

[119] The playback devices 102 of the native media playback system are configured to synchronize to any of a number of clock sources. As such, the timing master can be either a VLI device 104 or a playback device 102 (e.g., a VLI receiver). The timing master is selected based on various factors. In an example hierarchy, the VLI sender is selected to be the timing master unless the VLI sender is battery-powered. In that case, to avoid the battery drain associated with operating as the timing master, an AC (wall-powered) VLI device 104 is selected as the timing master. If no AC-powered VLI device 104 is in the VLI group, then a playback device 102 is selected as the timing master.
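The example hierarchy above can be written out as a simple selection routine. In this hedged Python sketch, the fallback order is taken directly from the text; the GroupDevice fields and other names are illustrative assumptions.

```python
# Illustrative timing-master selection following the hierarchy described above.
from dataclasses import dataclass
from typing import List


@dataclass
class GroupDevice:
    device_id: str
    is_vli_device: bool      # True for a VLI device 104, False for a playback device 102
    is_sender: bool          # True for the VLI sender of the session
    battery_powered: bool


def select_timing_master(devices: List[GroupDevice]) -> GroupDevice:
    # Prefer the VLI sender unless it runs on battery.
    sender = next(d for d in devices if d.is_sender)
    if not sender.battery_powered:
        return sender
    # Otherwise prefer any wall-powered VLI device to avoid battery drain.
    for d in devices:
        if d.is_vli_device and not d.battery_powered:
            return d
    # Finally fall back to a playback device of the native system.
    return next(d for d in devices if not d.is_vli_device)
```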

[120] As shown in Figure 8B, the VLI domain timing server 869 of the VLI device 104 distributes timing information according to a network protocol for clock synchronization, thereby providing the playback device 102e with a signal representing the VLI time domain of the timing master. In some examples, the network protocol for clock synchronization used by the VLI domain timing server is the same protocol (e.g., NTP, SNTP, RTP, or PTP, among others) as used by the native domain timing server and client(s). However, even if they use the same protocol, the respective clocks of the VLI domain and the native domain are not synchronized, as the ultimate source of the clock in each domain is different.

[121] Ultimately, the signal representing the VLI time domain is one input to the audio input 862 of each playback device 102 in the VLI group. Another input to the audio input 862 of each playback device 102 in the VLI group (i.e., the VLI receivers) is the audio stream from the VLI sender, which in this example is the VLI device 104. However, the VLI sender and the VLI timing domain master need not be the same device, as noted above.

[122] The audio input 862e of the playback device 102e processes audio content from the VLI sender at a rate set by the CPU clock 859e and passes the audio content to the channel sink 863e, as shown in Figure 8B. Since the playback device 102e is also the native domain group coordinator of a native domain group that includes the playback device 102h as a group member, the audio input 862e distributes the audio to the channel sink 863 of each native domain group member (i.e., the channel sink 863h). Each channel sink 863 decodes the audio content, which is then converted to analog and outputted by the signal output device and digital-to-analog converter (SOD/DAC 864), as shown in Figure 8B.

[123] To facilitate synchronous playback, the audio input 862e of the playback device 102e also generates a skew signal representing the error between the CPU clock 859e and the clock of the VLI domain (which the audio input 862e receives from the VLI domain timing server 869). For instance, the skew signal may represent a number of audio samples that the CPU clock 859e is ahead of or behind the VLI domain clock. For instance, consecutive elements of the skew signal might be +3 samples, -2 samples, -1 sample, +2 samples.

[124] The virtual clock 870e uses the CPU clock 859e and the skew signal to derive a virtual clock signal synchronized to the VLI domain master clock, which becomes the master clock in the native domain. That is, in contrast to Figure 8A where the DAC clock 860 of the group coordinator is the native domain timing master, in this example, the native domain timing master is the virtual clock. Since the virtual clock is synchronized to the VLI domain master clock, the native timing domain becomes synchronized to the VLI timing domain when using the virtual clock.
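A simplified sketch of that derivation is shown below: the local CPU clock is corrected by the accumulated skew (expressed in samples against the VLI domain clock) to produce the virtual clock that can seed the native domain timing server. The sample rate, sign convention, and class shape are assumptions made for illustration.

```python
# Illustrative virtual clock disciplined by VLI-domain skew reports.
import time

SAMPLE_RATE_HZ = 44_100  # assumed audio sample rate for this illustration


class VirtualClock:
    def __init__(self, cpu_clock) -> None:
        self.cpu_clock = cpu_clock            # callable returning seconds
        self.accumulated_skew_samples = 0

    def on_skew(self, skew_samples: int) -> None:
        # e.g., +3, -2, -1, +2 samples per measurement, as in the text above.
        self.accumulated_skew_samples += skew_samples

    def now(self) -> float:
        # Offset the local clock by the accumulated skew, converted to time,
        # so the result tracks the VLI domain master clock.
        offset = self.accumulated_skew_samples / SAMPLE_RATE_HZ
        return self.cpu_clock() - offset


# Usage: seed the native domain timing information from virtual_clock.now().
virtual_clock = VirtualClock(time.monotonic)
virtual_clock.on_skew(+3)
```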

[125] To convert the virtual clock 870e into a native domain format, the virtual clock 870e is used to seed a native domain timing server 865e, which distributes native domain timing information to a native domain timing client 866e on the group coordinator itself, as well as to the native domain timing client(s) on each native domain group member (i.e., native domain timing client 866h). From that point, the sync 867 of each playback device 102 synchronizes the audio output of each playback device using the same native domain technique as described with respect to the group member (the playback device 102h) in the Figure 8A example. In other words, the path following the native domain timing client 866 on each playback device 102 is the same as the path following the native domain timing client 866 in the Figure 8A example. Yet, in the Figure 8B example, the master clock is ultimately the VLI domain timing master instead of the DAC clock of the native domain group coordinator.

[126] Figure 8C shows a functional block diagram illustrating a variation on the Figure 8B example in which the playback device 102e is the VLI domain timing master. As such, the playback device 102e implements a VLI domain time server 869e that distributes VLI domain timing information to the VLI sender, as well as any number of VLI receivers, including VLI devices 104 and playback devices 102.

[127] Further, like the Figure 8B example, in the Figure 8C example, the playback device 102h is in a native domain group with the playback device 102e. However, with the exception of the VLI domain timing originating with the VLI domain timing server 869e (instead of the VLI domain timing server 869 on the VLI device 104), all functionality is the same, as shown in Figure 8C.

[128] Turning now to Figure 9A, shown is a simplified functional block diagram illustrating example timing distribution within VLI and native domain groups. The VLI group includes VLI devices 104d and 104e as a VLI sender and receiver, respectively, as well as playback devices 102a, 102b, and 102g as VLI receivers. In addition to playback devices 102a, 102b, and 102g, the native domain group includes the playback device 102d, which was added to the native domain group using native domain signaling. As such, the playback device 102d is not a VLI receiver of the VLI group. The playback device 102a is the group coordinator of the native domain group.

[129] In the Figure 9A example, the playback device 102a is the VLI timing master. As such, the playback device 102a distributes VLI timing information to each VLI receiver in the VLI group, which includes the VLI devices 104d and 104e, as well as the playback devices 102b and 102g. The playback devices 102b and 102g operating as VLI receivers process the audio from the VLI sender and the VLI domain timing information from the VLI timing master in a similar manner as playback device 102e in the Figure 8B example. In this example, the VLI domain media playback system utilizes the PTP protocol, so VLI domain timing information is labeled PTP.

[130] The playback device 102a processes the audio from the VLI sender and the VLI domain timing information in a similar manner as playback device 102e in the Figure 8C example, as the VLI domain timing server is implemented in the playback device 102a. Moreover, the playback device 102a distributes native domain audio and timing information to the playback device 102d in a similar manner as the playback device 102e and playback device 102h in the Figure 8C example. In this example, the native domain media playback system utilizes the SNTP protocol, so native domain timing information is labeled SNTP.

[131] Figure 9B shows another simplified functional block diagram illustrating example timing distribution within VLI and native domain groups. The VLI group includes VLI devices 104b and 104c as a VLI receiver and sender, respectively, as well as playback devices 102a, 102b, and 102g as VLI receivers. In addition to playback devices 102a, 102b, and 102g, the native domain group includes the playback device 102d, which was added to the native domain group using native domain signaling. As such, the playback device 102d is not a VLI receiver of the VLI group. The playback device 102a is the group coordinator of the native domain group.

[132] In the Figure 9B example, the VLI device 104b is the VLI timing master. As such, the VLI device 104b distributes VLI timing information to each VLI receiver in the VLI group, which includes the VLI devices 104b and 104c, as well as the playback devices 102a, 102b, and 102g. The playback devices 102a, 102b, and 102g operating as VLI receivers process the audio from the VLI sender and the VLI domain timing information from the VLI timing master in a similar manner as playback device 102e in the Figure 8B example.

[133] Moreover, the playback device 102a distributes native domain audio and timing information to the playback device 102d in a similar manner as the playback device 102e and playback device 102h in the Figure 8B example. The "legacy" playback device 102d can be added to the native domain synchrony group using the native controller 326 (e.g., via playback zone region 543 (Figure 5B)). In this way, the native playback device 102d can synchronize playback with the VLI group even though the playback device 102d is not a VLI receiver. In other words, native domain group members can synchronize using native domain audio and timing information from a native domain group coordinator that is configured as a VLI receiver.

V. Example Techniques for Improved Interoperability Between VLI and Native Media Playback Systems

[134] As indicated above, to facilitate interoperability between the VLI media playback system and the native domain media playback system, a playback device 102 may convert VLI domain control signaling to native domain control signaling and vice versa. However, in some cases, there might not be an equivalent function or command in the VLI domain for each native domain command. Similarly, the VLI media playback system may implement certain operations differently than the native media playback system.

[135] To illustrate, in one example, the VLI media playback system and native media playback system implement mute in different ways. For instance, the VLI media playback system implements mute by lowering volume to a pre-determined inaudible volume level (e.g., -144 dB). In contrast, the native media playback system implements "true" mute, whereby a non-zero volume level may be retained while mute is enabled, with the mute state taking precedence over the group volume level. Within examples, either type of mute command mutes all grouped devices.

[136] In a first example, a VLI session starts with the native domain mute set to disabled. If the VLI sender sets the volume level to the pre-determined inaudible volume level using the VLI domain controller 327, the VLI sender transmits VLI commands to each VLI receiver setting the group volume level to the pre-determined inaudible volume level. The VLI device(s) 104 receiving this VLI domain command carry out the command literally, by setting their group volume level to the pre-determined inaudible volume level.

[137] However, if a given playback device 102 (operating as a VLI receiver) receives this VLI domain command, the playback device 102 does not carry out the command literally. Instead, the playback device 102 enables the native domain mute, stores the current native domain volume level, and sets the native domain volume to zero. Enabling the native domain mute has the side effect of also muting any playback device 102 in a native domain group with the given playback device 102 via the native domain control signaling described above. In addition, storing the current native domain volume level allows the playback device(s) 102 to re-establish the same pre-mute volume level when mute is disabled. Yet further, setting the native domain volume to zero causes the volume slider of any native domain controller 327 to match the volume control of the VLI domain control app 328.
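
As a loose illustration of the translation described in paragraphs [136] and [137], the sketch below shows a VLI receiver mapping the VLI-style "set group volume to an inaudible level" command onto native-style actions: enable true mute, remember the pre-mute volume, and zero the native volume. All names here are hypothetical, and the dB-to-native-volume mapping is a placeholder.

    INAUDIBLE_DB = -144  # example pre-determined inaudible level used by the VLI domain

    def level_db_to_native(level_db):
        # Placeholder mapping from a VLI dB level to a 0-100 native volume step.
        return max(0, min(100, int(100 + level_db)))

    class NativeReceiverState:
        """Hypothetical sketch of a native playback device acting as a VLI receiver."""

        def __init__(self, volume=30):
            self.volume = volume
            self.muted = False
            self.stored_volume = None

        def on_vli_group_volume(self, level_db):
            if level_db <= INAUDIBLE_DB:
                # Do not carry the command out literally; translate it instead.
                self.stored_volume = self.volume   # remember the pre-mute volume
                self.muted = True                  # enable true (native) mute
                self.volume = 0                    # keep native sliders consistent
            else:
                self.muted = False
                self.volume = level_db_to_native(level_db)

    device = NativeReceiverState(volume=30)
    device.on_vli_group_volume(INAUDIBLE_DB)
    print(device.muted, device.volume, device.stored_volume)  # -> True 0 30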

[138] If the given playback device 102 (operating as a VLI receiver) receives a native domain mute command, the playback device 102 does not carry out the command literally. Instead, the playback device 102 sends a proxied volume control command (as discussed in section III for instance) to set the volume level to the pre-determined inaudible volume level. Upon receiving this command, the VLI sender relays this command back to the playback device 102. In response, the playback device 102 enables the native domain mute, stores the current native domain volume level, and sets the native domain volume to zero, in the same manner as described above.

[139] While in the VLI domain mute state, the playback device 102 may receive VLI or native domain commands to change the group volume level to a non-zero level. If the playback device 102 receives a native domain command to set the volume to a given level (e.g., via a volume slider of the native controller 327 or a volume button on the playback device 102), the playback device 102 disables the native domain mute state and sets the native domain volume level to the given level. If the playback device 102 receives a VLI domain command to set the volume to a given level (e.g., via a volume slider of the VLI controller 428), the playback device 102 disables the native domain mute state and sets the native domain volume level to the given level.

[140] In a second example, a VLI session starts with the native domain mute set to enabled. As the VLI session starts, the VLI receivers adjust to the volume level set by the VLI sender. Unless this volume level is the pre-determined inaudible volume level, the playback device 102 disables the native domain mute and sets the native domain volume level to the given level. If the playback device 102 receives a native domain command to disable mute, the playback device 102 disables the native domain mute and keeps the native domain volume level at the volume level set by the VLI sender. If the playback device 102 receives a VLI domain command to set the group volume level to a non-zero volume level, the playback device 102 disables the native domain mute and sets the native domain volume level to the non-zero volume level.

[141] If the VLI session ends with the native domain mute enabled, the playback device 102 retains the current native domain mute state and native domain volume level.

VI. Example Methods

[142] Implementations 1000 and 1100 shown in Figures 10 and 11, respectively, present example embodiments of techniques described herein. These example embodiments can be implemented within an operating environment including, for example, the media playback system 100 of Figure 1, one or more of the playback device 200 of Figure 2, one or more of the control device 300 of Figure 3, one or more of the VLI devices of Figure 4, as well as other devices described herein and/or other suitable devices. Further, operations illustrated by way of example as being performed by a media playback system can be performed by any suitable device, such as a playback device or a control device of a media playback system. Implementations 1000 and 1100 may include one or more operations, functions, or actions as illustrated by one or more of the blocks shown in Figures 10 and 11. Although the blocks are illustrated in sequential order, these blocks may also be performed in parallel, and/or in a different order than those described herein. Also, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or removed based upon the desired implementation.

[143] In addition, for the implementations disclosed herein, the flowcharts show functionality and operation of one possible implementation of present embodiments. In this regard, each block may represent a module, a segment, or a portion of program code, which includes one or more instructions executable by a processor for implementing specific logical functions or steps in the process. The program code may be stored on any type of computer readable medium, for example, such as a storage device including a disk or hard drive. The computer readable medium may include non-transitory computer-readable media, for example, such as computer-readable media that store data for short periods of time like register memory, processor cache, and Random Access Memory (RAM). The computer readable medium may also include non-transitory media, such as secondary or persistent long term storage, like read only memory (ROM), optical or magnetic disks, or compact-disc read only memory (CD-ROM), for example. The computer readable media may also be any other volatile or non-volatile storage systems. The computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage device. In addition, for the implementations disclosed herein, each block may represent circuitry that is wired to perform the specific logical functions in the process.

a. Example Methods for VLI Group Control and Audio Distribution

[144] As discussed above, embodiments described herein involve VLI group control and audio distribution. Figure 10 illustrates an example implementation 1000 of a method by which a system distributes control and audio signals within VLI and native groups.

[145] At block 1002, the implementation 1000 involves receiving a command to play back audio content. For instance, a first VLI device 104 configured as a VLI sender of a VLI group may receive, via an input interface of the first VLI device, a command to play back audio content. To illustrate, referring back to Figure 7A, the VLI device 104 is configured as a VLI sender of a VLI group and may receive a command to play back audio content via the VLI controller 428.

[146] Alternatively, the VLI device 104 may receive the command to play back audio content via a VUI. In such examples, the VLI device 104 may receive, via a microphone array, a voice command to play back the audio content. The voice command may also select the devices in the VLI group by referring to certain VLI devices 104 or native playback devices 102 by reference to their name (e.g., a zone name, such as "Kitchen" or "Living Room").

[147] In some examples, the command to play back audio content may also create a VLI session. Creating a VLI session may involve selecting one or more VLI devices 104 and/or native playback devices 102 and selecting audio content for the VLI group to play back. In some implementations, the VLI device 104 receiving the command to play back audio content becomes the VLI sender of the VLI group. That is, the particular VLI device 104 receiving the command starts the VLI session and accordingly becomes the VLI sender of the VLI group. Other selected devices are then configured as VLI receivers of the VLI group.

[148] At block 1004, the implementation 1000 involves streaming the audio content from one or more servers. For example, the first VLI device 104 may stream, via a network interface of the first VLI device, the audio content from one or more servers, such as one or more servers of a streaming audio service or from an audio server on the VLI device 104 itself.

[149] At block 1006, the implementation 1000 involves sending a VLI domain audio stream representing the streamed audio content to one or more VLI receivers of the VLI group. For instance, the VLI device 104 may send, via the network interface of the first VLI device, a VLI domain audio stream representing the streamed audio content to one or more VLI receivers of the VLI group. To illustrate, in Figure 7A, the VLI device 104, as the VLI sender, sends a VLI domain audio stream to the playback device 102e. As another example, in Figure 7D, the VLI device 104a sends a VLI domain audio stream to playback devices 102e and 102h.

[150] At block 1008, the implementation 1000 involves receiving the VLI domain audio stream representing the streamed audio content. For example, a first native playback device configured as a first VLI receiver of the VLI group may receive, via a network interface of the first native playback device, the VLI domain audio stream representing the streamed audio content.

[151] At block 1010, the implementation 1000 involves converting the VLI domain audio stream to a native domain audio stream. For instance, the first native playback device may convert, via one or more processors, the VLI domain audio stream from the first VLI device to a native domain audio stream. To illustrate, referring back to Figure 7A, the playback device 102e may convert the VLI domain audio stream from the VLI device 104. The playback device 102e may perform such a conversion using the virtual line-in 750e, among other examples.

[152] At block 1012, the implementation 1000 involves decoding the native domain audio stream. For example, the first native playback device may decode, via the one or more processors, the native domain audio stream. By way of example, referring back to Figure 7A, the playback device 102e may decode the native domain audio stream. As one example, the playback device 102e may perform such decoding using the channel sink 863e.

[153] At block 1014, the implementation 1000 involves playing back the decoded native domain audio stream. The first native playback device may play back the decoded native domain audio stream via an audio stage that includes one or more amplifiers (e.g., audio amplifier(s) 220 of the playback device 102 shown in Figure 2) that drive one or more speakers (e.g., speaker(s) 222 shown in Figure 2).

[154] The first native playback device may perform additional functions in various roles as a VLI receiver and/or native domain group coordinator. For instance, the first native playback device may extract metadata associated with the streamed audio content from the VLI domain audio stream. As a VLI receiver, the first native playback device may use this extracted metadata to provide information about the audio content in the VLI domain audio stream to VLI control interfaces.

[155] As the native domain group coordinator, the first native playback device may send, or otherwise distribute, native domain signaling representing the extracted metadata to one or more native domain control devices. For instance, referring back to Figure 7A, the playback device 102e may send native domain signaling representing the extracted metadata to the control device 103. Upon receiving such native domain signaling, the control device 103 may update the native controller 327 to display an indication of the metadata. For example, the control device 103 may update control interfaces such as controller interfaces 526a and 526b in Figures 5A and 5B, respectively, to provide information on the currently playing audio content by way of the extracted metadata.

[156] In some instances, the native control device 103 sends native domain playback commands to the first native playback device when user input is provided to the native control device 103 (e.g., via the native controller 327). When the first native playback device receives a native domain playback command, the first native playback device might not carry out that command. Instead, the first native playback device may convert the native domain playback command to a corresponding VLI domain instruction and send the VLI domain instruction to the first VLI device. For instance, referring again to Figure 7A, when the playback device 102e receives a native domain playback command from the native control device 103, the playback device 102e converts that native domain playback command to a corresponding VLI domain instruction and sends the VLI domain instruction to the VLI device 104 (i.e., the VLI sender). Then, the VLI device 104, as the VLI sender, based on the VLI domain instruction, causes the VLI group (including the first native playback device as a VLI receiver) to carry out a VLI domain playback command corresponding to the native domain playback command. As noted above, example playback commands in both the VLI and native domains include transport control (e.g., play/pause, skip forward/backward, fast forward/rewind, stop, etc.) and volume control (e.g., volume up/down and mute).

[157] In an example, the native domain playback command is a particular native domain playback command that, when carried out by the VLI group, causes the VLI group to play back a second audio track. Such a command might be a skip forward or backward, or a command to play the second audio track (e.g., a selection of the second audio track from a queue or playlist). In such an example, the first VLI device (e.g., VLI device 104) may carry out the VLI domain playback command corresponding to the native domain playback command by streaming, via the network interface of the first VLI device, the second audio track from the one or more servers and sending, via the network interface of the first VLI device, a VLI domain audio stream representing the second audio track to the first native playback device and the other VLI receivers of the VLI group.

[158] The first native playback device (as a VLI receiver) then receives the VLI domain audio stream representing the second audio track. Then, the first native playback device converts the VLI domain audio stream representing the second audio track to a native domain audio stream representing the second audio track. The first native playback device then handles the converted audio stream as it would any native domain audio stream. As described above, the first native playback device decodes the native domain audio stream representing the second audio track and plays back the decoded native domain audio stream via its audio stage.

[159] In another example, the native domain playback command is a particular native domain playback command that, when carried out by the VLI group, causes the VLI group to adjust volume (e.g., a volume up/down or mute command). In this example, the first VLI device (e.g., VLI device 104) may carry out the VLI domain playback command corresponding to the native domain playback command by sending the VLI domain playback command to one or more VLI receivers of the VLI group. The VLI receivers (e.g., the first native playback device) receive this VLI domain playback command and adjust volume according to the VLI domain playback command.

[160] In some examples, the VLI group includes a second VLI receiver, which may be a second VLI device or a second native playback device. For instance, the playback device 102b in Figure 9A is an example of a second native playback device configured as a second VLI receiver and the VLI device 104e is an example of a second VLI device configured as a second VLI receiver. In each case, the second VLI receiver receives the VLI domain audio stream representing the streamed audio content from the VLI sender. The second native playback device converts the VLI domain audio stream to the native domain audio stream and then decodes and plays back the native domain audio stream. In contrast, the second VLI device decodes and plays back the VLI domain audio stream. In each case, the playback is in synchrony with the other VLI receivers.

[161] When the first native playback device is a native domain group coordinator of a native domain synchrony group that includes a second native playback device configured as a native domain group member, the first native playback device may perform certain functions to facilitate concurrent synchronous playback by the VLI group and the native domain synchrony group. For instance, the native domain group coordinator may distribute the native domain audio stream to one or more native domain group members for playback by the one or more native domain group members. In addition, the native domain group coordinator may convert the VLI domain playback commands to corresponding native domain playback commands and distribute the native domain playback command to the native domain group member(s) of the native domain synchrony group. Since the native domain group member(s) are not part of the VLI group, such functions facilitate the native domain group member(s) performing playback in the same way as the VLI group members. As an example, in Figure 7D, the playback device 102e is a native domain group coordinator of a native domain synchrony group that includes the playback device 102f as a group member.

[162] Within examples, the first native playback device need not be the native domain group coordinator and a second native playback device can instead be configured as the native domain group coordinator. For instance, referring to Figure 7C, the playback device 102h is configured as the group coordinator of a native domain synchrony group that includes the playback device 102f as a native domain group member. Notably, the playback device 102h is not configured as a VLI receiver. The playback device 102h receives the native domain audio stream from the playback device 102e, which is configured as a VLI receiver.

[163] In further examples, the first VLI device is configured as a VLI receiver in addition to a VLI sender. As a VLI receiver, the first VLI device plays back the audio content in synchrony with the other VLI receivers. In particular, the first VLI device decodes the VLI domain audio stream and plays back the decoded VLI domain audio stream in synchrony with the first native playback device via an audio stage of the first VLI device.

b. Example Methods for VLI Group Timing

[164] As discussed above, embodiments described herein involve VLI group timing to facilitate synchronization of audio playback with native playback devices. Figure 11 illustrates an example implementation 1100 of a method by which a system distributes control and audio signals within VLI and native groups.

[165] At block 1102, the implementation 1100 involves receiving a VLI domain audio stream representing audio content. For instance, a first native playback device configured as a VLI receiver of a VLI group may receive a VLI domain audio stream from a VLI sender of the VLI group. An example first native playback device is the playback device 102e shown in Figures 8B and 8C and an example VLI sender is the VLI device 104 also shown in Figures 8B and 8C.

[166] The VLI sender may receive one or more commands to play back the audio content (and possibly to start a VLI session and to form a VLI group that includes the first native playback device) via a VUI control interface. Based on these command(s), the VLI sender streams the audio content from one or more servers and sends a VLI domain audio stream representing the audio content to one or more VLI receivers of the VLI group. The first native playback device, as a VLI receiver of the VLI group, receives this VLI domain audio stream.

[167] At block 1104, the implementation 1100 involves converting the VLI domain audio stream to a native domain audio stream. For instance, the first native playback device may convert, via one or more processors, the VLI domain audio stream from the first VLI device to a native domain audio stream. To illustrate, referring to Figure 8B, the playback device 102e may convert the VLI domain audio stream from the VLI device 104 to the native domain audio stream at a rate set by a device clock of the first native playback device. Example device clocks include the CPU clock 859e and DAC clock 860e. The playback device 102e may perform such a conversion using the virtual line-in 862e, among other examples.

[168] At block 1106, the implementation 1100 involves receiving a VLI domain clock signal. In some implementations, the first native playback device receives the VLI domain clock signal from a VLI domain timing server on a VLI device 104 such as the VLI sender. An example of this configuration is shown in Figure 8B. Alternatively, the first native playback device receives the VLI domain clock signal from a VLI domain timing server on the first native playback device. An example of this implementation is shown in Figure 8C.

[169] Within examples, the VLI group may select a VLI domain timing master to implement the VLI domain timing server. In an example implementation, the VLI source is selected as the VLI domain timing master when the VLI source is powered by an external AC power source. However, when the VLI source is powered by an internal battery, the VLI group may select an externally powered VLI receiver, such as the first native playback device or another VLI device 104 configured as a VLI receiver. This hierarchy avoids the relatively high power usage (and associated battery drain) that results from a battery-powered device implementing a VLI domain timing server and distributing the VLI domain clock signal to VLI group members.
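
The power-aware selection hierarchy in paragraph [169] can be pictured as a simple preference rule: prefer the VLI source when it is on external power, otherwise fall back to an externally powered VLI receiver. The sketch below is a simplified, hypothetical rendering of that rule; the device names and dictionary format are invented for illustration.

    def select_vli_timing_master(source, receivers):
        """Hypothetical sketch: pick the device that implements the VLI timing server.

        Each device is a dict such as {"name": "102e", "external_power": True}.
        """
        if source["external_power"]:
            return source
        for receiver in receivers:
            if receiver["external_power"]:
                return receiver
        return source  # last resort: the battery-powered source serves timing

    source = {"name": "104d", "external_power": False}
    receivers = [{"name": "102a", "external_power": True},
                 {"name": "104e", "external_power": False}]
    print(select_vli_timing_master(source, receivers)["name"])  # -> "102a"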

[170] At block 1108, the implementation 1100 involves deriving a virtual clock signal synchronized to the VLI domain clock signal. The first native playback device may derive the virtual clock signal based on a skew signal representing the error between the device clock and the VLI domain clock signal. For example, referring again to Figure 8B, the playback device 102e may generate a skew signal representing the error between the CPU clock 859e and the VLI domain clock signal from the VLI device 104 (which the audio input 862e receives from the VLI domain timing server 869). The skew signal may represent the number of audio samples that the CPU clock 859e is ahead of or behind the VLI domain clock. For instance, consecutive elements of the skew signal might be +3 samples, -2 samples, -1 sample, +2 samples.

[171] The virtual clock 870e uses the difference between the CPU clock 859e and the VLI domain clock signal represented by the skew signal to derive a virtual clock signal synchronized to the VLI domain master clock. For instance, the virtual clock 870e may discipline the device clock with the skew signal to generate the virtual clock signal synchronized to the VLI domain clock signal. As shown in Figure 8B, the virtual clock 870e disciplines the CPU clock 859e with the skew signal to generate the virtual clock signal.

[172] At block 1110, the implementation 1100 involves seeding a native domain timing server with the derived virtual clock signal. For example, the first native playback device may provide the derived virtual clock signal as a seed to a native domain timing server to generate a native domain clock signal synchronized to the VLI domain clock signal. To illustrate, referring again to Figure 8B, the playback device 102e provides the virtual clock 870e as a seed to the native domain timing server 865e.

[173] During native operation, by contrast, the seed to the native domain timing server 865e is the device clock (e.g., the CPU clock 859e or the DAC clock 860e). As such, when the VLI group is formed, the first native playback device may switch the seed of the native domain timing server from the device clock to the derived virtual clock signal. Likewise, when the VLI session ends (and the first native playback device is removed from the VLI group), the first native playback device switches the seed of the native domain timing server from the derived virtual clock signal back to the device clock.
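
Paragraph [173]'s switch of the native timing server's seed between the device clock and the derived virtual clock as VLI sessions start and end can be pictured as below. This is a minimal, hypothetical sketch; the class and function names are not taken from the patent.

    class NativeTimingServer:
        """Hypothetical sketch: the native timing server reports time from whichever
        clock source it is currently seeded with."""

        def __init__(self, clock_source):
            self.clock_source = clock_source   # default: the device clock

        def seed_with(self, clock_source):
            self.clock_source = clock_source

        def native_time(self):
            return self.clock_source()

    def device_clock():
        return 100.0        # stand-in for the CPU/DAC clock reading

    def virtual_clock():
        return 100.0015     # stand-in for the derived, VLI-synchronized virtual clock

    server = NativeTimingServer(device_clock)
    server.seed_with(virtual_clock)   # VLI group formed
    print(server.native_time())       # now follows the VLI-synchronized clock
    server.seed_with(device_clock)    # VLI session ended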

[174] At block 1112, the implementation 1100 involves decoding the native domain audio stream. For instance, the first native playback device may decode, via the one or more processors, the native domain audio stream at a rate set by the native domain clock signal. Since the native domain clock signal is synchronized to the VLI domain clock signal, the first native playback device decodes the native domain audio stream in synchrony with other VLI receivers. By way of example, referring again to Figure 8B, the channel sink 863e decodes the native domain audio stream from the audio input 862e.

[175] The first native playback device may decode the native domain audio stream at the rate set by the native domain clock signal in several different ways. In some examples, the first native playback device disciplines a digital-to-analog converter clock to the native domain clock signal synchronized to the VLI domain clock signal. In other words, as shown in Figure 8B, the VCXO of the sync 867e uses the native domain clock signal to "discipline" the DAC clock 860e of the playback device 102e by speeding it up or slowing it down to match the native domain clock signal. Alternatively, the first native playback device may insert or remove samples from the native domain audio stream using the native domain clock signal to cause a playback rate of the first native playback device to match the VLI domain clock signal. For example, the AsyncSRC of the sync 867e uses the native domain clock signal to cause the channel sink 863e to logically insert or remove samples from the audio stream in order for its playback rate to match that of other VLI group members.
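
The second technique in paragraph [175], logically inserting or removing samples so the playback rate matches the group, can be sketched as follows. This is a naive drop/duplicate approach for illustration only; a real asynchronous sample-rate converter would interpolate rather than repeat samples, and the sign convention here is an assumption.

    def adjust_samples(frame, skew_samples):
        """Hypothetical sketch: drop samples when the device runs behind the group
        clock (negative skew) or duplicate samples when it runs ahead (positive
        skew), so the frame consumes the intended amount of playback time."""
        if skew_samples == 0:
            return list(frame)
        if skew_samples < 0:
            return list(frame)[:skew_samples]            # drop trailing samples
        return list(frame) + [frame[-1]] * skew_samples  # repeat the last sample

    frame = [0.1, 0.2, 0.3, 0.4]
    print(adjust_samples(frame, -1))  # -> [0.1, 0.2, 0.3]
    print(adjust_samples(frame, +2))  # -> [0.1, 0.2, 0.3, 0.4, 0.4, 0.4]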

[176] At block 1114, the implementation 1100 involves playing back the decoded native domain audio stream. The first native playback device may play back the decoded native domain audio stream via an audio stage that includes one or more amplifiers (e.g., audio amplifier(s) 220 of the playback device 102 shown in Figure 2) that drive one or more speakers (e.g., speaker(s) 222 shown in Figure 2).

[177] The first native playback device may also be configured as a native domain group coordinator of a native domain synchrony group. In such examples, if the native domain synchrony group includes one or more native domain group members, the first native playback device distributes, to the one or more native domain group members, the native domain audio stream and the native domain clock signal synchronized to the VLI domain clock signal. An example of this is shown in Figure 8B, where the playback device 102e distributes the native domain audio stream and the native domain clock signal to the playback device 102h.

[178] A second native playback device configured as a native domain group member of the native domain synchrony group may perform functions to facilitate synchronous playback with the native domain group coordinator. For instance, the second native playback device may receive the native domain audio stream and the native domain clock signal. The second native playback device may then decode the native domain audio stream at a rate set by the native domain clock signal and play back the decoded native domain audio stream in synchrony with the first native playback device.

[179] In further examples, a second native playback device may be configured as a second VLI receiver of the VLI group. In such examples, the second native playback device performs similar functions as the first native playback device. That is, as the second VLI receiver, the second native playback device receives the VLI domain audio stream and converts the VLI domain audio stream to a second native domain audio stream at a rate set by a device clock of the second native playback device. The second native playback device also receives the VLI domain clock signal from the VLI domain timing server, derives its own virtual clock signal synchronized to the VLI domain clock signal, and seeds its native domain timing server with that virtual clock signal. Then, the second native playback device decodes the native domain audio stream at a rate set by the second native domain clock signal and plays back the decoded native domain audio stream in synchrony with the first native playback device. An example of a second native playback device configured as a VLI receiver is the playback device 102b of Figure 9A.

[180] In some examples, the VLI group includes a second VLI device configured as a VLI receiver. For instance, referring still to Figure 9A, the VLI device 104e is configured as a VLI receiver of the VLI group. As the VLI receiver of the VLI group, the VLI device 104e receives the VLI domain audio stream representing the streamed audio content and the VLI domain clock signal. The VLI device 104e decodes the VLI domain audio stream at the rate set by the VLI domain clock signal and plays back the decoded VLI domain audio stream in synchrony with the other VLI receivers.

[181] In some cases, the first VLI device is configured as a VLI receiver in addition to the VLI sender. In such examples, to facilitate synchronous playback with the other VLI receivers, the first VLI device decodes the VLI domain audio stream at the rate set by the VLI domain clock signal and plays back the decoded VLI domain audio stream in synchrony with the other VLI receivers. The first VLI device may receive the VLI domain clock signal from the first native playback device (as illustrated in Figure 9A) or an internal VLI domain timing server (as demonstrated in Figure 9B).

c. Example Methods for VLI and Native Media Playback Interoperability

[182] As noted above, to facilitate interoperability between the VLI media playback system and the native domain media playback system, a native playback device 102 may convert VLI domain control signaling to native domain control signaling and vice versa. However, in some cases, an equivalent function or command in the VLI domain might not exist for each native domain command. Similarly, an equivalent function or command in the native domain might not exist for each VLI domain command. To address such issues, example techniques involve translating VLI commands, such as mute, into functionally different native domain commands, which have the same or similar apparent effect as the VLI domain commands.

[183] Figure 12 illustrates an example implementation 1200 of a method by which a system translates between two different implementations of mute. In this example, each of the VLI and native domain media playback systems performs different functions to achieve a similar result (i.e., muting playback). In particular, in this example, the native media playback system implements "true" mute while the VLI media playback system implements mute using a pre-defined inaudible volume level.

[184] At block 1202, the implementation 1200 involves receiving user input to perform a mute operation. Similar to other example playback commands described herein, example VLI and native domain media playback systems may receive user input representing a mute command through several different interfaces, such as a GUI (e.g., via a mute button or other mute control), a VUI (e.g., via a voice command such as "Mute playback"), or a dedicated physical mute button or other mute control on a VLI device 104 or a playback device 102. In this example, the user input to perform a mute operation is received via a VLI device 104, which is configured as a VLI sender of a VLI group. For instance, referring back to Figure 7A, the VLI device 104 is configured as a VLI sender of a VLI group that includes the native playback device 102e as a VLI receiver. As an example, the VLI device 104 may receive a user input representing a mute command via the VLI controller 428.

[185] As noted above, in this example, the VLI media playback system implements mute by setting a volume level to a pre-defined inaudible volume level via a VLI domain mute command. Other media playback systems, such as the native media playback system, may implement mute operations using different functions. For instance, the native media playback system may implement a mute command that does not affect the current volume level, but instead truly mutes playback without necessarily changing the current volume level.

[186] At block 1204, the implementation 1200 involves sending the first VLI domain mute command. Similar to other VLI commands described herein, the VLI device 104 that is configured as the VLI sender of the VLI group may send the first VLI domain mute command to one or more VLI receivers of the VLI group. For instance, referring again to Figure 7A, the VLI device 104 is configured as a VLI sender of a VLI group, while the playback device 102e is configured as a VLI receiver of the VLI group. When carrying out a VLI domain mute, the VLI device 104 may send a first VLI domain mute command to the VLI receivers (which in this example is the playback device 102e) as part of the control signaling in the VLI domain.

[187] Distribution of the first VLI domain mute command to the VLI receivers causes each of the VLI receivers to carry out the mute operation. VLI devices 104 configured as VLI receivers of the VLI group carry out the command literally by carrying out the first VLI domain mute command to set the volume level to a pre-defined inaudible volume level. However, any native playback devices 102 configured as VLI receivers of the VLI group do not carry out the command literally but instead translate the first VLI domain mute command to one or more corresponding native domain mute commands.

[188] At block 1206, the implementation 1200 involves receiving the first VLI domain mute command. For instance, a first native playback device 102 configured as a VLI receiver may receive the first VLI domain mute command from the VLI sender. Referring back to Figure 7A, continuing the example above, the native playback device 102e may receive a VLI domain mute command from the VLI device 104. However, as noted above, the native playback device 102e does not carry out the VLI domain mute command literally.

[189] At block 1208, the implementation 1200 involves translating the first VLI domain mute command to one or more corresponding first native domain mute commands. More particularly, instead of performing the literal playback function to effect mute (i.e., setting volume level to a pre-defined inaudible volume level), the native playback device 102 instead translates the VLI domain mute command to one or more corresponding first native domain mute commands. These native domain mute commands correspond to different functions, but when carried out lead to the same apparent result (i.e., muting of playback).

[190] To illustrate, in an example, the one or more corresponding first native domain mute commands include enabling a native domain mute state on the first native domain playback device. This is a true mute state and does not in and of itself change the currently set volume level of the native playback device 102. Yet, this mute state has the expected effect of mute.

[191] Since the VLI domain mute command changes volume to the pre-defined inaudible volume level, to avoid an apparent mismatch between VLI domain and native domain volume levels, the one or more corresponding first native domain mute commands also include setting the native domain volume level to zero. This operation maintains consistency between the volume settings of the native playback device 102 and the VLI device 104. In other words, referring to Figure 7A, when a user looks at respective volume controls on the VLI controller 428 and the native controller 327, the volume levels of the VLI group appear to be identical.

[192] Further, given the native media playback system's implementation of true mute, a user of the native domain media playback system may expect that the volume level of the native playback device 102 prior to mute will be restored if the VLI group is un-muted. To enable this capability, the one or more corresponding first native domain mute commands may include storing a current native domain volume level of the first native domain playback device before setting the native domain volume level to zero. Then, under appropriate circumstances, the native domain volume level can be restored from the stored level.

[193] At block 1210, the implementation 1200 involves carrying out the one or more corresponding first native domain mute commands to perform the mute operation on the first VLI receiver. For instance, referring to Figure 7A, based on receiving the VLI domain mute command, the first playback device 102e may (i) enable a native domain mute state, (ii) store a current native domain volume level of the first playback device 102e, and (iii) set the native domain volume level to zero.

[194] Continuing the example above, while the VLI group is muted, the first native playback device 102 may receive a native domain un-mute command. In some examples, receiving such a command may involve receiving, via the network interface of the first native playback device from a native domain control app 327, one or more messages representing the native domain un-mute command. Alternatively, receiving such a command may involve receiving, via physical controls on the first native playback device (e.g., the user interface(s) 226), user input representing the native domain un-mute command.

[195] Based on receiving the native domain un-mute command, the first native playback device disables the native domain mute state and adjusts the volume level of the first native playback device to the stored native domain volume level, thereby restoring the volume level to the level prior to mute. To effect un-mute on the other VLI receivers, the first native playback device translates the native domain un-mute command to a VLI domain volume adjust command indicating the stored native domain volume level of the first native domain playback device, and sends the VLI domain volume adjust command to the VLI sender of the VLI group. Then, the VLI sender distributes the VLI domain volume adjust command to the VLI receivers of the VLI group.
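
Paragraph [195]'s translation of a native un-mute into a VLI volume adjustment might look roughly like the following. The function names and the message format are invented for illustration and are not the systems' actual interfaces.

    def on_native_unmute(stored_volume, send_to_vli_sender):
        """Hypothetical sketch: restore the pre-mute volume locally and relay a
        VLI domain volume-adjust command carrying the stored level to the VLI
        sender, which redistributes it to the remaining VLI receivers."""
        native_mute_enabled = False                      # disable true mute
        native_volume = stored_volume                    # restore the pre-mute level
        send_to_vli_sender({"type": "vli_volume_adjust", "level": native_volume})
        return native_mute_enabled, native_volume

    # Example usage with a stand-in transport that just prints the relayed command:
    print(on_native_unmute(30, send_to_vli_sender=print))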

[196] In some cases, the first native playback device 102 is configured as a native domain group coordinator of a native domain synchrony group that includes one or more second native domain playback devices. For instance, referring to Figure 7D, the native playback device 102e is a group coordinator for a native domain synchrony group that includes the playback device 102f. In such cases, based on receiving the native domain un-mute command, the first native playback device 102e sends, to the one or more second native domain playback devices, one or more commands to disable the native domain mute state on the one or more second native domain playback devices and adjust the volume level of the one or more second native domain playback devices to the stored native domain volume level. These commands have the effect of un-muting the native domain playback devices in the synchrony group so as to maintain synchrony of operation between the VLI group and the native domain synchrony group.

[197] Alternatively, instead of receiving a native domain un-mute command, a first native playback device 102 may receive a VLI domain volume adjust command from the VLI sender while the VLI group is muted. Based on such a command, the first native playback device 102 disables the native domain mute state, and adjusts volume level of the first native playback device 102 to a volume level indicated by the VLI domain volume adjust command. In this case, the stored volume is not restored because the volume level has been changed via the VLI domain volume adjust command.

[198] When a first native playback device 102 is configured as the native domain group coordinator of the native domain synchrony group that includes one or more second native domain playback devices 102, the first native playback device 102 also causes the one or more second native domain playback devices 102 to adjust volume when receiving a VLI domain volume adjust command. That is, based on receiving a VLI domain volume adjust command, the first native playback device 102 sends, to the one or more second native domain playback devices 102, one or more commands to disable the native domain mute state on the one or more second native domain playback devices and to adjust volume level of the one or more second native domain playback devices to the volume level indicated by the VLI domain volume adjust command.

[199] To illustrate, referring to Figure 7D, when the native playback device 102e receives a VLI domain volume adjust command from the VLI device 104a, the native playback device 102e sends one or more native domain commands to the native playback device 102f to disable the native domain mute state on the native playback device 102f and adjust the volume level of the native playback device 102f to the volume level indicated by the VLI domain volume adjust command.

[200] In yet another example, while the VLI group is muted, the first native playback device 102 may receive, via an input interface of the first native playback device 102, a native domain volume adjust command. The first native playback device 102 may receive such a command via a native controller 327. The first native playback device 102 may also receive a native domain volume adjust command via a user interface 226 (Figure 2) on a native playback device 102, which may include physical (e.g., capacitive) buttons for volume up and volume down.

[201] In such cases, to maintain synchrony between the first native playback device 102 and other VLI receivers, the first native playback device 102 translates the native domain volume adjust command to a VLI domain volume adjust command and sends the VLI domain volume adjust command to the VLI sender of the VLI group. For instance, referring to Figure 7D, the native playback device 102e may receive, via the user interface(s) 226 on the playback device 102e, a native domain volume adjust command. The native playback device 102e translates the native domain volume adjust command to a VLI domain volume adjust command and sends the VLI domain volume adjust command to the VLI device 104a (which is the VLI sender).

[202] In some implementations, the first native playback device 102 disables the native domain mute state and adjusts the volume level of the first native playback device to a volume level indicated by the native domain volume adjust command before receiving a VLI domain volume adjust command back from the VLI sender. Alternatively, the first native playback device 102 does not alter its mute state or volume level until receiving a corresponding VLI domain command from the VLI sender. In other words, the first native playback device 102 can receive user input, but does not carry out the command(s) corresponding to the user input immediately and instead passes the command(s) back up to the VLI sender. This implementation may avoid issues inherent in distributed systems, such as race conditions.

[203] In either case, if the first native playback device 102 is configured as a native domain group coordinator of a native domain synchrony group that includes one or more second native domain playback devices 102, the first native playback device 102 sends, to the one or more second native domain playback devices of the native domain synchrony group, one or more commands to disable the native domain mute state on the one or more second native domain playback devices and adjust volume level of the one or more second native domain playback devices to the volume level indicated by the native domain volume adjust command. Such functionality keeps the native domain playback devices 102 that are not in the VLI group (but are in a native domain synchrony group with a VLI receiver) in a similar state as the VLI devices 104 in the VLI group.

[204] In some cases, a native domain mute state is enabled on a first native playback device 102 when the first native playback device 102 receives a command to start a VLI session with a first VLI device 104. Based on receiving this command, the first native playback device 102 joins the VLI group as the first VLI receiver. The first native playback device 102 also disables the native domain mute state and may also set the native domain volume level to the volume level of the first VLI device 104. As a native domain group coordinator of a native domain synchrony group, the first native playback device 102 also causes native domain group members to perform the same or similar functions. This places the synchronized native playback devices 102 in a similar volume state as the VLI sender.

[205] When volume level is adjusted using either VLI domain or native domain volume adjust or mute commands, the first native playback device 102 updates the native domain controller(s) 327 on native control devices 103 to reflect the current volume level. For instance, in an example, the first native playback device 102 sends, via the network interface of the first native playback device to one or more native domain control applications on one or more native control devices 103, status information indicating the native domain volume level. The status information causes respective volume sliders of the one or more native controllers 327 to indicate the native domain volume level.

[206] For instance, referring to Figures 5A and 6, each of the example native control interface 526a and the VLI control interface 626 includes a respective volume slider, as shown. When volume adjustments are made in the VLI domain, the VLI sender updates the VLI control interface 626 to indicate the current volume level. Upon receiving the VLI domain volume adjustment command from the VLI sender, the native domain group coordinator distributes status information to the native control device(s) 103, which update the native control interface 526a to indicate the current volume level (and thereby keep the respective VLI and native domain control applications consistent).

[207] When volume adjustments are made in the native domain, the native domain group coordinator distributes status information to the native control device(s) 103 to indicate the current volume level. The native domain group coordinator also translates the native domain volume adjustment command to a corresponding VLI domain volume adjustment command and sends the VLI domain volume adjustment command to the VLI sender. The VLI sender carries out the VLI domain volume adjustment command and also updates the VLI control application 428 to indicate the current volume level. In this manner, the respective volume sliders in each of the native control interface 526a and the VLI control interface 626 reflect the current volume level.

d. Example Methods for VLI and Native Media Playback Interoperability

[208] Figure 13 illustrates an example implementation 1300 of a method by which a system translates between two different implementations of various playback commands. In this example, each of the VLI and native domain media playback systems performs different playback functions to achieve a similar playback operation. That is, like the example mute operation described in connection with Figure 12, example VLI and native domain media playback systems may implement playback operations like play/pause, skip, fast-forward, and rewind using different functions to achieve similar results. To maintain interoperability, a native playback device 102 may translate between the VLI domain and native domain playback commands that instruct the VLI and native domain devices to perform their respective playback functions.

[209] At block 1302, the implementation 1300 involves receiving user input to perform a particular playback operation on the VLI group. In this example, the particular playback operation corresponds to a VLI domain playback command to perform one or more first playback functions. Similar to other example playback commands described herein, example VLI and native domain media playback systems may receive user input representing a particular playback operation via several different interfaces, such as a GUI, a VUI, or a dedicated physical button or other control on a VLI device 104 or a playback device 102.

[210] At block 1304, the implementation 1300 involves sending the VLI domain playback command to perform the one or more first playback functions. Similar to other VLI domain playback commands described herein, the VLI device 104 that is configured as the VLI sender of the VLI group may send the VLI domain playback command to one or more VLI receivers of the VLI group. For instance, referring back to Figure 7D, the VLI device 104a may send a VLI domain playback command to the playback device 102e, the playback device 102h, and the VLI devices 104, as each of these devices is configured as a VLI receiver in the Figure 7D example.

[211] Distribution of the VLI domain playback command to the VLI receivers causes each of the VLI receivers to carry out the particular playback operation. Referring still to Figure 7D, the VLI devices 104 configured as VLI receivers of the VLI group carry out the command literally by performing the one or more first playback functions. However, the native playback devices 102e and 102h configured as VLI receivers of the VLI group do not carry out the command literally but instead carry out one or more second playback functions to perform the particular playback operation.

[212] At block 1306, the implementation 1300 involves receiving the VLI domain playback command to perform one or more first playback functions. For instance, referring to Figure 7D, the native playback device 102e configured as a VLI receiver may receive the VLI domain playback command to perform one or more first playback functions from the VLI device 104a (i.e., the VLI sender). As noted above, the native playback device 102h may also receive the VLI domain playback command to perform one or more first playback functions from the VLI device 104a since the native playback device 102h is also configured as a VLI receiver of the VLI group.

[213] At block 1308, the implementation 1300 involves translating the VLI domain playback command to a corresponding native domain playback command to perform one or more second playback functions. Although the one or more first playback functions and the one or more second playback functions are different functions, when carried out by a VLI device 104 or native playback device 102 respectively, the playback functions each effect the same or similar particular playback operation. For instance, as described above with respect to Figure 12, the particular playback operation may be a mute or un-mute operation, and the VLI and native domain media playback systems may implement this operation with different playback functions.

[214] As another example, the particular playback operation may be a pause operation. In the VLI domain implementation of pause, example VLI devices 104 may cease to stream audio content. As such, when pausing, the one or more first playback functions may exclude streaming audio content. However, in an example native domain implementation of pause, the one or more second playback functions may include continuing to stream the audio content to a buffer while the playback is paused. This implementation may allow the native domain media playback system to begin playback more quickly when operating independently of the VLI domain media playback system (in exclusively the native domain). Yet, even though the VLI devices 104 and native playback devices 102 are performing different functions based on the same command, the VLI devices 104 and native playback devices 102 are both apparently paused from the user's perspective.
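
Paragraph [214]'s contrast between the two pause behaviours (the VLI implementation stops streaming; the native implementation keeps filling a local buffer) can be sketched as two small handlers on a receiver object. These are illustrative stubs with invented names, not the systems' actual interfaces.

    class Receiver:
        """Hypothetical sketch contrasting the two pause implementations."""

        def __init__(self):
            self.buffer = []
            self.playing = True
            self.streaming = True

        def vli_style_pause(self):
            # VLI-domain pause: stop playback and stop streaming entirely.
            self.playing = False
            self.streaming = False

        def native_style_pause(self, stream):
            # Native-domain pause: stop playback but keep buffering the stream
            # so that playback can resume quickly.
            self.playing = False
            self.buffer.extend(stream)

    r = Receiver()
    r.native_style_pause(stream=[b"chunk1", b"chunk2"])
    print(len(r.buffer), r.playing)  # -> 2 False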

[215] Continuing the pause example, in some examples, in the VLI domain implementation of pause, example VLI devices 104 may maintain their amplifier(s) in an active state. As such, when pausing, the one or more first playback functions may exclude disabling an amplifier. In contrast, in the native domain implementation of pause, the one or more second playback functions may include disabling the amplifier(s) of the native playback devices 102 or otherwise placing the amplifier(s) in an inactive state.

[216] In further examples, in some cases, the native and VLI domain media playback systems may implement common transport controls such as fast forward, rewind, and skip forward and backward using different defaults. For instance, in an example, a particular playback operation is a fast forward operation. To implement fast forward, the one or more first playback functions in the VLI domain may include jumping forward in the audio content by a first number of seconds (e.g., 3 seconds). However, the default fast forward command in the native domain might be to jump forward in the audio content by a second number of seconds (e.g., 5 seconds).

[217] To maintain interoperability, instead of translating a VLI domain fast forward command to the default native domain fast forward command, a native playback device 102 may translate the VLI domain fast forward command to a non-default native domain fast forward command to jump forward the same number of seconds as the VLI domain fast forward command. Alternatively, the native playback device 102 may translate the VLI domain fast forward command to the default native domain fast forward command and another native domain command to jump forward or backward in the audio content such that the combination of native domain jumps is equivalent to the VLI domain jump. In either case, the native playback device 102 maintains interoperability with the VLI domain media playback system by translating to non-default commands.
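
A small arithmetic sketch of this translation, using the hypothetical 3- and 5-second values from the example above:

    VLI_FF_SECONDS = 3      # example VLI domain fast forward jump
    NATIVE_FF_SECONDS = 5   # example native domain default fast forward jump

    def translate_fast_forward(vli_jump=VLI_FF_SECONDS, native_default=NATIVE_FF_SECONDS):
        """Return the two native domain options: a single non-default jump equal to the
        VLI domain jump, or the default jump plus a correcting jump (negative = backward)
        such that native_default + correction == vli_jump."""
        correction = vli_jump - native_default
        return {"non_default_jump": vli_jump,
                "default_plus_correction": (native_default, correction)}

    print(translate_fast_forward())
    # -> {'non_default_jump': 3, 'default_plus_correction': (5, -2)}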

[218] In other words, the native domain media playback system may implement two or more commands to perform similar operations and may choose the appropriate command based on context. That is, when not in a VLI group, native playback devices 102 implement default playback commands. However, when in a VLI group, the native playback devices 102 implement alternative playback commands to maintain compatibility with the VLI devices 104.

[219] At block 1310, the implementation 1300 involves carrying out the one or more second playback functions to perform the particular playback operation on the first VLI receiver. For instance, referring to Figure 7D, based on receiving the VLI domain playback command to perform one or more first playback functions, the first playback device 102e may instead perform one or more second playback functions, thereby performing the particular playback operation on the first VLI receiver.

[220] Other functionality of the native playback devices 102, the native control devices 103, and the VLI devices 104 described above in connection with the mute example of Figure 12 is similarly applicable to the example of Figure 13. For instance, a native playback device 102 operating as native domain group coordinator of a native domain synchrony group may distribute commands to perform the one or more second playback functions to native playback device(s) 102 operating as native domain group members of the native domain synchrony group. The native playback device 102 may also update native control devices 103 with updated status information after carrying out the one or more second playback functions. Yet further, as another example, the native playback device 102 may translate a native domain playback command to perform one or more second playback functions to a corresponding VLI domain playback command to perform one or more first playback functions and pass this VLI domain playback command back upstream to the VLI sender. Other examples are possible as well.

VII. Conclusion

[221] The description above discloses, among other things, various example systems, methods, apparatus, and articles of manufacture including, among other components, firmware and/or software executed on hardware. It is understood that such examples are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of the firmware, hardware, and/or software aspects or components can be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only way(s) to implement such systems, methods, apparatus, and/or articles of manufacture.

[222] (Feature 1) A method to be performed by one or more devices of a VLI group comprising a first native playback device configured as a first Virtual Line In (VLI) receiver of the VLI group, wherein the VLI group is configured to (i) receive via an input interface of a first VLI device, a command to play back audio content and (ii) send, via a network interface of the first VLI device, a VLI domain audio stream representing the audio content to one or more VLI receivers of the VLI group, and wherein the method comprises receiving, via a network interface of the first native playback device, a VLI domain audio stream representing the audio content; converting, via one or more processors, the VLI domain audio stream to a native domain audio stream at a rate set by a device clock of the first native playback device; receiving, from a VLI domain timing server, a VLI domain clock signal; deriving a virtual clock signal synchronized to the VLI domain clock signal; providing the derived virtual clock signal as a seed to a native domain timing server to generate a native domain clock signal synchronized to the VLI domain clock signal; decoding, via the one or more processors, the native domain audio stream at a rate set by the native domain clock signal; and playing back the decoded native domain audio stream via an audio stage of the first native playback device, the audio stage of the first native playback device comprising one or more amplifiers that drive one or more speakers.
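
A minimal, runnable sketch of the clocking chain in Feature 1 (including the seed switching described later in Features 7 and 8) is shown below. The class and function names are hypothetical stand-ins chosen for illustration and are not the actual device APIs.

    import time

    def make_virtual_clock(device_clock, vli_time_sample):
        """Derive a clock synchronized to the VLI domain clock: the local device clock
        plus the offset observed against a sample of the VLI domain clock."""
        offset = vli_time_sample - device_clock()
        return lambda: device_clock() + offset

    class NativeTimingServer:
        """Native domain timing server; the native domain clock signal is whatever
        seed it currently holds (the device clock, or the derived virtual clock)."""
        def __init__(self, seed):
            self._seed = seed
        def switch_seed(self, seed):
            self._seed = seed
        def now(self):
            return self._seed()

    device_clock = time.monotonic
    native_ts = NativeTimingServer(seed=device_clock)   # ordinary native operation

    # VLI group initiates playback: derive the virtual clock and re-seed (Feature 7).
    virtual_clock = make_virtual_clock(device_clock, vli_time_sample=1000.0)
    native_ts.switch_seed(virtual_clock)
    print("native clock while grouped:", native_ts.now())

    # VLI group is terminated: fall back to the device clock (Feature 8).
    native_ts.switch_seed(device_clock)
    print("native clock after group ends:", native_ts.now())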

[223] (Feature 2) The method of feature 1, further comprising selecting a VLI domain timing master to implement the VLI domain timing server, wherein the VLI source is selected as the VLI domain timing master when the VLI source is powered by an external AC power source; and wherein the first VLI receiver is selected as the VLI domain timing master when the VLI source is powered by an internal battery.

[224] (Feature 3) The method of feature 1, wherein the first native playback device is configured as a native domain group coordinator of a native domain synchrony group to perform further functions of the method comprising: distributing, to one or more native domain group members, (i) the native domain audio stream and (ii) the native domain clock signal synchronized to the VLI domain clock signal, and wherein a second native playback device is configured as a native domain group member of a native domain synchrony group to perform functions of the method comprising receiving the native domain audio stream and the native domain clock signal synchronized to the VLI domain clock signal; decoding, via one or more processors of the second native playback device, the native domain audio stream at a rate set by the native domain clock signal; and playing back the decoded native domain audio stream via an audio stage of the second native playback device in synchrony with the first native playback device, the audio stage of the second native playback device comprising one or more amplifiers that drive one or more speakers.

[225] (Feature 4) The method of feature 1, wherein deriving the virtual clock signal synchronized to the VLI domain clock signal comprises generating a skew signal representing error between the device clock of the first native playback device and the VLI domain clock signal; and generating the virtual clock signal synchronized to the VLI domain clock signal from the device clock and the skew signal.
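
The relationship in Feature 4 can be sketched as follows. The periodic sampling of the VLI domain clock and the low-pass smoothing of the error are assumptions made only for illustration; the feature itself only states that the skew signal represents the error between the device clock and the VLI domain clock.

    import time

    def make_skew_tracker(device_clock, alpha=0.1):
        """Maintain a skew signal (the error between the device clock and the VLI domain
        clock) and expose a virtual clock equal to the device clock plus that skew."""
        state = {"skew": 0.0, "initialized": False}

        def observe(vli_time):
            error = vli_time - device_clock()
            if not state["initialized"]:
                state["skew"], state["initialized"] = error, True
            else:
                # Smooth the raw error so jitter in individual samples is filtered out.
                state["skew"] += alpha * (error - state["skew"])

        def virtual_clock():
            return device_clock() + state["skew"]

        return observe, virtual_clock

    observe, virtual_clock = make_skew_tracker(time.monotonic)
    observe(5000.0)            # one sample of the VLI domain clock
    print(virtual_clock())     # device clock corrected by the tracked skew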

[226] (Feature 5) The method of feature 1, wherein decoding the native domain audio stream at the rate set by the native domain clock signal comprises disciplining a digital-to-audio converter clock to the native domain clock signal synchronized to the VLI domain clock signal.

[227] (Feature 6) The method of feature 1, wherein decoding the native domain audio stream at the rate set by the native domain clock signal comprises inserting or removing samples from the native domain audio stream using the native domain clock signal to cause a playback rate of the first native playback device to match the VLI domain clock signal.

[228] (Feature 7) The method of feature 1, wherein providing the derived virtual clock signal as the seed to the native domain timing server comprises switching the seed of the native domain timing server from the device clock to the derived virtual clock signal when the VLI group initiates playback.
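
The sample insertion or removal of Feature 6 can be pictured with the toy function below. A practical implementation would interpolate rather than duplicate or drop raw samples, and the drift values here are purely illustrative.

    def adjust_samples(frame, drift_samples):
        """Insert or remove samples so that the playback rate tracks the VLI domain clock.
        drift_samples > 0: the output is behind, so duplicate samples (insert).
        drift_samples < 0: the output is ahead, so drop samples (remove)."""
        frame = list(frame)
        if drift_samples > 0:
            frame.extend(frame[-1:] * drift_samples)   # repeat the last sample
        elif drift_samples < 0:
            frame = frame[:drift_samples]              # drop samples from the end
        return frame

    print(adjust_samples([0.1, 0.2, 0.3, 0.4], drift_samples=1))    # 5 samples
    print(adjust_samples([0.1, 0.2, 0.3, 0.4], drift_samples=-1))   # 3 samples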

[229] (Feature 8) The method of feature 7, further comprising switching the seed of the native domain timing server from the derived virtual clock signal to the device clock when the VLI group is terminated.

[230] (Feature 9) The method of feature 1, wherein a second native playback device is configured as a second VLI receiver of the VLI group to perform functions of the method comprising receiving, via a network interface of the second native playback device, the VLI domain audio stream representing the audio content; converting, via one or more processors, the VLI domain audio stream to a second native domain audio stream at a rate set by a device clock of the second native playback device; receiving, from the VLI domain timing server, the VLI domain clock signal; deriving a second virtual clock signal synchronized to the VLI domain clock signal; providing the derived second virtual clock signal as a seed to a native domain timing server of the second native playback device to generate a second native domain clock signal synchronized to the VLI domain clock signal; decoding, via the one or more processors, the native domain audio stream at a rate set by the second native domain clock signal; and playing back the decoded native domain audio stream in synchrony with the first native playback device via an audio stage of the second native playback device, the audio stage of the second native playback device comprising one or more amplifiers that drive one or more speakers.

[231] (Feature 10) The method of feature 1, wherein a second VLI device is configured as a second VLI receiver of the VLI group to perform functions of the method comprising receiving, via a network interface of the second VLI device, (i) the VLI domain audio stream representing the streamed audio content and (ii) the VLI domain clock signal; decoding the VLI domain audio stream at the rate set by the VLI domain clock signal; and playing back the decoded VLI domain audio stream in synchrony with the first native playback device via an audio stage of the second VLI device, the audio stage of the second VLI device comprising one or more amplifiers that drive one or more speakers.

[232] (Feature 11) The method of feature 1, wherein the input interface of the first VLI device comprises a touch-sensitive graphical display, and wherein receiving the command to play back audio content comprises: displaying, on the touch-sensitive graphical display, a user interface of a VLI control application, the user interface including controls to select audio content for playback and controls to select a playback device; and receiving, via the user interface of the VLI control application, input data representing selection of the audio content for playback and selection of the first native playback device, wherein selection of the audio content for playback on the first VLI device configures the first VLI device as the VLI source of the VLI group and wherein selection of the first native playback device causes the first VLI device to configure the first native playback device as the first VLI receiver of the VLI group.

[233] (Feature 12) The method of feature 1, wherein the first VLI device is configured as a second VLI receiver of the VLI group to perform functions of the method comprising: receiving the VLI domain clock signal; decoding the VLI domain audio stream at the rate set by the VLI domain clock signal; and playing back the decoded VLI domain audio stream in synchrony with the first native playback device via an audio stage of the first VLI device, the audio stage of the first VLI device comprising one or more amplifiers that drive one or more speakers.

[234] (Feature 13) The method of feature 12, wherein the input interface of the first VLI device comprises a microphone array, and wherein receiving the command to play back audio content comprises: receiving, via the microphone array, a voice command to play back the audio content on the first VLI device and the first native playback device; and in response to the voice command, configuring the first VLI device as the VLI source of the VLI group and the first native playback device as the first VLI receiver of the VLI group.

[235] (Feature 14) A system configured to perform the method of any of features 1-13.

[236] (Feature 15) A tangible, non-transitory computer-readable medium having stored therein instructions executable by one or more processors to cause a device to perform the method of any of features 1-13.

[237] (Feature 16) A playback device configured to perform the method of any of features 1-13.

[238] (Feature 17) A method to be performed by a first VLI device configured as a VLI source of a VLI group and a first native playback device configured as a first VLI receiver of the VLI group, the method comprising the first VLI device receiving, via an input interface of the first VLI device, a command to play back audio content; streaming, via a network interface of the first VLI device, the audio content from one or more servers; and sending, via the network interface of the first VLI device, a VLI domain audio stream representing the streamed audio content to one or more VLI receivers of the VLI group. The method further comprising the first native playback device receiving, via a network interface of the first native playback device, the VLI domain audio stream representing the streamed audio content; converting, via one or more processors, the VLI domain audio stream to a native domain audio stream; decoding, via the one or more processors, the native domain audio stream; and playing back the decoded native domain audio stream via an audio stage of the first native playback device, the audio stage of the first native playback device comprising one or more amplifiers that drive one or more speakers.

[239] (Feature 18) The method of feature 17, further comprising the first native playback device receiving metadata associated with the streamed audio content from (a) the VLI domain audio stream or (b) a VLI domain data stream and sending, via the network interface of the first native playback device to one or more native domain control devices, native domain signaling representing the extracted metadata. The method further comprising one or more native domain control devices receiving, via respective network interfaces of the one or more native domain control devices, the native domain signaling representing the received metadata and displaying, in respective native domain control applications, the received metadata.

[240] (Feature 19) The method of feature 17, further comprising while playing back a first audio track of the decoded native domain audio stream, receiving a native domain playback command (a) via the network interface of the first native playback device from a given native domain control device or (b) via a user interface of the first native playback device; converting the native domain playback command to a corresponding VLI domain instruction; and sending, via the network interface of the first native playback device, the VLI domain instruction to the first VLI device. The method further comprising the first VLI device, in response to the VLI domain instruction, causing, via the network interface of the first VLI device, the VLI group to carry out a VLI domain playback command corresponding to the native domain playback command.

[241] (Feature 20) The method of feature 19, wherein the native domain playback command is a particular native domain playback command that, when carried out by the VLI group, causes the VLI group to play back a second audio track, and wherein causing the VLI group to carry out the VLI domain playback command corresponding to the native domain playback command comprises: in response to the VLI domain instruction, streaming, via the network interface of the first VLI device, the second audio track from the one or more servers; and sending, via the network interface of the first VLI device, a VLI domain audio stream representing the second audio track to the first native playback device. The method further comprising the first native playback device receiving, via the network interface of the first native playback device, the VLI domain audio stream representing the second audio track; converting, via one or more processors, the VLI domain audio stream representing the second audio track to a native domain audio stream representing the second audio track; decoding the native domain audio stream representing the second audio track; and playing back, via the audio stage of the first native playback device, the decoded native domain audio stream representing the second audio track.

[242] (Feature 21) The method of feature 19, wherein the native domain playback command is a particular native domain playback command that, when carried out by the VLI group, causes the VLI group to adjust volume, and wherein causing the VLI group to carry out the VLI domain playback command corresponding to the native domain playback command comprises: in response to the VLI domain instruction, sending, via the network interface of the first VLI device, the VLI domain playback command to one or more VLI receivers of the VLI group. The method further comprising the first native playback device receiving, via the network interface of the first native playback device, the VLI domain playback command; and adjusting volume according to the VLI domain playback command.

[243] (Feature 22) The method of feature 21, further comprising the first native playback device converting, via one or more processors, the VLI domain playback command to a native domain playback command; and distributing the native domain playback command to a native domain group member of a native domain synchrony group.

[244] (Feature 23) The method of feature 17, further comprising a second native playback device receiving, via a network interface of the second native playback device, the VLI domain audio stream representing the streamed audio content; converting, via one or more processors, the VLI domain audio stream to the native domain audio stream; decoding the native domain audio stream; and playing back the decoded native domain audio stream in synchrony with the first native playback device via an audio stage of the second native playback device, the second native playback device comprising one or more amplifiers that drive one or more speakers.

[245] (Feature 24) The method of feature 23, further comprising the first native playback device distributing, via the network interface of the first native playback device, the native domain audio stream to one or more native domain group members. The method further comprising a third native playback device receiving, via a network interface of the third native playback device, the native domain audio stream; decoding the native domain audio stream; and playing back the decoded native domain audio stream in synchrony with the first native playback device and the second native playback device via an audio stage of the third native playback device, the audio stage of the third native playback device comprising one or more amplifiers that drive one or more speakers.

[246] (Feature 25) The method of feature 17, further comprising the first native playback device distributing, via the network interface of the first native playback device, the native domain audio stream to one or more native domain group members. The method further comprising a second native playback device receiving, via a network interface of the second native playback device, the native domain audio stream; decoding the native domain audio stream; and playing back the decoded native domain audio stream in synchrony with the first native playback device via an audio stage of the second native playback device, the audio stage of the second native playback device comprising one or more amplifiers that drive one or more speakers.

[247] (Feature 26) The method of feature 17, further comprising a second native playback device receiving, via a network interface of the second native playback device from the first native playback device, the native domain audio stream; decoding the native domain audio stream; and playing back the decoded native domain audio stream via an audio stage of the second native playback device, the audio stage of the second native playback device comprising one or more amplifiers that drive one or more speakers. The method further comprising a third native playback device receiving, via a network interface of the third native playback device from the second native playback device, the native domain audio stream; decoding the native domain audio stream; and playing back the decoded native domain audio stream in synchrony with the second native playback device via an audio stage of the third native playback device, the audio stage of the third native playback device comprising one or more amplifiers that drive one or more speakers.

[248] (Feature 27) The method of feature 17, further comprising a second VLI device receiving, via a network interface of the second VLI device, the VLI domain audio stream representing the streamed audio content; decoding the VLI domain audio stream; and playing back the decoded VLI domain audio stream in synchrony with the first native playback device via an audio stage of the second VLI device, the audio stage of the second VLI device comprising one or more amplifiers that drive one or more speakers.

[249] (Feature 28) The method of feature 17, wherein the input interface of the first VLI device comprises a touch-sensitive graphical display, and wherein receiving the command to play back audio content comprises: displaying, on the touch-sensitive graphical display, a user interface of a VLI control application, the user interface including controls to select audio content for playback and controls to select a playback device; and receiving, via the user interface of the VLI control application, input data representing selection of the audio content for playback and selection of the first native playback device, wherein selection of the audio content for playback on the first VLI device configures the first VLI device as the VLI source of the VLI group and wherein selection of the first native playback device causes the first VLI device to configure the first native playback device as the first VLI receiver of the VLI group.

[250] (Feature 29) The method of feature 17, further comprising the first VLI device decoding the VLI domain audio stream; and playing back the decoded VLI domain audio stream in synchrony with the first native playback device via an audio stage of the first VLI device, the audio stage of the first VLI device comprising one or more amplifiers that drive one or more speakers.

[251] (Feature 30) The method of feature 29, wherein the input interface of the first VLI device comprises a microphone array, and wherein receiving the command to play back audio content comprises: receiving, via the microphone array, a voice command to play back the audio content on the first VLI device and the first native playback device; and in response to the voice command, configuring the first VLI device as the VLI source of the VLI group and the first native playback device as the first VLI receiver of the VLI group.

[252] (Feature 31) The method of feature 17, wherein the system further comprises a second native playback device, wherein the first native playback device is configured as a native domain group coordinator of a native domain synchrony group to perform functions comprising: detecting that a VLI group ID of the first native playback device matches a VLI group ID of the second native playback device, wherein the second native playback device is configured as a second VLI receiver of the VLI group; and automatically forming a new native domain synchrony group with the second native playback device based on detecting that the VLI group ID of the first native playback device matches the VLI group ID of the second native playback device, wherein automatically forming the native domain synchrony group comprises updating one or more state variables stored on the first native playback device and the second native playback device to indicate that (a) one of the first and second native playback devices is a native domain group coordinator of the new native domain synchrony group and (b) the other one of the first and second native playback devices is a native domain group member of the new native domain synchrony group.

[253] (Feature 32) The method of feature 17, further comprising the first native playback device receiving, via a network interface of the first native playback device, a VLI domain mute command; translating the VLI domain mute command to one or more corresponding first native domain mute commands, wherein the one or more corresponding first native domain mute commands comprise: (i) enabling a native domain mute state on the first native domain playback device, (ii) storing a current native domain volume level of the first native domain playback device, and (iii) setting the native domain volume level to zero; and in response to receiving the VLI domain mute command, carrying out the one or more corresponding first native domain mute commands to perform the mute operation on the first VLI receiver.

[254] (Feature 33) The method of feature 17, wherein the first VLI device comprises the one or more servers.

[255] (Feature 34) A system configured to perform the method of any of features 17-33.

[256] (Feature 35) A tangible, non-transitory computer-readable medium having stored therein instructions executable by one or more processors to cause a device to perform the method of any of features 17-33.

[257] (Feature 36) A playback device configured to perform the method of any of features 17-33.

[258] (Feature 37) A method to be performed by a first VLI device configured as a VLI source of a VLI group and a first native playback device configured as a first VLI receiver of the VLI group, the method comprising the first VLI device receiving, via an input interface of a first VLI device configured as a VLI sender of the VLI group, user input to perform a mute operation, the mute operation corresponding to a first VLI domain mute command to set volume level to a pre-defined inaudible volume level and sending, via a network interface of the first VLI device, the first VLI domain mute command. The method further comprising the first native playback device receiving, via a network interface of the first native playback device, the first VLI domain mute command; translating the first VLI domain mute command to one or more corresponding first native domain mute commands, wherein the one or more corresponding first native domain mute commands comprise: (i) enabling a native domain mute state on the first native domain playback device, (ii) storing a current native domain volume level of the first native domain playback device, and (iii) setting the native domain volume level to zero; and in response to receiving the VLI domain mute command, carrying out the one or more corresponding first native domain mute commands to perform the mute operation on the first VLI receiver.
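
A minimal sketch of the mute translation in Feature 37; the state dictionary stands in for actual device state and is illustrative only.

    def handle_vli_mute(device):
        """Translate a VLI domain mute (volume set to an inaudible level) into the
        native domain equivalents: enable the mute state, store the current volume,
        and set the volume to zero."""
        device["mute"] = True
        device["stored_volume"] = device["volume"]
        device["volume"] = 0
        return device

    print(handle_vli_mute({"mute": False, "volume": 42, "stored_volume": None}))
    # -> {'mute': True, 'volume': 0, 'stored_volume': 42}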

[259] (Feature 38) The method of feature 37, further comprising the first native playback device, while the VLI group is muted, receiving, via an input interface of the first native playback device, a native domain un-mute command; and in response to receiving the native domain un-mute command, (i) translating the native domain un-mute command to a VLI domain volume adjust command, the VLI domain volume adjust command indicating the stored native domain volume level of the first native domain playback device, (ii) sending the VLI domain volume adjust command to the VLI sender of the VLI group, (iii) disabling the native domain mute state, and (iv) adjusting volume level of the first native playback device to the stored native domain volume level.
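
The un-mute flow of Feature 38 can be sketched as the reverse translation. Here send_to_vli_sender is a hypothetical stand-in for whatever message is actually sent upstream to the VLI sender.

    def handle_native_unmute(device, send_to_vli_sender):
        """Translate a native domain un-mute into a VLI domain volume adjust carrying the
        stored volume, then disable the mute state and restore the volume locally."""
        restored = device["stored_volume"]
        send_to_vli_sender({"command": "vli.volume_adjust", "level": restored})
        device["mute"] = False
        device["volume"] = restored
        return device

    sent = []
    print(handle_native_unmute({"mute": True, "volume": 0, "stored_volume": 42}, sent.append))
    print(sent)   # [{'command': 'vli.volume_adjust', 'level': 42}]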

[260] (Feature 39) The method of feature 38, further comprising the first native playback device, in response to receiving the native domain un-mute command, sending, to one or more second native domain playback devices of the native domain synchrony group, one or more commands to (i) disable the native domain mute state on the one or more second native domain playback devices and (ii) adjust volume level of the one or more second native domain playback devices to the stored native domain volume level.

[261] (Feature 40) The method of feature 38, wherein receiving, via the input interface of the first native playback device, the native domain un-mute command comprises one of: (a) receiving, via the network interface of the first native playback device from a native domain control app, one or more messages representing the native domain un-mute command or (b) receiving, via physical controls on the first native playback device, user input representing the native domain un-mute command.

[262] (Feature 41) The method of feature 37, further comprising the first native playback device, while the VLI group is muted, receiving, via the network interface of the first native playback device from the VLI sender, a VLI domain volume adjust command; and in response to receiving the VLI domain volume adjust command, (i) disabling the native domain mute state, and (ii) adjusting volume level of the first native playback device to a volume level indicated by the VLI domain volume adjust command.

[263] (Feature 42) The method of feature 41, further comprising the first native playback device, in response to receiving the VLI domain volume adjust command, sending, to one or more second native domain playback devices of the native domain synchrony group, one or more commands to (i) disable the native domain mute state on the one or more second native domain playback devices and (ii) adjust volume level of the one or more second native domain playback devices to the volume level indicated by the VLI domain volume adjust command.

[264] (Feature 43) The method of feature 37, further comprising the first native playback device while the VLI group is muted, receiving, via an input interface of the first native playback device, a native domain volume adjust command; and in response to receiving the native domain volume adjust command, (i) translating the native domain volume adjust command to a VLI domain volume adjust command, (ii) sending the VLI domain volume adjust command to the VLI sender of the VLI group, (iii) disabling the native domain mute state, and (iv) adjusting volume level of the first native playback device to a volume level indicated by the native domain volume adjust command.

[265] (Feature 44) The method of feature 43, further comprising the first native playback device in response to receiving the native domain volume adjust command, sending, to one or more second native domain playback devices of the native domain synchrony group, one or more commands to (i) disable the native domain mute state on the one or more second native domain playback devices and (ii) adjust volume level of the one or more second native domain playback devices to the volume level indicated by the native domain volume adjust command.

[266] (Feature 45) The method of feature 43, wherein receiving, via the input interface of the first native playback device, the native domain volume adjust command comprises one of: (a) receiving, via the network interface of the first native playback device from a native domain control app, one or more messages representing the native domain volume adjust command or (b) receiving, via physical controls on the first native playback device, user input representing selection of a volume up or volume down control.

[267] (Feature 46) The method of feature 37, further comprising the first native playback device while the native domain mute state is enabled on the first native playback device, receiving a command to start a VLI session with the first VLI device, wherein a volume level of the first VLI device is other than the pre-defined inaudible volume level; and in response to receiving the command to start the VLI session with the first VLI device, (i) joining the VLI group as the first VLI receiver, (ii) disabling the native domain mute state, and (iii) setting the native domain volume level to the volume level of the first VLI device.

[268] (Feature 47) The method of feature 37, further comprising the first native playback device sending, via the network interface of the first native playback device to one or more native domain control applications on one or more native domain control devices, status information indicating the native domain volume level, wherein the status information causes respective volume sliders of the one or more native domain control applications to indicate a zero volume level.

[269] (Feature 48) A system configured to perform the method of any of features 37-47.

[270] (Feature 49) A tangible, non-transitory computer-readable medium having stored therein instructions executable by one or more processors to cause a device to perform the method of any of features 37-47.

[271] (Feature 50) A playback device configured to perform the method of any of features 37-47.

[272] (Feature 51) A method to be performed by a first VLI device configured as a VLI source of a VLI group and a first native playback device configured as a first VLI receiver of the VLI group, the method comprising the first VLI device receiving, via an input interface of a first VLI device, user input to perform a particular playback operation on the VLI group, the particular playback operation corresponding to a VLI domain playback command to perform one or more first playback functions and sending, via a network interface of the first VLI device, the VLI domain playback command to perform one or more first playback functions. The method further comprising the first native playback device receiving, via a network interface of the first native playback device, the VLI domain playback command to perform one or more first playback functions; translating the VLI domain playback command to a corresponding native domain playback command to perform one or more second playback functions, wherein the one or more first playback functions and the one or more second playback functions are different; and carrying out the one or more second playback functions to perform the particular playback operation on the first VLI receiver.

[273] (Feature 52) The method of feature 51, wherein the particular playback operation is a mute operation, wherein the one or more first playback functions comprise setting volume level of the VLI group to a pre-defined inaudible volume level, and wherein the one or more second playback functions comprise: (i) enabling a native domain mute state on the first native domain playback device, (ii) storing a current native domain volume level of the first native domain playback device, and (iii) setting the native domain volume level to zero.

[274] (Feature 53) The method of feature 51, wherein the particular playback operation is a pause operation, wherein the one or more first playback functions comprise ceasing to stream audio content while playback is paused, and wherein the one or more second playback functions comprise continuing to stream the audio content to a buffer while the playback is paused.

[275] (Feature 54) The method of feature 51, wherein the particular playback operation is a pause operation, wherein the one or more first playback functions exclude disabling an amplifier, and wherein the one or more second playback functions comprise disabling the amplifier.

[276] (Feature 55) The method of feature 51, wherein the particular playback operation is a fast forward operation, wherein the one or more first playback functions comprise jumping forward in an audio track by a first number of seconds, and wherein the one or more second playback functions comprise (i) jumping forward in the audio track by a second number of seconds and (ii) jumping backwards or forwards in the audio track by a third number of seconds, wherein the first number is a sum of the second number and the third number.

[277] (Feature 56) A system configured to perform the method of any of features 51-55.

[278] (Feature 57) A tangible, non-transitory computer-readable medium having stored therein instructions executable by one or more processors to cause a device to perform the method of any of features 51-55.

[279] (Feature 58) A playback device configured to perform the method of any of features 51-55.

[280] The specification is presented largely in terms of illustrative environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it is understood by those skilled in the art that certain embodiments of the present disclosure can be practiced without certain, specific details. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the embodiments. Accordingly, the scope of the present disclosure is defined by the appended claims rather than the foregoing description of embodiments.

[281] When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware.