Title:
HOME THEATRE AUDIO PLAYBACK WITH MULTICHANNEL SATELLITE PLAYBACK DEVICES
Document Type and Number:
WIPO Patent Application WO/2024/073401
Kind Code:
A2
Abstract:
Home theatre audio configurations can include a primary audio playback device (e.g., a soundbar) along with a plurality of discrete satellite playback devices (e.g., left and right surrounds), some or all of which may be capable of multichannel audio playback. Techniques for modifying audio transmission, distribution, and/or playback for such multichannel satellite playback devices are disclosed.

Inventors:
PEACE, Paul (US)
DIZON, Roberto (US)
Application Number:
PCT/US2023/075105
Publication Date:
April 04, 2024
Filing Date:
September 26, 2023
Assignee:
SONOS INC (US)
International Classes:
H04R3/12; G10L19/008; G10L19/02; H04S3/00
Foreign References:
US8234395B2 (2012-07-31)
US9706323B2 (2017-07-11)
US9763018B1 (2017-09-12)
Attorney, Agent or Firm:
LINCICUM, Matt et al. (US)
Claims:
CLAIMS

1. A method comprising: receiving, at a primary playback device, source audio data; obtaining n channels of satellite audio data from the source audio data to be played back via a satellite playback device; downmixing the n source channels of satellite audio data to m downmixed channels of satellite audio data according to a first downmixing scheme, wherein m<n, and wherein the first downmixing scheme is based at least in part on one or more first input parameters; and wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback.

2. The method of claim 1, further comprising: after wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback, receiving, at the primary playback device, one or more second input parameters different from the one or more first input parameters; receiving, at the primary playback device, second source audio data; downmixing the n channels of satellite audio data to m channels of satellite audio data according to a second downmixing scheme different from the first, wherein the second downmixing scheme is based at least in part on the one or more second input parameters; and wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback.

3. The method of claim 1 or 2, further comprising: after wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback, receiving, at the primary playback device, second source audio data; determining that network conditions are sufficient to transmit n channels of satellite audio data of the second source audio data to the satellite playback device; and wirelessly transmitting the n channels of satellite audio data of the second source audio data to the satellite playback device.

4. The method of any preceding claim, wherein the one or more first input parameters comprise one or more of: an audio content parameter, a device location parameter, a listener location parameter, a playback responsibilities parameter, or an environmental acoustics parameter.

5. The method of any preceding claim, further comprising: at the satellite playback device, upmixing the m channels of satellite audio data to generate n channels of satellite audio data; and playing back, via the satellite playback device, the upmixed n channels of audio data.

6. The method of claim 5, wherein playing back the upmixed n channels of audio data comprises arraying the n channels to be output via a plurality of transducers of the satellite playback device such that each of the plurality of transducers outputs at least a portion of each of the n channels.

7. The method of any preceding claim, further comprising playing back audio via the primary playback device in synchrony with playback of the n upmixed channels of audio data via the satellite playback device, wherein the n upmixed channels are upmixed at the satellite playback device.

8. The method of any preceding claim, wherein downmixing the n channels of satellite audio data to m channels of satellite audio data according to the first downmixing scheme comprises combining audio content from each of the n channels below a threshold frequency into a single one of the m downmixed channels.

9. The method of any preceding claim, wherein downmixing the n channels of satellite audio data to m channels of satellite audio data according to the first downmixing scheme comprises: mapping a first channel of the n source channels to a first channel of the m downmixed channels; mapping a first portion of a second channel of the n source channels to a first portion of a second channel of the m downmixed channels; and mapping a second portion of a third channel of the n source channels to a second portion of the second channel of the m downmixed channels.

10. The method of claim 9, wherein downmixing the n channels of satellite audio data to m channels of satellite audio data according to the first downmixing scheme comprises mapping a second portion of the second channel of the n source channels to the second portion of the second channel of the m downmixed channels such that the second portion of the second channel of the m downmixed channels comprises a combination of: the second portion of the second channel of the n source channels; and the second portion of the third channel of the n source channels.

11. The method of any preceding claim, further comprising downmixing the n source channels when wireless network conditions are insufficient to transmit the n source channels of satellite audio data to the satellite playback device.

12. One or more tangible, non-transitory computer-readable media storing instructions that, when executed by one or more processors of a media playback system comprising a primary playback device and a satellite playback device, cause the media playback system to perform the method of any preceding claim.

13. A media playback system comprising: a primary playback device; a satellite playback device; a network interface; and one or more processors configured to cause the media playback system to perform the method of one of claims 1 to 12.

14. A method for a playback device comprising a plurality of audio transducers configured to output audio along a plurality of sound axes including at least a forward-firing axis and a side-firing axis, the method comprising: receiving, at the playback device, multichannel audio content including a first audio channel; playing back at least a first proportion of the first audio channel via the forward-firing axis; obtaining an indication of orientation of the playback device relative to the environment; and based at least in part on the orientation indication, modifying audio playback such that at least a second proportion of the first channel is played back via the side-firing axis rather than via the forward-firing axis.

15. The method of claim 14, wherein modifying the audio playback based at least in part on the orientation indication comprises one of: outputting a third proportion of the first channel via the forward-firing axis, the third proportion being less than the first proportion; and outputting none of the first audio channel via the forward-firing axis.

16. The method of claim 14 or 15, further comprising, while playing back at least the first proportion of the first audio channel via the forward-firing axis, playing back none of the first audio channel via the side-firing axis.

17. The method of one of claims 14 to 16, further comprising: while playing back at least the first proportion of the first audio channel via the forward-firing axis, playing back a fourth proportion of the first audio channel via the side-firing axis, wherein the second proportion is greater than the fourth proportion; and while playing back at least the second proportion of the first audio channel via the side-firing axis, playing back a fifth proportion of the first audio channel via the forward-firing axis, wherein the first proportion is greater than the fifth proportion.

18. The method of one of claims 14 to 17, wherein: the playback device is a front left satellite playback device; the first channel is a front left channel; the side-firing axis is a right side-firing axis; and the indication of orientation indicates that the forward-firing axis is oriented forward of an intended listening location within the environment, optionally wherein the right side-firing axis is oriented nearer to the intended listening location than the forward-firing axis.

19. The method of one of claims 14 to 18, wherein: the playback device is a front left satellite playback device; the first channel is a front left channel; the side-firing axis is a left side-firing axis; and the indication of orientation indicates that the forward-firing axis is oriented rearward of an intended listening location within the environment, optionally wherein the left side-firing axis is oriented nearer to the intended listening location than the forward-firing axis.

20. The method of one of claims 14 to 19, wherein obtaining an indication of orientation comprises one of: determining an angular orientation of the playback device; and receiving, via a network interface, an angular orientation of the playback device.

21. One or more tangible, non-transitory computer-readable media storing instructions thereon that, when executed by one or more processors of a playback device, cause the playback device to perform the method of one of claims 14 to 20.

22. A playback device comprising: a plurality of audio transducers configured to output audio along a plurality of sound axes including at least a forward-firing axis and a side-firing axis; one or more processors; and data storage having instructions thereon that, when executed by the one or more processors, cause the playback device to perform the method of one of claims 14 to 20.

23. A method comprising: receiving, at a media playback system comprising a plurality of playback devices including a rear satellite playback device, multi-channel audio content comprising a side surround channel and a rear surround channel; playing back, via the rear satellite playback device: the side surround channel at a first magnitude; and the rear surround channel at a second magnitude; based on receiving a trigger indication, playing back, via the rear satellite playback device: the side surround channel at a third magnitude lower than the first magnitude; and the rear surround channel at a fourth magnitude greater than the second magnitude.

24. The method of claim 23, wherein the rear satellite playback device comprises a plurality of audio transducers.

25. The method of one of claims 23 to 24, further comprising modifying relative playback magnitudes of the side surround channel and the rear surround channel to affect a perceived width of the audio playback.

26. The method of one of claims 23 to 25, further comprising, based on receiving a second trigger indication, playing back, via the rear satellite playback device: the side surround channel at a fifth magnitude lower than the third magnitude; and the rear surround channel at a sixth magnitude greater than the fourth magnitude.

27. The method of one of claims 23 to 26, wherein the side surround channel is a right side surround channel, the rear surround channel is a right rear surround channel, and the rear satellite playback device is a rear right satellite playback device, the method further comprising: while playing back, via the rear right satellite playback device, the right side surround channel at the first magnitude, playing back, via a left rear satellite playback device, a left side surround channel at the first magnitude; while playing back, via the rear right satellite playback device, the rear right surround channel at the second magnitude, playing back, via the left rear satellite playback device, a left rear surround channel at the second magnitude; based on the trigger indication: playing back, via the left rear satellite playback device, the left side surround channel at the third magnitude lower than the first magnitude; and playing back, via the left rear satellite playback device, the left rear surround channel at the fourth magnitude greater than the second magnitude.

28. A method comprising: receiving, at a media playback system comprising a plurality of playback devices including a front satellite playback device, multi-channel audio content comprising a front surround channel; playing back the front surround channel via only the front satellite playback device; and based on receiving a trigger indication, playing back the front surround channel via both the front satellite playback device and a center front playback device in synchrony.

29. The method of any preceding claim, wherein the trigger indication comprises one of: a user input; or detection of an environmental acoustics parameter, a device position parameter, or a listener location parameter.

30. The method of claim 28 or 29, further comprising: based on receiving a second trigger indication, decreasing a playback magnitude of the front surround channel via the front satellite playback device and increasing a playback magnitude of the front surround channel via the center front playback device.

31. The method of one of claims 28 to 30, further comprising modifying relative playback magnitudes of the front surround channel via the front satellite playback device and via the center front playback device to affect a perceived width of the audio playback.

32. The method of one of claims 28 to 31, wherein the front surround channel is a front right surround channel, and the front satellite playback device is a front right satellite playback device, the method further comprising: while playing back the front right surround channel via only the front right satellite playback device, synchronously playing back a front left surround channel via only a front left satellite playback device; and while playing back the front right surround channel via both the front right satellite playback device and the center front playback device in synchrony, synchronously playing back the front left surround channel via both the front left satellite playback device and the center front playback device.

33. The method of one of claims 28 to 32, wherein the center front playback device comprises a soundbar having a plurality of transducers.

34. A non-transitory computer-readable medium storing instructions thereon that, when executed by one or more processors of a media playback system, cause the media playback system to perform the method of one of claims 28 to 33.

35. A media playback system comprising at least one playback device configured to perform the method of one of claims 28 to 33.

36. A method of playing back audio content by an audio playback device comprising: a forward-firing transducer configured to direct sound along a first sound axis, an up-firing transducer configured to direct sound along a second sound axis that is vertically angled with respect to the first sound axis, and a side-firing transducer or array configured to direct sound along a third axis that is horizontally angled with respect to the first sound axis, the method comprising: receiving, at the playback device, audio input including a vertical content signal; playing back audio based on the vertical content signal via at least the up-firing transducer and the side-firing transducer or array; and playing back a null signal via the forward-firing transducer, wherein the null signal cancels out a portion of the audio played back based on the vertical content signal along the first sound axis.

37. The method of claim 36, performed while the playback device is in a first standalone playback mode, the method further comprising transitioning to a second playback mode in which the playback device is bonded with a second playback device for synchronous playback, and while in the second mode: playing back audio based on the vertical content signal via at least the up-firing transducer; and playing back the null signal via at least the side-firing transducer or array, wherein the null signal cancels out the portion of the vertical content signal along the first sound axis.

38. The method of claim 37, wherein playing back audio based on the vertical content signal via at least the up-firing transducer comprises playing back audio based on the vertical content signal via the up-firing transducer and the forward-firing transducer.

39. The method of one of claims 36 to 38, wherein the null signal is restricted to a frequency band that includes 1 kHz, and optionally at least one of: the frequency band has a bandwidth that is less than about 5.0 kHz; and the frequency band has a bandwidth that is greater than about 0.5 kHz.

40. The method of one of claims 36 to 38, wherein the null signal is restricted to a frequency band that excludes frequencies below a lower threshold frequency, optionally wherein at least one of: the lower threshold frequency is greater than about 200 Hz; and the upper threshold frequency is less than about 1.0 kHz.

41. The method of one of claims 36 to 40, wherein the null signal comprises the vertical content signal being phase-shifted such that the null signal destructively interferes with the portion of the audio played back based on the vertical content signal along the first sound axis.

42. The method of one of claims 36 to 41, wherein playing back the null signal via the forward-firing transducer is delayed with respect to playing back audio based on the vertical content signal via at least the up-firing transducer and the side-firing transducer or array.

43. One or more tangible, non-transitory computer-readable media storing instructions that, when executed by one or more processors of a playback device, cause the playback device to perform the method of one of claims 36 to 42.

44. A playback device comprising: a forward-firing transducer configured to direct sound along a first sound axis; an up-firing transducer configured to direct sound along a second sound axis that is vertically angled with respect to the first sound axis; a side-firing transducer or array configured to direct sound along a third axis that is horizontally angled with respect to the first sound axis; and one or more processors configured to cause the playback device to perform the method of one of claims 36 to 42.

45. A method comprising: receiving a request to form a bonded zone in which a plurality of playback devices including at least first and second playback devices are configured to synchronously play back audio content; determining, for each of the plurality of playback devices: a first playback parameter value for a first playback parameter; and a second playback parameter value for a second playback parameter; and at least one of: based on the first playback parameter value for the first playback device, adjusting the first playback parameter value for the second playback device; based on the second playback parameter value for the second playback device, adjusting the second playback parameter value for the first playback device; and synchronously playing back audio content via the plurality of playback devices in the bonded zone.

46. The method of claim 45, wherein the first playback parameter comprises a characteristic playback magnitude, and wherein the second playback parameter comprises a characteristic playback phase.

47. The method of claim 45 or 46, wherein: adjusting the first playback parameter value for the second playback device comprises adjusting the first playback parameter value for the second playback device to be closer to the first playback parameter value for the first playback device; and adjusting the second playback parameter value for the first playback device comprises adjusting the second playback parameter value for the first playback device to be closer to the second playback parameter value for the second playback device.

48. The method of one of claims 45 to 47, further comprising: receiving a request to add a third playback device to the bonded zone; determining, for the first playback parameter, a first playback parameter value for the third playback device; and based on the first playback parameter value for the third playback device, adjusting the first playback parameter value for each of the first playback device and the second playback device.

49. The method of one of claims 45 to 48, further comprising: receiving a request to assign different playback responsibilities within the bonded zone to the first playback device; and based on the different playback responsibilities, adjusting the second playback parameter value for the first playback device.

50. The method of one of claims 45 to 49, further comprising: after determining the first playback parameter values, selecting the first playback device as a first reference device for the first playback parameter; and after determining the second playback parameter values, selecting the second playback device as a second reference device for the second playback parameter.

51. The method of one of claims 45 to 50, further comprising: determining that the first playback device has moved its location; based on determining that the first playback device has moved, selecting a different playback device as the first reference device for the first playback parameter; and based on the first playback parameter value for the first reference device, adjusting the first playback parameter value for at least the first playback device.

52. The method of one of claims 45 to 51, wherein determining, for the first playback parameter, a first playback parameter value for each of the plurality of playback devices comprises capturing audio output via each of the plurality of playback devices via one or more microphones, and analyzing the captured audio output to determine the first playback parameter values.

53. One or more tangible, non-transitory computer-readable media storing instructions that, when executed by one or more processors of a media playback system, cause the media playback system to perform the method of one of claims 45 to 52.

54. A media playback system comprising: a plurality of playback devices including at least a first playback device and a second playback device; one or more processors; and data storage having instructions stored thereon that are executable by the one or more processors to cause the media playback system to perform the method of one of claims 45 to 53.

Description:
HOME THEATRE AUDIO PLAYBACK WITH MULTICHANNEL SATELLITE PLAYBACK DEVICES

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of priority to U.S. Patent Application No. 63/377,895, filed September 30, 2022, to U.S. Patent Application No. 63/377,897, filed September 30, 2022, to U.S. Patent Application No. 63/377,901, filed September 30, 2022, to U.S. Patent Application No. 63/377,905, filed September 30, 2022, and to U.S. Patent Application No. 63/483,469, filed February 6, 2023, each of which is incorporated herein by reference in its entirety.

FIELD OF THE DISCLOSURE

[0002] The present disclosure is related to consumer goods and, more particularly, to methods, systems, products, features, services, and other elements directed to media playback or some aspect thereof.

BACKGROUND

[0003] Options for accessing and listening to digital audio in an out-loud setting were limited until 2002, when SONOS, Inc. began development of a new type of playback system. Sonos then filed one of its first patent applications in 2003, entitled "Method for Synchronizing Audio Playback between Multiple Networked Devices," and began offering its first media playback systems for sale in 2005. The Sonos Wireless Home Sound System enables people to experience music from many sources via one or more networked playback devices. Through a software control application installed on a controller (e.g., smartphone, tablet, computer, voice input device), one can play what she wants in any room having a networked playback device. Media content (e.g., songs, podcasts, video sound) can be streamed to playback devices such that each room with a playback device can play back corresponding different media content. In addition, rooms can be grouped together for synchronous playback of the same media content, and/or the same media content can be heard in all rooms synchronously.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Features, examples, and advantages of the presently disclosed technology may be better understood with regard to the following description, appended claims, and accompanying drawings, as listed below. A person skilled in the relevant art will understand that the features shown in the drawings are for purposes of illustration, and variations, including different and/or additional features and arrangements thereof, are possible.

[0005] Figure 1A is a partial cutaway view of an environment having a media playback system configured in accordance with examples of the present technology.

[0006] Figure 1B is a schematic diagram of the media playback system of Figure 1A and one or more networks.

[0007] Figure 1C is a block diagram of a playback device.

[0008] Figure 1D is a block diagram of a playback device.

[0009] Figure 1E is a block diagram of a network microphone device.

[0010] Figure 1F is a block diagram of a network microphone device.

[0011] Figure 1G is a block diagram of a playback device.

[0012] Figure 1H is a partially schematic diagram of a control device.

[0013] Figure 2A is a front isometric view of a playback device configured in accordance with examples of the present technology.

[0014] Figure 2B is a front isometric view of the playback device of Figure 2A without a grille.

[0015] Figure 2C is an exploded view of the playback device of Figure 2A.

[0016] Figure 3A is a perspective view of a playback device configured in accordance with examples of the present technology.

[0017] Figure 3B is a transparent view of the playback device of Figure 3A illustrating individual transducers.

[0018] Figures 4A, 4B, 4C, and 4D are diagrams showing an example playback device configuration in accordance with examples of the present technology.

[0019] Figure 5 is a schematic block diagram of a system for distributing audio data to multichannel satellite playback devices in accordance with examples of the present technology.

[0020] Figure 6 is a flow diagram of a method for distributing audio data to multichannel satellite playback devices in accordance with examples of the present technology.

[0021] Figure 7 is a schematic diagram of a system for distributing audio data to multichannel satellite playback devices in accordance with examples of the present technology.

[0022] Figure 8 illustrates a distribution of audio channels among data channels for transmitting audio data to multichannel satellite playback devices in accordance with examples of the present technology.

[0023] Figure 9 illustrates a home theatre environment including a primary playback device and a plurality of satellite playback devices in accordance with examples of the present technology.

[0024] Figures 10 and 11 are flow diagrams of methods for determining and adjusting playback device parameters in accordance with examples of the present technology.

[0025] Figures 12A and 12B illustrate a home theatre environment including a primary playback device and a plurality of front satellite playback devices in accordance with examples of the present technology.

[0026] Figure 13 illustrates a control device user interface for modifying a width parameter of audio playback in accordance with examples of the present technology.

[0027] Figures 14A and 14B illustrate a home theatre environment including a primary playback device and a plurality of rear satellite playback devices in accordance with examples of the present technology.

[0028] Figure 15 is a flow diagram of a method for modifying a width parameter of audio playback in accordance with examples of the present technology.

[0029] Figures 16A-16C illustrate a home theatre environment including a primary playback device and a plurality of satellite playback devices in accordance with examples of the present technology.

[0030] Figure 17 is a flow diagram of a method for modifying playback parameters to compensate for satellite playback device placement in accordance with examples of the present technology.

[0031] Figure 18 is a schematic illustration of audio playback in accordance with examples of the disclosed technology.

[0032] Figures 19A and 19B are graphs illustrating audio output contributions from various transducers in accordance with examples of the disclosed technology.

[0033] Figures 20A and 20B are graphs illustrating directivity of audio output as measured along different sound axes in accordance with examples of the disclosed technology.

[0034] Figures 21 and 22 are flow diagrams of methods for playing back audio in accordance with examples of the disclosed technology.

[0001] The drawings are for the purpose of illustrating examples, but those of ordinary skill in the art will understand that the technology disclosed herein is not limited to the arrangements and/or instrumentality shown in the drawings.

DETAILED DESCRIPTION

I. Overview

[0002] Home theatre audio configurations can involve an array of playback devices distributed about the listening environment. In some instances, a primary playback device (e.g., a soundbar) can be configured to be placed in a front center position of the listening environment, and one or more satellite playback devices can be placed in various positions about the listening environment. Depending on the type of audio content, the number and type of playback devices, and/or user preferences, satellite playback devices may be placed in front right, front left, rear left, rear right, right side, left side, or other suitable positions relative to the intended listening position.

[0003] Typical wireless home theatre approaches assume that individual satellite playback devices output a single audio channel (e.g., a left rear satellite playback device outputs only a left rear audio channel; a right rear satellite playback device outputs only a right rear audio channel). While such single-channel satellite playback devices provide significant benefits over systems that do not utilize satellite playback devices at all, multichannel satellite playback devices can provide additional benefits for the listener. As described in more detail below, using satellite playback devices capable of outputting multiple audio channels (for example, outputting different audio channels along different sound axes) can provide a more immersive listening experience for the user. Moreover, such multichannel satellite playback devices are better able to capitalize on spatial audio formats (e.g., Dolby Atmos, DTS:X) that allow for a greater number of channels than conventional audio formats.

[0004] The use of such multichannel satellite playback devices presents certain challenges, however. For instance, distributing multiple channels of audio content to satellite devices for playback can be infeasible over a home wireless network due to bandwidth constraints, network traffic congestion, etc. Examples of the present technology can address these and other problems by intelligently downmixing incoming audio data into a smaller number of channels for transmission to multichannel satellite playback devices, which can then play back the received audio data for synchronous playback with other playback devices within the environment. In some implementations, the multichannel satellite playback devices can upmix the received audio before playback, while in certain implementations the multichannel satellite playback devices can play back the downmixed audio using arraying techniques that facilitate reproducing, to the extent possible, the original number of channels. The parameters of the downmixing and/or upmixing can be based on, for instance, similarities among two or more audio channels, typical audio channel content, playback device characteristics, playback device placement, room acoustics, listener location, media content, network conditions, or any other suitable conditions.

[0005] Another challenge can arise when using multichannel satellite playback devices in conjunction with a soundbar or other similar device configured to output multiple channels. A soundbar (or other suitable primary playback device) typically handles playback responsibilities for at least the front left, center, and front right channels (and optionally left side surround and right side surround in some instances). When front left, center, and front right channels are all output by a single playback device such as a soundbar, playback parameters such as phase and magnitude of the audio output are inherently seamless across all channels. However, when front left and front right channels are instead or additionally output via discrete front satellite playback devices, there is a risk of mismatch of playback parameters between the devices that can deleteriously affect the user's listening experience. Examples of the present technology can address these and other problems by performing a calibration process among devices within the home theatre zone to determine certain playback parameters (e.g., phase response, magnitude response) of individual playback devices within the zone. Based on these individual parameters, the parameters of one or more of the devices can be adjusted to match those of the other device(s). In some instances, a particular playback device can be selected as the reference device for a given parameter, and the other playback devices within the zone can have their playback modified (e.g., by adjusting a phase response, a magnitude response, etc.) to match that of the reference device. As a result, a more consistent output among the various playback devices can be achieved.
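As a rough illustration of the downmixing described in paragraph [0004] (and recited in claims 1 and 8), the following Python sketch folds n channels of satellite audio into m downmixed channels, consolidating content below a threshold frequency into a single channel. The crossover frequency, filter order, and round-robin channel mapping are illustrative assumptions; the application does not prescribe any particular implementation.

import numpy as np
from scipy.signal import butter, sosfilt

def downmix_for_satellite(channels: np.ndarray, m: int,
                          crossover_hz: float = 200.0,
                          sample_rate: int = 48000) -> np.ndarray:
    """Downmix n satellite channels (rows) to m channels, m < n.

    Low-frequency content from every source channel is summed into
    downmixed channel 0 (bass is largely non-directional), and the
    high-passed residue is folded into the remaining m - 1 channels.
    """
    n = channels.shape[0]
    assert 2 <= m < n, "sketch assumes at least two downmix channels"
    lp = butter(4, crossover_hz, btype="low", fs=sample_rate, output="sos")
    hp = butter(4, crossover_hz, btype="high", fs=sample_rate, output="sos")
    out = np.zeros((m, channels.shape[1]))
    out[0] = sosfilt(lp, channels, axis=-1).sum(axis=0)
    high = sosfilt(hp, channels, axis=-1)
    for src in range(n):  # simple round-robin mapping for illustration
        out[1 + src % (m - 1)] += high[src]
    return out

Under this sketch, six channels destined for a satellite could be reduced to two channels for wireless transmission (downmix_for_satellite(block, m=2)), with the satellite then upmixing or arraying the received audio as described above.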

[0006] In some instances, using multichannel satellite playback devices can achieve a greater perceived width of audio playback, which can increase the immersiveness of the listening experience. Because multichannel satellite playback devices are capable of outputting audio along a plurality of sound axes, the perceived width of audio playback can be modified by selectively distributing playback responsibilities between the multichannel satellite playback devices and a primary playback device (e.g., a soundbar). Additionally or alternatively, the perceived width of audio playback can be modified by selectively distributing playback responsibilities between the various sound axes of the multichannel satellite playback devices. In some examples, the width of audio playback can be directly controlled by a user, or can be dynamically adjusted in response to certain detected parameters.
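One simple way to realize such a width control, sketched below in Python, is to treat width as a single parameter that crossfades a channel's energy between a forward-firing axis and a side-firing axis (or, analogously, between a satellite and the soundbar). The equal-power law and the [0, 1] parameter range are assumptions for illustration; the application does not specify a particular mapping.

import math

def axis_gains(width: float) -> tuple:
    """Return (forward_gain, side_gain) for a width parameter in [0, 1].

    width = 0.0 keeps all output on the forward-firing axis; width = 1.0
    moves all of it to the side-firing axis. The equal-power crossfade
    keeps total acoustic energy roughly constant as width changes.
    """
    theta = max(0.0, min(1.0, width)) * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

A user dragging a width slider from 0.0 to 0.5 would, under this sketch, move the gains from (1.0, 0.0) to roughly (0.71, 0.71).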

[0007] In some cases, a user's placement of multichannel satellite playback devices around her environment may differ from the intended placement, either in terms of device location or device orientation. As a result, the audio output by the multichannel satellite playback devices can have unintended properties, such as a side surround channel being directed too far forward or too far rearward of an intended listening location. Examples of the present technology can address these and other problems by modifying playback parameters of multichannel satellite playback devices to compensate for their placement within the environment. As a result, even when a user places multichannel audio satellite playback devices in undesirable locations or orientations, the system can adapt playback to provide an improved listening experience.
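A minimal sketch of this kind of placement compensation, in the spirit of claims 14 to 20, follows. It assumes the system can estimate the device's yaw error (0 degrees meaning the forward-firing axis points at the listening location); the linear ramp and 90-degree clamp are illustrative choices, not values from the application.

def compensate_orientation(yaw_error_deg: float,
                           base_forward: float = 1.0) -> dict:
    """Map yaw error to per-axis playback proportions for one channel."""
    error = min(abs(yaw_error_deg), 90.0)      # clamp to a quarter turn
    shifted = base_forward * (error / 90.0)    # proportion moved sideways
    return {
        "forward_firing": base_forward - shifted,
        "side_firing": shifted,
    }

A device rotated 45 degrees away from the listener would then output half of the channel on each axis, steering the perceived image back toward the intended location.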

[0008] Additional aspects of the present technology relate to improving directivity of output for audio transducers, such as those including up-firing transducers. Conventional surround sound audio rendering formats include a plurality of channels configured to represent different lateral positions with respect to a listener (e.g., front, right, left). More recently, three-dimensional (3D) or other immersive audio rendering formats have been developed that include one or more vertical or height channels in addition to any lateral channels. Examples of such 3D audio formats include DOLBY ATMOS, MPEG-H, and DTS:X formats. Such 3D audio rendering formats may include one or more vertical channels configured to represent sounds originating from above a listener. In some instances, such vertical channels can be played back via transducers positioned over a user's head (e.g., ceiling mounted speakers). In the case of soundbars or other multi-transducer devices, an upwardly oriented transducer (herein referred to as an "up-firing transducer") can output audio along a sound axis that is at least partially vertically oriented with respect to a forward horizontal plane of a playback device. This audio output can reflect off an acoustically reflective surface (e.g., a ceiling) to be directed toward a listener at a target location. Because the listener perceives the audio as originating from the point of reflection on the ceiling, the psychoacoustic perception is that the sound originates "above" the listener. In the case of vertical side audio content (e.g., a left front height channel), the content may be output by an array of transducers including at least one up-firing transducer and at least one side-firing transducer and/or forward-firing transducer, depending on the orientation of the playback device. This approach can be particularly useful when vertical content is played back via discrete satellite devices, such as front left, front right, rear left, or rear right satellite playback devices that are equipped with one or more up-firing transducers.

[0009] Although up-firing transducers or arrays can usefully enable a listener to localize a sound overhead, the effect may be reduced when a substantial portion of the audio content output by such up-firing transducers propagates in the forward direction, sometimes referred to as forward "leakage." This effect can be particularly pronounced over certain frequency ranges. Many full-range transducers output midrange and lower frequency sound (e.g., sound at approximately 1.5 kHz or less) substantially omnidirectionally, particularly in the case of transducers having relatively small diameter (e.g., 4" or smaller). This may be true even if the transducer outputs high frequency sound (e.g., above 1.5 kHz) in a directional manner. As a result, a vertically oriented up-firing transducer may output audio in a manner such that, while a high frequency portion of the output propagates along the vertically oriented axis and reflects off a ceiling to a listener, a mid- or low-frequency portion of the output propagates with less directivity, including propagating along a horizontal axis directly towards the listener without first reflecting off the ceiling. Since at least some of the mid- or low-frequency portion "leaks" along the horizontal direction, the listener's perception of audio output from the up-firing transducer is a combination of the (full-range) output reflected off the ceiling and the mid- and low-frequency output that propagates horizontally from the up-firing transducer. Moreover, the leaked portion will typically reach the listener first since its path length is almost always shorter than that of the reflected output. As a result, the listener may localize the source of the audio output as being the up-firing transducer rather than the reflection point on the ceiling, thereby degrading the immersive audio experience.

[0010] Examples of the disclosed technology may address these and other shortcomings by outputting a "null signal" that is configured to at least partially cancel out the undesirable leakage of vertical content along the forward (or other lateral) direction. For instance, simultaneously with outputting vertical channel content via an up-firing transducer (or via an array including an up-firing transducer and one or more side-firing transducers), a forward-firing transducer can output a null signal that destructively interferes with the vertical content along the forward sound axis, thereby reducing the amount of height channel content that reaches a listener along the forward sound axis. The null signal can be generated by phase-shifting the vertical content signal and synchronizing the output such that the null signal destructively interferes with the vertical content output along the forward sound axis. In some implementations, the null signal can be output by an array of transducers, which can include one or more forward-firing transducers and/or one or more side-firing transducers (e.g., a transducer oriented and configured to output audio primarily along a sound axis that is laterally angled with respect to the forward axis of the playback device). In various implementations, the null signal can be restricted to a particular frequency range, for instance between about 500 Hz to about 2.5 kHz, or any suitable frequency range for a given application and configuration of playback devices. This approach may be suitable as higher frequency audio output (e.g., frequencies greater than about 1.5 kHz, 2.0 kHz, 2.5 kHz, or higher) via typical transducers tends to be more directional and thus is less susceptible to forward leakage.

[0011] In some examples, as a result of the null signal, the sound pressure level (SPL) of the vertical content that propagates along the forward axis is at least 5 dB less (e.g., 10 dB less) than the SPL of the vertical content that propagates along the up-firing axis (e.g., an axis oriented upwardly at an oblique angle such as +70 degrees from the forward axis). To ensure that the null signal played back via the forward-firing transducer is substantially aligned with the vertical content played back via the up-firing transducer (and optionally via one or more side-firing transducers), the null signal can be time delayed with respect to the output of the vertical content via the up-firing transducer or array. This delay can be configured to compensate for the different path length that the null signal takes to reach the listener (e.g., propagating from the forward-firing transducer) as compared to the vertical content (e.g., propagating from the up-firing transducer or array).
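The following Python sketch illustrates one way the null signal of paragraphs [0010] and [0011] could be derived: band-limit the vertical content, invert its polarity, and delay it by the reflected-versus-direct path difference so that cancellation occurs at the listener. The band edges, path lengths, and gain are assumed example values, not parameters taken from the application.

import numpy as np
from scipy.signal import butter, sosfilt

SPEED_OF_SOUND = 343.0  # m/s, approximate at room temperature

def make_null_signal(vertical: np.ndarray, sample_rate: int = 48000,
                     band_hz: tuple = (500.0, 2500.0),
                     reflected_path_m: float = 4.2,
                     direct_path_m: float = 3.0,
                     gain: float = 0.8) -> np.ndarray:
    """Build a forward-firing null signal for a vertical content channel."""
    sos = butter(4, band_hz, btype="bandpass", fs=sample_rate, output="sos")
    band = sosfilt(sos, vertical)
    # Delay so the null arrives together with the ceiling reflection
    # rather than ahead of it (the direct path is shorter).
    delay = int(round((reflected_path_m - direct_path_m)
                      / SPEED_OF_SOUND * sample_rate))
    delayed = np.concatenate([np.zeros(delay), band])[:len(band)]
    return -gain * delayed  # polarity inversion -> destructive interference

With the example path lengths above, the delay works out to roughly (4.2 - 3.0) / 343 ≈ 3.5 ms, or about 168 samples at 48 kHz.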

[0012] By increasing the directivity of vertical content output (e.g., by canceling out a portion of the forward leakage of such vertical content), the listener's perceived localization of the vertical content can be markedly improved, for instance with less localization on the playback device itself. The net result is enhanced immersiveness, with the user more reliably localizing vertical audio content at an overhead position, notwithstanding the tendency for some vertical content to "leak" along the horizontal direction from an up-firing transducer or array.

[0013] While some examples described herein may refer to functions performed by given actors such as "users," "listeners," and/or other entities, it should be understood that this is for purposes of explanation only. The claims should not be interpreted to require action by any such example actor unless explicitly required by the language of the claims themselves.

[0014] In the Figures, identical reference numbers identify generally similar, and/or identical, elements. To facilitate the discussion of any particular element, the most significant digit or digits of a reference number refers to the Figure in which that element is first introduced. For example, element 110a is first introduced and discussed with reference to Figure 1A. Many of the details, dimensions, angles and other features shown in the Figures are merely illustrative of particular examples of the disclosed technology. Accordingly, other examples can have other details, dimensions, angles and features without departing from the spirit or scope of the disclosure. In addition, those of ordinary skill in the art will appreciate that further examples of the various disclosed technologies can be practiced without several of the details described below.

II. Suitable Operating Environment

[0015] Figure 1A is a partial cutaway view of a media playback system 100 distributed in an environment 101 (e.g., a house). The media playback system 100 comprises one or more playback devices 110 (identified individually as playback devices 110a-n), one or more network microphone devices ("NMDs") 120 (identified individually as NMDs 120a-c), and one or more control devices 130 (identified individually as control devices 130a and 130b).

[0016] As used herein the term "playback device" can generally refer to a network device configured to receive, process, and output data of a media playback system. For example, a playback device can be a network device that receives and processes audio content. In some examples, a playback device includes one or more transducers or speakers powered by one or more amplifiers. In other examples, however, a playback device includes one of (or neither of) the speaker and the amplifier. For instance, a playback device can comprise one or more amplifiers configured to drive one or more speakers external to the playback device via a corresponding wire or cable.

[0017] Moreover, as used herein the term NMD (i.e., a “network microphone device”) can generally refer to a network device that is configured for audio detection. In some examples, an NMD is a stand-alone device configured primarily for audio detection. In other examples, an NMD is incorporated into a playback device (or vice versa).

[0018] The term “control device” can generally refer to a network device configured to perform functions relevant to facilitating user access, control, and/or configuration of the media playback system 100.

[0019] Each of the playback devices 110 is configured to receive audio signals or data from one or more media sources (e.g., one or more remote servers, one or more local devices) and play back the received audio signals or data as sound. The one or more NMDs 120 are configured to receive spoken word commands, and the one or more control devices 130 are configured to receive user input. In response to the received spoken word commands and/or user input, the media playback system 100 can play back audio via one or more of the playback devices 110. In certain examples, the playback devices 110 are configured to commence playback of media content in response to a trigger. For instance, one or more of the playback devices 110 can be configured to play back a morning playlist upon detection of an associated trigger condition (e.g., presence of a user in a kitchen, detection of a coffee machine operation). In some examples, for instance, the media playback system 100 is configured to play back audio from a first playback device (e.g., the playback device 110a) in synchrony with a second playback device (e.g., the playback device 110b). Interactions between the playback devices 110, NMDs 120, and/or control devices 130 of the media playback system 100 configured in accordance with the various examples of the disclosure are described in greater detail below.

[0020] In the illustrated example of Figure 1A, the environment 101 comprises a household having several rooms, spaces, and/or playback zones, including (clockwise from upper left) a master bathroom 101a, a master bedroom 101b, a second bedroom 101c, a family room or den 101d, an office 101e, a living room 101f, a dining room 101g, a kitchen 101h, and an outdoor patio 101i. While certain examples are described below in the context of a home environment, the technologies described herein may be implemented in other types of environments. In some examples, for instance, the media playback system 100 can be implemented in one or more commercial settings (e.g., a restaurant, mall, airport, hotel, a retail or other store), one or more vehicles (e.g., a sports utility vehicle, bus, car, a ship, a boat, an airplane), multiple environments (e.g., a combination of home and vehicle environments), and/or another suitable environment where multi-zone audio may be desirable.

[0021] The media playback system 100 can comprise one or more playback zones, some of which may correspond to the rooms in the environment 101. The media playback system 100 can be established with one or more playback zones, after which additional zones may be added or removed to form, for example, the configuration shown in Figure 1A. Each zone may be given a name according to a different room or space such as the office 101e, master bathroom 101a, master bedroom 101b, the second bedroom 101c, kitchen 101h, dining room 101g, living room 101f, and/or the balcony 101i. In some examples, a single playback zone may include multiple rooms or spaces. In certain examples, a single room or space may include multiple playback zones.

[0022] In the illustrated example of Figure 1A, the master bathroom 101a, the second bedroom 101c, the office 101e, the living room 101f, the dining room 101g, the kitchen 101h, and the outdoor patio 101i each include one playback device 110, and the master bedroom 101b and the den 101d include a plurality of playback devices 110. In the master bedroom 101b, the playback devices 110l and 110m may be configured, for example, to play back audio content in synchrony as individual ones of playback devices 110, as a bonded playback zone, as a consolidated playback device, and/or any combination thereof. Similarly, in the den 101d, the playback devices 110h-j can be configured, for instance, to play back audio content in synchrony as individual ones of playback devices 110, as one or more bonded playback devices, and/or as one or more consolidated playback devices. Additional details regarding bonded and consolidated playback devices are described below with respect to Figures 1B and 1E.

[0023] In some examples, one or more of the playback zones in the environment 101 may each be playing different audio content. For instance, a user may be grilling on the patio 101i and listening to hip hop music being played by the playback device 110c while another user is preparing food in the kitchen 101h and listening to classical music played by the playback device 110b. In another example, a playback zone may play the same audio content in synchrony with another playback zone. For instance, the user may be in the office 101e listening to the playback device 110f playing back the same hip hop music being played back by playback device 110c on the patio 101i. In some examples, the playback devices 110c and 110f play back the hip hop music in synchrony such that the user perceives that the audio content is being played seamlessly (or at least substantially seamlessly) while moving between different playback zones. Additional details regarding audio playback synchronization among playback devices and/or zones can be found, for example, in U.S. Patent No. 8,234,395 entitled "System and method for synchronizing operations among a plurality of independently clocked digital data processing devices," which is incorporated herein by reference in its entirety.

a. Suitable Media Playback System

[0024] Figure 1B is a schematic diagram of the media playback system 100 and a cloud network 102. For ease of illustration, certain devices of the media playback system 100 and the cloud network 102 are omitted from Figure 1B. One or more communication links 103 (referred to hereinafter as "the links 103") communicatively couple the media playback system 100 and the cloud network 102.

[0025] The links 103 can comprise, for example, one or more wired networks, one or more wireless networks, one or more wide area networks (WAN), one or more local area networks (LAN), one or more personal area networks (PAN), one or more telecommunication networks (e.g., one or more Global System for Mobiles (GSM) networks, Code Division Multiple Access (CDMA) networks, Long-Term Evolution (LTE) networks, 5G communication networks, and/or other suitable data transmission protocol networks), etc. The cloud network 102 is configured to deliver media content (e.g., audio content, video content, photographs, social media content) to the media playback system 100 in response to a request transmitted from the media playback system 100 via the links 103. In some examples, the cloud network 102 is further configured to receive data (e.g., voice input data) from the media playback system 100 and correspondingly transmit commands and/or media content to the media playback system 100.

[0026] The cloud network 102 comprises computing devices 106 (identified separately as a first computing device 106a, a second computing device 106b, and a third computing device 106c). The computing devices 106 can comprise individual computers or servers, such as, for example, a media streaming service server storing audio and/or other media content, a voice service server, a social media server, a media playback system control server, etc. In some examples, one or more of the computing devices 106 comprise modules of a single computer or server. In certain examples, one or more of the computing devices 106 comprise one or more modules, computers, and/or servers. Moreover, while the cloud network 102 is described above in the context of a single cloud network, in some examples the cloud network 102 comprises a plurality of cloud networks comprising communicatively coupled computing devices. Furthermore, while the cloud network 102 is shown in Figure 1B as having three of the computing devices 106, in some examples the cloud network 102 comprises fewer (or more) than three computing devices 106.

[0027] The media playback system 100 is configured to receive media content from the networks 102 via the links 103. The received media content can comprise, for example, a Uniform Resource Identifier (URI) and/or a Uniform Resource Locator (URL). For instance, in some examples, the media playback system 100 can stream, download, or otherwise obtain data from a URI or a URL corresponding to the received media content. A network 104 communicatively couples the links 103 and at least a portion of the devices (e.g., one or more of the playback devices 110, NMDs 120, and/or control devices 130) of the media playback system 100. The network 104 can include, for example, a wireless network (e.g., a WiFi network, a Bluetooth network, a Z-Wave network, a ZigBee network, and/or other suitable wireless communication protocol network) and/or a wired network (e.g., a network comprising Ethernet, Universal Serial Bus (USB), and/or another suitable wired communication). As those of ordinary skill in the art will appreciate, as used herein, "WiFi" can refer to several different communication protocols including, for example, Institute of Electrical and Electronics Engineers (IEEE) 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ad, 802.11af, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax, 802.11ay, 802.15, etc. transmitted at 2.4 Gigahertz (GHz), 5 GHz, and/or another suitable frequency.

[0028] In some examples, the network 104 comprises a dedicated communication network that the media playback system 100 uses to transmit messages between individual devices and/or to transmit media content to and from media content sources (e.g., one or more of the computing devices 106). In certain examples, the network 104 is configured to be accessible only to devices in the media playback system 100, thereby reducing interference and competition with other household devices. In other examples, however, the network 104 comprises an existing household communication network (e.g., a household WiFi network). In some examples, the links 103 and the network 104 comprise one or more of the same networks. In some examples, for instance, the links 103 and the network 104 comprise a telecommunication network (e.g., an LTE network, a 5G network). Moreover, in some examples, the media playback system 100 is implemented without the network 104, and devices comprising the media playback system 100 can communicate with each other, for example, via one or more direct connections, PANs, telecommunication networks, and/or other suitable communication links.

[0029] In some examples, audio content sources may be regularly added or removed from the media playback system 100. In some examples, for instance, the media playback system 100 performs an indexing of media items when one or more media content sources are updated, added to, and/or removed from the media playback system 100. The media playback system 100 can scan identifiable media items in some or all folders and/or directories accessible to the playback devices 110, and generate or update a media content database comprising metadata (e.g., title, artist, album, track length) and other associated information (e.g., URIs, URLs) for each identifiable media item found. In some examples, for instance, the media content database is stored on one or more of the playback devices 110, network microphone devices 120, and/or control devices 130.
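As a loose sketch of the indexing behavior described above, the following Python walks a directory tree and records basic metadata in a small database. The mutagen library and the table schema are assumptions made for illustration; the application does not specify how the media content database is built.

import os
import sqlite3
from mutagen import File as MutagenFile  # reads common audio tag formats

def index_media(root: str, db_path: str = "media_index.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS tracks
                    (uri TEXT PRIMARY KEY, title TEXT, artist TEXT,
                     album TEXT, length_s REAL)""")
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            audio = MutagenFile(path, easy=True)
            if audio is None:  # not a recognized audio file
                continue
            tags = audio.tags or {}
            conn.execute(
                "INSERT OR REPLACE INTO tracks VALUES (?, ?, ?, ?, ?)",
                (path,
                 (tags.get("title") or [None])[0],
                 (tags.get("artist") or [None])[0],
                 (tags.get("album") or [None])[0],
                 getattr(audio.info, "length", None)))
    conn.commit()
    conn.close()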

[0030] In the illustrated example of Figure 1B, the playback devices 110l and 110m comprise a group 107a. The playback devices 110l and 110m can be positioned in different rooms in a household and be grouped together in the group 107a on a temporary or permanent basis based on user input received at the control device 130a and/or another control device 130 in the media playback system 100. When arranged in the group 107a, the playback devices 110l and 110m can be configured to play back the same or similar audio content in synchrony from one or more audio content sources. In certain examples, for instance, the group 107a comprises a bonded zone in which the playback devices 110l and 110m comprise left audio and right audio channels, respectively, of multi-channel audio content, thereby producing or enhancing a stereo effect of the audio content. In some examples, the group 107a includes additional playback devices 110. In other examples, however, the media playback system 100 omits the group 107a and/or other grouped arrangements of the playback devices 110.

[0031] The media playback system 100 includes the NMDs 120a and 120d, each comprising one or more microphones configured to receive voice utterances from a user. In the illustrated example of Figure 1B, the NMD 120a is a standalone device and the NMD 120d is integrated into the playback device 110n. The NMD 120a, for example, is configured to receive voice input 121 from a user 123. In some examples, the NMD 120a transmits data associated with the received voice input 121 to a voice assistant service (VAS) configured to (i) process the received voice input data and (ii) transmit a corresponding command to the media playback system 100. In some examples, for instance, the computing device 106c comprises one or more modules and/or servers of a VAS (e.g., a VAS operated by one or more of SONOS®, AMAZON®, GOOGLE®, APPLE®, MICROSOFT®). The computing device 106c can receive the voice input data from the NMD 120a via the network 104 and the links 103. In response to receiving the voice input data, the computing device 106c processes the voice input data (i.e., “Play Hey Jude by The Beatles”), and determines that the processed voice input includes a command to play a song (e.g., “Hey Jude”). The computing device 106c accordingly transmits commands to the media playback system 100 to play back “Hey Jude” by the Beatles from a suitable media service (e.g., via one or more of the computing devices 106) on one or more of the playback devices 110.

b. Suitable Playback Devices

[0032] Figure 1C is a block diagram of the playback device 110a comprising an input/output 111. The input/output 111 can include an analog I/O 111a (e.g., one or more wires, cables, and/or other suitable communication links configured to carry analog signals) and/or a digital I/O 111b (e.g., one or more wires, cables, or other suitable communication links configured to carry digital signals). In some examples, the analog I/O 111a is an audio line-in input connection comprising, for example, an auto-detecting 3.5mm audio line-in connection. In some examples, the digital I/O 111b comprises a Sony/Philips Digital Interface Format (S/PDIF) communication interface and/or cable and/or a Toshiba Link (TOSLINK) cable. In some examples, the digital I/O 111b comprises a High-Definition Multimedia Interface (HDMI) interface and/or cable. In some examples, the digital I/O 111b includes one or more wireless communication links comprising, for example, a radio frequency (RF), infrared, WiFi, Bluetooth, or another suitable communication protocol. In certain examples, the analog I/O 111a and the digital I/O 111b comprise interfaces (e.g., ports, plugs, jacks) configured to receive connectors of cables transmitting analog and digital signals, respectively, without necessarily including cables.

[0033] The playback device 110a, for example, can receive media content (e.g., audio content comprising music and/or other sounds) from a local audio source 105 via the input/output 111 (e.g., a cable, a wire, a PAN, a Bluetooth connection, an ad hoc wired or wireless communication network, and/or another suitable communication link). The local audio source 105 can comprise, for example, a mobile device (e.g., a smartphone, a tablet, a laptop computer) or another suitable audio component (e.g., a television, a desktop computer, an amplifier, a phonograph, a Blu-ray player, a memory storing digital media files). In some examples, the local audio source 105 includes local music libraries on a smartphone, a computer, a network-attached storage (NAS) device, and/or another suitable device configured to store media files. In certain examples, one or more of the playback devices 110, NMDs 120, and/or control devices 130 comprise the local audio source 105. In other examples, however, the media playback system omits the local audio source 105 altogether. In some examples, the playback device 110a does not include an input/output 111 and receives all audio content via the network 104.

[0034] The playback device 110a further comprises electronics 112, a user interface 113 (e.g., one or more buttons, knobs, dials, touch-sensitive surfaces, displays, touchscreens), and one or more transducers 114 (referred to hereinafter as “the transducers 114”). The electronics 112 is configured to receive audio from an audio source (e.g., the local audio source 105 via the input/output 111, or one or more of the computing devices 106a-c via the network 104 (Figure 1B)), amplify the received audio, and output the amplified audio for playback via one or more of the transducers 114. In some examples, the playback device 110a optionally includes one or more microphones 115 (e.g., a single microphone, a plurality of microphones, a microphone array) (hereinafter referred to as “the microphones 115”). In certain examples, for instance, the playback device 110a having one or more of the optional microphones 115 can operate as an NMD configured to receive voice input from a user and correspondingly perform one or more operations based on the received voice input.

[0035] In the illustrated example of Figure 1C, the electronics 112 comprise one or more processors 112a (referred to hereinafter as “the processors 112a”), memory 112b, software components 112c, a network interface 112d, one or more audio processing components 112g (referred to hereinafter as “the audio components 112g”), one or more audio amplifiers 112h (referred to hereinafter as “the amplifiers 112h”), and power 112i (e.g., one or more power supplies, power cables, power receptacles, batteries, induction coils, Power-over-Ethernet (PoE) interfaces, and/or other suitable sources of electric power). In some examples, the electronics 112 optionally include one or more other components 112j (e.g., one or more sensors, video displays, touchscreens, battery charging bases).

[0036] The processors 112a can comprise clock-driven computing component(s) configured to process data, and the memory 112b can comprise a computer-readable medium (e.g., a tangible, non-transitory computer-readable medium, data storage loaded with one or more of the software components 112c) configured to store instructions for performing various operations and/or functions. The processors 112a are configured to execute the instructions stored on the memory 112b to perform one or more of the operations. The operations can include, for example, causing the playback device 110a to retrieve audio data from an audio source (e.g., one or more of the computing devices 106a-c (Figure 1B)) and/or another one of the playback devices 110. In some examples, the operations further include causing the playback device 110a to send audio data to another one of the playback devices 110 and/or another device (e.g., one of the NMDs 120). Certain examples include operations causing the playback device 110a to pair with another of the one or more playback devices 110 to enable a multi-channel audio environment (e.g., a stereo pair, a bonded zone).

[0037] The processors 112a can be further configured to perform operations causing the playback device 110a to synchronize playback of audio content with another of the one or more playback devices 110. As those of ordinary skill in the art will appreciate, during synchronous playback of audio content on a plurality of playback devices, a listener will preferably be unable to perceive time-delay differences between playback of the audio content by the playback device 110a and the one or more other playback devices 110. Additional details regarding audio playback synchronization among playback devices can be found, for example, in U.S. Patent No. 8,234,395, which was incorporated by reference above.

[0038] In some examples, the memory 112b is further configured to store data associated with the playback device 110a, such as one or more zones and/or zone groups of which the playback device 110a is a member, audio sources accessible to the playback device 110a, and/or a playback queue that the playback device 110a (and/or another of the one or more playback devices) can be associated with. The stored data can comprise one or more state variables that are periodically updated and used to describe a state of the playback device 110a. The memory 112b can also include data associated with a state of one or more of the other devices (e.g., the playback devices 110, NMDs 120, control devices 130) of the media playback system 100. In some examples, for instance, the state data is shared during predetermined intervals of time (e.g., every 5 seconds, every 10 seconds, every 60 seconds) among at least a portion of the devices of the media playback system 100, so that one or more of the devices have the most recent data associated with the media playback system 100.
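
As a rough illustration of the periodic state sharing described above, the following sketch broadcasts a snapshot of a device's state variables on a schedule; the interval, the payload fields, and the broadcast callback are assumptions made for the example rather than elements of the disclosure.

import time

class PlaybackDeviceState:
    """Toy model of a device that shares its state variables on a schedule."""

    def __init__(self, device_id, share_interval_s=10):
        self.device_id = device_id
        self.share_interval_s = share_interval_s  # e.g., every 5, 10, or 60 seconds
        self.state = {"zone_group": None, "playback_queue": [], "volume": 0.5}

    def run(self, broadcast, cycles=3):
        # Periodically push a snapshot so peer devices hold recent state data.
        for _ in range(cycles):
            broadcast(self.device_id, dict(self.state))
            time.sleep(self.share_interval_s)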

[0039] The network interface 112d is configured to facilitate a transmission of data between the playback device 110a and one or more other devices on a data network such as, for example, the links 103 and/or the network 104 (Figure 1B). The network interface 112d is configured to transmit and receive data corresponding to media content (e.g., audio content, video content, text, photographs) and other signals (e.g., non-transitory signals) comprising digital packet data including an Internet Protocol (IP)-based source address and/or an IP-based destination address. The network interface 112d can parse the digital packet data such that the electronics 112 properly receives and processes the data destined for the playback device 110a.

[0040] In the illustrated example of Figure 1C, the network interface 112d comprises one or more wireless interfaces 112e (referred to hereinafter as “the wireless interface 112e”). The wireless interface 112e (e.g., a suitable interface comprising one or more antennae) can be configured to wirelessly communicate with one or more other devices (e.g., one or more of the other playback devices 110, NMDs 120, and/or control devices 130) that are communicatively coupled to the network 104 (Figure 1B) in accordance with a suitable wireless communication protocol (e.g., WiFi, Bluetooth, LTE). In some examples, the network interface 112d optionally includes a wired interface 112f (e.g., an interface or receptacle configured to receive a network cable such as an Ethernet, USB-A, USB-C, and/or Thunderbolt cable) configured to communicate over a wired connection with other devices in accordance with a suitable wired communication protocol. In certain examples, the network interface 112d includes the wired interface 112f and excludes the wireless interface 112e. In some examples, the electronics 112 excludes the network interface 112d altogether and transmits and receives media content and/or other data via another communication path (e.g., the input/output 111).

[0041] The audio components 112g are configured to process and/or filter data comprising media content received by the electronics 112 (e.g., via the input/output 111 and/or the network interface 112d) to produce output audio signals. In some examples, the audio processing components 112g comprise, for example, one or more digital-to-analog converters (DACs), audio preprocessing components, audio enhancement components, digital signal processors (DSPs), and/or other suitable audio processing components, modules, circuits, etc. In certain examples, one or more of the audio processing components 112g can comprise one or more subcomponents of the processors 112a. In some examples, the electronics 112 omits the audio processing components 112g. In some examples, for instance, the processors 112a execute instructions stored on the memory 112b to perform audio processing operations to produce the output audio signals.

[0042] The amplifiers 112h are configured to receive and amplify the audio output signals produced by the audio processing components 112g and/or the processors 112a. The amplifiers 112h can comprise electronic devices and/or components configured to amplify audio signals to levels sufficient for driving one or more of the transducers 114. In some examples, for instance, the amplifiers 112h include one or more switching or class-D power amplifiers. In other examples, however, the amplifiers include one or more other types of power amplifiers (e.g., linear gain power amplifiers, class-A amplifiers, class-B amplifiers, class-AB amplifiers, class-C amplifiers, class-E amplifiers, class-F amplifiers, class-G and/or class-H amplifiers, and/or another suitable type of power amplifier). In certain examples, the amplifiers 112h comprise a suitable combination of two or more of the foregoing types of power amplifiers. Moreover, in some examples, individual ones of the amplifiers 112h correspond to individual ones of the transducers 114. In other examples, however, the electronics 112 includes a single one of the amplifiers 112h configured to output amplified audio signals to a plurality of the transducers 114. In some other examples, the electronics 112 omits the amplifiers 112h.

[0043] The transducers 114 (e.g., one or more speakers and/or speaker drivers) receive the amplified audio signals from the amplifiers 112h and render or output the amplified audio signals as sound (e.g., audible sound waves having a frequency between about 20 Hertz (Hz) and 20 kilohertz (kHz)). In some examples, the transducers 114 can comprise a single transducer. In other examples, however, the transducers 114 comprise a plurality of audio transducers. In some examples, the transducers 114 comprise more than one type of transducer. For example, the transducers 114 can include one or more low frequency transducers (e.g., subwoofers, woofers), mid-range frequency transducers (e.g., mid-range transducers, mid-woofers), and one or more high frequency transducers (e.g., one or more tweeters). As used herein, “low frequency” can generally refer to audible frequencies below about 500 Hz, “mid-range frequency” can generally refer to audible frequencies between about 500 Hz and about 2 kHz, and “high frequency” can generally refer to audible frequencies above 2 kHz. In certain examples, however, one or more of the transducers 114 comprise transducers that do not adhere to the foregoing frequency ranges. For example, one of the transducers 114 may comprise a mid-woofer transducer configured to output sound at frequencies between about 200 Hz and about 5 kHz.

[0044] By way of illustration, SONOS, Inc. presently offers (or has offered) for sale certain playback devices including, for example, a “SONOS ONE,” “MOVE,” “PLAY:5,” “BEAM,” “PLAYBAR,” “PLAYBASE,” “PORT,” “BOOST,” “AMP,” and “SUB.” Other suitable playback devices may additionally or alternatively be used to implement the playback devices of the examples disclosed herein. Additionally, one of ordinary skill in the art will appreciate that a playback device is not limited to the examples described herein or to SONOS product offerings. In some examples, for instance, one or more of the playback devices 110 comprise wired or wireless headphones (e.g., over-the-ear headphones, on-ear headphones, in-ear earphones). In other examples, one or more of the playback devices 110 comprise a docking station and/or an interface configured to interact with a docking station for personal mobile media playback devices. In certain examples, a playback device may be integral to another device or component such as a television, a lighting fixture, or some other device for indoor or outdoor use. In some examples, a playback device omits a user interface and/or one or more transducers. For example, Figure 1D is a block diagram of a playback device 110p comprising the input/output 111 and electronics 112 without the user interface 113 or transducers 114.

[0045] Figure 1E is a block diagram of a bonded playback device 110q comprising the playback device 110a (Figure 1C) sonically bonded with the playback device 110i (e.g., a subwoofer) (Figure 1A). In the illustrated example, the playback devices 110a and 110i are separate ones of the playback devices 110 housed in separate enclosures. In some examples, however, the bonded playback device 110q comprises a single enclosure housing both the playback devices 110a and 110i. The bonded playback device 110q can be configured to process and reproduce sound differently than an unbonded playback device (e.g., the playback device 110a of Figure 1C) and/or paired or bonded playback devices (e.g., the playback devices 110l and 110m of Figure 1B). In some examples, for instance, the playback device 110a is a full-range playback device configured to render low frequency, mid-range frequency, and high frequency audio content, and the playback device 110i is a subwoofer configured to render low frequency audio content. In some examples, the playback device 110a, when bonded with the playback device 110i, is configured to render only the mid-range and high frequency components of a particular audio content, while the playback device 110i renders the low-frequency component of the particular audio content. In some examples, the bonded playback device 110q includes additional playback devices and/or another bonded playback device. Additional playback device examples are described in further detail below with respect to Figures 2A-2C.

c. Suitable Network Microphone Devices (NMDs)

[0046] Figure 1F is a block diagram of the NMD 120a (Figures 1A and 1B). The NMD 120a includes one or more voice processing components 124 (hereinafter “the voice components 124”) and several components described with respect to the playback device 110a (Figure 1C), including the processors 112a, the memory 112b, and the microphones 115. The NMD 120a optionally comprises other components also included in the playback device 110a (Figure 1C), such as the user interface 113 and/or the transducers 114. In some examples, the NMD 120a is configured as a media playback device (e.g., one or more of the playback devices 110), and further includes, for example, one or more of the audio components 112g (Figure 1C), the transducers 114, and/or other playback device components. In certain examples, the NMD 120a comprises an Internet of Things (IoT) device such as, for example, a thermostat, alarm panel, fire and/or smoke detector, etc. In some examples, the NMD 120a comprises the microphones 115, the voice processing components 124, and only a portion of the components of the electronics 112 described above with respect to Figure 1B. In some examples, for instance, the NMD 120a includes the processor 112a and the memory 112b (Figure 1B), while omitting one or more other components of the electronics 112. In some examples, the NMD 120a includes additional components (e.g., one or more sensors, cameras, thermometers, barometers, hygrometers).

[0047] In some examples, an NMD can be integrated into a playback device. Figure 1G is a block diagram of a playback device 110r comprising an NMD 120d. The playback device 110r can comprise many or all of the components of the playback device 110a and further include the microphones 115 and voice processing components 124 (Figure 1F). The playback device 110r optionally includes an integrated control device 130c. The control device 130c can comprise, for example, a user interface (e.g., the user interface 113 of Figure 1B) configured to receive user input (e.g., touch input, voice input) without a separate control device. In other examples, however, the playback device 110r receives commands from another control device (e.g., the control device 130a of Figure 1B).

[0048] Referring again to Figure 1F, the microphones 115 are configured to acquire, capture, and/or receive sound from an environment (e.g., the environment 101 of Figure 1A) and/or a room in which the NMD 120a is positioned. The received sound can include, for example, vocal utterances, audio played back by the NMD 120a and/or another playback device, background voices, ambient sounds, etc. The microphones 115 convert the received sound into electrical signals to produce microphone data. The voice processing components 124 receive and analyze the microphone data to determine whether a voice input is present in the microphone data. The voice input can comprise, for example, an activation word followed by an utterance including a user request. As those of ordinary skill in the art will appreciate, an activation word is a word or other audio cue signifying a user voice input. For instance, in querying the AMAZON® VAS, a user might speak the activation word “Alexa.” Other examples include “Ok, Google” for invoking the GOOGLE® VAS and “Hey, Siri” for invoking the APPLE® VAS.

[0049] After detecting the activation word, the voice processing components 124 monitor the microphone data for an accompanying user request in the voice input. The user request may include, for example, a command to control a third-party device, such as a thermostat (e.g., a NEST® thermostat), an illumination device (e.g., a PHILIPS HUE® lighting device), or a media playback device (e.g., a SONOS® playback device). For example, a user might speak the activation word “Alexa” followed by the utterance “set the thermostat to 68 degrees” to set a temperature in a home (e.g., the environment 101 of Figure 1A). The user might speak the same activation word followed by the utterance “turn on the living room” to turn on illumination devices in a living room area of the home. The user may similarly speak an activation word followed by a request to play a particular song, an album, or a playlist of music on a playback device in the home.

d. Suitable Control Devices

[0050] Figure 1H is a partially schematic diagram of the control device 130a (Figures 1A and 1B). As used herein, the term “control device” can be used interchangeably with “controller” or “control system.” Among other features, the control device 130a is configured to receive user input related to the media playback system 100 and, in response, cause one or more devices in the media playback system 100 to perform an action(s) or operation(s) corresponding to the user input. In the illustrated example, the control device 130a comprises a smartphone (e.g., an iPhone™, an Android phone) on which media playback system controller application software is installed. In some examples, the control device 130a comprises, for example, a tablet (e.g., an iPad™), a computer (e.g., a laptop computer, a desktop computer), and/or another suitable device (e.g., a television, an automobile audio head unit, an IoT device). In certain examples, the control device 130a comprises a dedicated controller for the media playback system 100. In other examples, as described above with respect to Figure 1G, the control device 130a is integrated into another device in the media playback system 100 (e.g., one or more of the playback devices 110, NMDs 120, and/or other suitable devices configured to communicate over a network).

[0051] The control device 130a includes electronics 132, a user interface 133, one or more speakers 134, and one or more microphones 135. The electronics 132 comprise one or more processors 132a (referred to hereinafter as “the processors 132a”), a memory 132b, software components 132c, and a network interface 132d. The processor 132a can be configured to perform functions relevant to facilitating user access, control, and configuration of the media playback system 100. The memory 132b can comprise data storage that can be loaded with one or more of the software components executable by the processor 132a to perform those functions. The software components 132c can comprise applications and/or other executable software configured to facilitate control of the media playback system 100. The memory 132b can be configured to store, for example, the software components 132c, media playback system controller application software, and/or other data associated with the media playback system 100 and the user.

[0052] The network interface 132d is configured to facilitate network communications between the control device 130a and one or more other devices in the media playback system 100, and/or one or more remote devices. In some examples, the network interface 132d is configured to operate according to one or more suitable communication industry standards (e.g., infrared, radio, wired standards including IEEE 802.3, wireless standards including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G, LTE). The network interface 132d can be configured, for example, to transmit data to and/or receive data from the playback devices 110, the NMDs 120, other ones of the control devices 130, one of the computing devices 106 of Figure 1B, devices comprising one or more other media playback systems, etc. The transmitted and/or received data can include, for example, playback device control commands, state variables, and playback zone and/or zone group configurations. For instance, based on user input received at the user interface 133, the network interface 132d can transmit a playback device control command (e.g., volume control, audio playback control, audio content selection) from the control device 130 to one or more of the playback devices 110. The network interface 132d can also transmit and/or receive configuration changes such as, for example, adding/removing one or more playback devices 110 to/from a zone, adding/removing one or more zones to/from a zone group, forming a bonded or consolidated player, separating one or more playback devices from a bonded or consolidated player, among others.

[0053] The user interface 133 is configured to receive user input and can facilitate control of the media playback system 100. The user interface 133 includes media content art 133a (e.g., album art, lyrics, videos), a playback status indicator 133b (e.g., an elapsed and/or remaining time indicator), a media content information region 133c, a playback control region 133d, and a zone indicator 133e. The media content information region 133c can include a display of relevant information (e.g., title, artist, album, genre, release year) about media content currently playing and/or media content in a queue or playlist. The playback control region 133d can include selectable (e.g., via touch input and/or via a cursor or another suitable selector) icons to cause one or more playback devices in a selected playback zone or zone group to perform playback actions such as, for example, play or pause, fast forward, rewind, skip to next, skip to previous, enter/exit shuffle mode, enter/exit repeat mode, enter/exit crossfade mode, etc. The playback control region 133d may also include selectable icons to modify equalization settings, playback volume, and/or other suitable playback actions. In the illustrated example, the user interface 133 comprises a display presented on a touch screen interface of a smartphone (e.g., an iPhone™, an Android phone). In some examples, however, user interfaces of varying formats, styles, and interactive sequences may alternatively be implemented on one or more network devices to provide comparable control access to a media playback system.

[0054] The one or more speakers 134 (e.g., one or more transducers) can be configured to output sound to the user of the control device 130a. In some examples, the one or more speakers comprise individual transducers configured to correspondingly output low frequencies, mid-range frequencies, and/or high frequencies. In some examples, for instance, the control device 130a is configured as a playback device (e.g., one of the playback devices 110). Similarly, in some examples the control device 130a is configured as an NMD (e.g., one of the NMDs 120), receiving voice commands and other sounds via the one or more microphones 135.

[0055] The one or more microphones 135 can comprise, for example, one or more condenser microphones, electret condenser microphones, dynamic microphones, and/or other suitable types of microphones or transducers. In some examples, two or more of the microphones 135 are arranged to capture location information of an audio source (e.g., voice, audible sound) and/or configured to facilitate filtering of background noise. Moreover, in certain examples, the control device 130a is configured to operate as a playback device and an NMD. In other examples, however, the control device 130a omits the one or more speakers 134 and/or the one or more microphones 135. For instance, the control device 130a may comprise a device (e.g., a thermostat, an IoT device, a network device) comprising a portion of the electronics 132 and the user interface 133 (e.g., a touch screen) without any speakers or microphones.

III. Example Playback Devices

[0056] Figure 2A is a front isometric view of a playback device 210 configured in accordance with examples of the disclosed technology. Figure 2B is a front isometric view of the playback device 210 without a grille 216e. Figure 2C is an exploded view of the playback device 210. Referring to Figures 2A-2C together, the playback device 210 comprises a housing 216 that includes an upper portion 216a, a right or first side portion 216b, a lower portion 216c, a left or second side portion 216d, the grille 216e, and a rear portion 216f. A plurality of fasteners 216g (e.g., one or more screws, rivets, clips) attaches a frame 216h to the housing 216. A cavity 216j (Figure 2C) in the housing 216 is configured to receive the frame 216h and electronics 212. The frame 216h is configured to carry a plurality of transducers 214 (identified individually in Figure 2B as transducers 214a-f). The electronics 212 (e.g., the electronics 112 of Figure 1C) is configured to receive audio content from an audio source and send electrical signals corresponding to the audio content to the transducers 214 for playback.

[0057] The transducers 214 are configured to receive the electrical signals from the electronics 212, and further configured to convert the received electrical signals into audible sound during playback. For instance, the transducers 214a-c (e.g., tweeters) can be configured to output high frequency sound (e.g., sound waves having a frequency greater than about 2 kHz). The transducers 214d-f (e.g., mid-woofers, woofers, midrange speakers) can be configured to output sound at frequencies lower than the transducers 214a-c (e.g., sound waves having a frequency lower than about 2 kHz). In some examples, the playback device 210 includes a number of transducers different than those illustrated in Figures 2A-2C. For example, the playback device 210 can include fewer than six transducers (e.g., one, two, three). In other examples, however, the playback device 210 includes more than six transducers (e.g., nine, ten). Moreover, in some examples, all or a portion of the transducers 214 are configured to operate as a phased array to desirably adjust (e.g., narrow or widen) a radiation pattern of the transducers 214, thereby altering a user's perception of the sound emitted from the playback device 210.

[0058] In the illustrated example of Figures 2A-2C, a filter 216i is axially aligned with the transducer 214b. The filter 216i can be configured to desirably attenuate a predetermined range of frequencies that the transducer 214b outputs to improve sound quality and a perceived sound stage output collectively by the transducers 214. In some examples, however, the playback device 210 omits the filter 216i. In other examples, the playback device 210 includes one or more additional filters aligned with the transducer 214b and/or at least another of the transducers 214.

[0059] Figure 3A is a perspective view of a playback device 310, and Figure 3B shows the device 310 with the outer body drawn transparently to illustrate the plurality of transducers 314a-j therein (collectively “transducers 314”). The transducers 314 can be similar or identical to any one of the transducers 214a-f described previously. In this example, the playback device 310 takes the form of a soundbar that is elongated along a horizontal axis A1 and is configured to face along a primary sound axis A2 that is substantially orthogonal to the first horizontal axis A1. In other examples, the playback device 310 can assume other forms, for example having more or fewer transducers, having other form factors, or having any other suitable modifications with respect to the example shown in Figures 3A and 3B. In various implementations, the playback device 310 can serve as a home theatre primary playback device, and may be placed in a center front position of a home theatre listening environment. In such a configuration, the playback device 310 can play back home theatre audio synchronously with playback via one or more satellite playback devices, which can be arranged about the listening environment in a suitable configuration.

[0060] The playback device 310 can include individual transducers 314a-j oriented in different directions or otherwise configured to direct sound along different sound axes. For example, the transducers 314c-g can be configured to direct sound primarily along directions parallel to the primary sound axis A2 of the playback device 310. Additionally, the playback device 310 can include left and right up-firing transducers (e.g., transducers 314b and 314h) that are configured to direct sound along axes that are angled vertically with respect to the primary sound axis A2. For example, the left up-firing transducer 314b is configured to direct sound along the axis A3, which is vertically angled with respect to the horizontal primary axis A2. In some examples, the up-firing sound axis A3 can be angled with respect to the primary sound axis A2 by between about 50 degrees and about 90 degrees, between about 60 degrees and about 80 degrees, or about 70 degrees.

[0061] The playback device 310 can optionally include one or more side-firing transducers (e.g., transducers 314a, 314b, 314i, and 314j), which can direct sound along axes that are horizontally angled with respect to the primary sound axis A2. In the illustrated example, the outermost transducers 314a and 314j can be configured to direct sound primarily along the first horizontal axis A1 or at least partially horizontally angled therefrom, while the side-firing transducers 314b and 314i are configured to direct sound along an axis that lies between the axes A1 and A2. For example, the left side-firing transducer 314b is configured to direct sound along axis A4.

[0062] In playback devices that do not have such side-firing transducers, side-propagating audio can be achieved by use of arrays, in which the audio output by each transducer sums in a manner such that the combined output has a directivity and is oriented along a side-propagating axis.

[0063] In operation, the playback device 310 can be utilized to play back 3D audio content that includes a vertical component (also referred to herein as a “height component”). As noted previously, certain 3D audio or other immersive audio formats include one or more height channels in addition to any lateral (e.g., left, right, front) channels. Examples of such 3D audio formats include DOLBY ATMOS, MPEG-H, and DTS:X formats. In playback devices that do not have such up-firing transducers, upward-propagating audio can be achieved by use of arrays, in which the audio output by each transducer sums in a manner such that the combined output has a directivity and is oriented along a vertically propagating axis.

[0064] In example implementations, various techniques described herein may be carried out with a playback device that includes multiple audio transducers, and may optionally be used as a multichannel satellite playback device for home theatre applications. By way of illustration, Figure 4A is an exploded view of a playback device 410 that includes a plurality of speakers 414. In particular, the speakers 414 include a forward-firing transducer 414a, a side-firing transducer 414b, a side-firing transducer 414c, an upward-firing transducer 414d, a side-firing transducer 414e, and a side-firing transducer 414f (not shown). The speakers 414 are carried in a housing 430. The playback device 410 may otherwise include components the same as or similar to the playback devices 110a (Figure 1C), 210 (Figure 2A), or 310 (Figure 3A), which may be carried by the housing 430.

[0065] As shown in the exploded view of Figure 4A, the forward-firing transducer 414a is comprised of several components, including a first component 414a-1 and a second component 414a-2. In assembly, the first component 414a-1 and the second component 414a-2 are joined to form the forward-firing transducer 414a. In other examples, the forward-firing transducer 414a may be formed from a single component. Within example implementations, the other speakers 414 as well as the other components may be formed from multiple components as well.

[0066] Within examples, the speakers may have a particular arrangement relative to one another. Figure 4B is a partial view of the playback device 410, which illustrates the speakers 414 in an example arrangement. As shown, the forward-firing transducer 414a is oriented in a first direction (i.e., forward). The side-firing transducer 414b and the side-firing transducer 414f are implemented as respective woofers and are oriented in second and third directions that are approximately 180° from one another and approximately 90° from the first direction in the horizontal plane.

[0067] In this example, three of the speakers 414 are implemented as tweeters. These include the side-firing transducer 414c and the side-firing transducer 414e, which are oriented similarly to the side-firing transducer 414b and the side-firing transducer 414f. The tweeters also include the upward-firing transducer 414d, which is oriented in a fourth direction approximately 70° from the first direction in the vertical plane. As shown, the side-firing transducer 414c, the side-firing transducer 414e, and the upward-firing transducer 414d also include respective horns.

[0068] The arrangements of the transducers 414 may have particular acoustic effects. For instance, the arrangement of the side-firing transducer 414c and the side-firing transducer 414e may provide an ambient effect when surround content is output via the side-firing transducer 414c and the side-firing transducer 414e, respectively. The similar arrangement of the side-firing transducer 414b and the side-firing transducer 414f may have a similar effect. In contrast, the forward-firing transducer 414a has a relatively more direct sound (assuming that the playback device 410 is oriented such that the primary direction of the forward-firing transducer 414a is more oriented toward the user(s) relative to the primary directions of output of the side-firing transducers 414c and 414e).

[0001] To provide further illustration, Figure 4C is a view showing the playback device 410 as partially assembled. Figure 4C shows the housing 430 carrying the side-firing transducer 414b, the side-firing transducer 414c, the up-firing transducer 414d, and the side-firing transducer 414e, as well as the second component 414a-2 of the forward-firing transducer 414a. The first component 414a-1 is not shown in Figure 4C in order to provide a partial interior view of the housing 430.

[0002] Figure 4D is a further view showing the playback device 410 also as partially assembled (without the exterior speaker grilles and trim). Figure 4D shows the housing 430 carrying the side-firing transducer 414b, the side-firing transducer 414c, the up-firing transducer 414d, and the side-firing transducer 414e. In this view, the first component 414a-1 of the forward-firing transducer 414a is connected to the second component 414a-2.

[0003] As illustrated in Figure 4D, the transducers 414 of the playback device 410 are arranged to output audio along a variety of sound axes. For instance, the forward-firing transducer 414a is configured to output audio along a forward (or primary) sound axis 460, and the up-firing transducer 414d is configured to output audio primarily along a vertical sound axis 470. The side-firing transducers 414b and 414c are configured to output audio primarily along a first side sound axis 480, either individually or in combination. Additionally, opposing side-firing transducers 414e and 414f (not shown in Figure 4D) are configured to output audio primarily along a second side sound axis 490, either individually or in combination.

[0004] As illustrated, the vertical sound axis 470 is vertically angled with respect to the forward sound axis 460. In some examples, the vertical sound axis 470 can be angled with respect to the forward sound axis 460 by between about 50 degrees and about 90 degrees, between about 60 degrees and about 80 degrees, or about 70 degrees. The first side sound axis 480 and the second side sound axis 490 can each be horizontally angled with respect to the forward sound axis 460, for example by about 90 degrees from the forward sound axis 460, and about 180 degrees from one another. In at least some implementations, one or both of the side sound axes 480, 490 are also angled vertically with respect to the forward sound axis 460, for example by 10, 20, 30, 40 degrees or more.

[0069] In operation, the playback device 410 can be utilized to play back 3D audio content that includes a vertical component, either as a standalone device or as one component of a home theatre arrangement (e.g., with the playback device 410 serving as a home theatre primary, front surround, rear surround, or other discrete satellite playback device). As noted previously, certain 3D audio or other immersive audio formats include one or more vertical channels in addition to any lateral (e.g., left, right, front) channels. Examples of such 3D audio formats include DOLBY ATMOS, MPEG-H, and DTS:X formats.

IV. Example Techniques for Home Theatre Audio Playback with Multichannel Satellite Playback Devices

[0070] Home theatre audio configurations can involve a number of discrete playback devices distributed about the listening environment. In some instances, a primary playback device (e.g., a soundbar) can be configured to be placed in a front center position of the listening environment, and one or more satellite playback devices can be placed in various positions about the listening environment. Depending on the type of audio content, the number and type of playback devices, and/or user preferences, satellite playback devices may be placed in front right, front left, rear left, rear right, right side, left side, or other suitable positions relative to the intended listening position. Although conventional home theatre configurations utilize single-channel satellite playback devices, employing multichannel satellite playback devices can achieve a more immersive listening experience. Employing such multichannel satellite playback devices, each of which may be capable of outputting a plurality of discrete audio channels along a plurality of sound axes, also presents challenges in some contexts. A number of techniques are described herein for taking advantage of the increased performance of multichannel satellite playback devices while avoiding or overcoming the potential challenges associated with their use.

a. Home Theatre Audio Data Distribution for Multichannel Satellite Playback Devices

[0071] One potential drawback of using multichannel satellite playback devices for a home theatre arrangement arises due to limitations on data transfer over a wireless network. As the number of playback devices in the home theatre zone increases, and the number of channels handled by each playback device also increases, the total number of channels to be wirelessly transmitted over a network for synchronous playback may exceed the available data transmission limits. For example, consider a home theatre zone including a primary playback device (e.g., a soundbar or another playback device connected to and/or comprising a display device) and at least two multichannel rear satellite playback devices. Each multichannel rear satellite playback device may be capable of outputting three or more individual audio channels (e.g., side surround, rear surround, rear height), which would necessitate transmitting three data channels from the primary playback device to each rear satellite playback device, resulting in at least six total channels to be wirelessly transmitted. If discrete front left and right satellite playback devices and one or more subwoofers are also added, the primary playback device may need to transmit 12 or more data channels. As modern wireless environments can be quite congested with network traffic, it may not be feasible for a device (e.g., a home theatre primary playback device) to transmit all channels to the various playback devices.
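
As a rough, illustrative tally of the channel counts in this example (the device roles and per-device channel counts below are assumptions made for the sketch, not a configuration prescribed by this disclosure):

satellite_channels = {
    "rear_right": 3,   # side surround, rear surround, rear height
    "rear_left": 3,
    "front_right": 2,  # e.g., front right plus front right height
    "front_left": 2,
    "subwoofer_1": 1,  # LFE
    "subwoofer_2": 1,
}
total = sum(satellite_channels.values())
print(f"channels to transmit wirelessly: {total}")  # prints 12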

[0072] Examples of the present technology can address these and other problems by intelligently downmixing incoming audio data into a smaller number of channels for transmission to multichannel satellite playback devices, which can then upmix the received audio data for synchronous playback with other playback devices within the environment. For example, rather than transmitting six discrete full-spectrum audio channels, the media playback system can intelligently downmix the data into a smaller number of channels for wireless transmission to the satellite playback devices. The satellite playback devices may then receive the downmixed data and upmix the received channels for playback. The parameters of the downmixing and/or upmixing can be based on, for instance, similarities among two or more audio channels, typical audio channel content, playback device characteristics, playback device placement, room acoustics, listener location, media content, network conditions, or any other suitable conditions.

[0073] Figure 5 is a schematic block diagram of a system 500 for distributing audio data to multichannel satellite playback devices. The system 500 illustrates a portion of an audio processing chain involving a primary playback device such as the primary playback device 310 of Figures 3A and 3B (e.g., a soundbar or other suitable playback device) and a satellite playback device 410 (e.g., a multichannel playback device). Although only a single satellite playback device 410 is shown, in various implementations the audio distribution scheme shown here can be extended to any number of discrete satellite playback devices. In the illustrated example, the primary playback device comprises a single playback device. In some examples, however, the system 500 comprises two or more primary playback devices. Moreover, in some examples, the primary playback device 310 comprises a display device (e.g., a television, projector, or other suitable video display device) that transmits audio directly to the satellite playback device 410. In certain examples, the primary playback device 310 lacks transducers and/or amplifiers and comprises a media device connected (e.g., via a wired connection, via a wireless connection, via a direct connection in contact with an interface such as an HDMI interface) to a display device. For instance, the primary playback device 310 may comprise a streaming stick, a set-top box, a dongle, etc.

[0074] With reference to Figure 5, the primary playback device 310 receives and/or decodes audio data 504, which includes n audio channels. The number of audio channels can vary depending on the particular audio content, the encoding format, etc. In various examples, the audio data 504 can be received over a physical link to an audio source (e.g., an eARC connection to a display device) or over a wireless link to an audio source (e.g., a network connection to remote computing devices associated with a media content service).

[0075] The primary playback device 310 includes a downmixer 506, which can take the form of circuitry and/or software components configured to receive audio data having n channels and to output audio data having m channels, where m<n. For example, three incoming audio channels can be downmixed to two channels. In various examples, the downmixer 506 can be configured to modify the incoming n audio channels according to a downmixing scheme, which optionally can vary according to certain parameters. For instance, the downmixer 506 can downmix more or less aggressively (e.g., with a greater or lesser reduction in the number of channels) under certain conditions. In some instances, the downmixer 506 may not downmix the incoming n audio channels at all, but instead may pass through the incoming audio data 504 without modification.
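
One minimal way to realize such a downmixer is as a linear mix with a per-scheme gain matrix, as sketched below for the three-to-two example. The matrix coefficients and channel ordering are illustrative assumptions, not mixing gains prescribed by this disclosure; a second, more aggressive scheme could simply substitute a matrix with fewer rows.

import numpy as np

# Rows correspond to the m output channels; columns correspond to the n input
# channels (here assumed to be: side surround, rear surround, rear height).
DOWNMIX_SCHEME_A = np.array([
    [1.0, 0.7, 0.0],   # output 1: side surround plus attenuated rear surround
    [0.0, 0.7, 1.0],   # output 2: rear height plus attenuated rear surround
])

def downmix(frames, matrix=DOWNMIX_SCHEME_A):
    """Mix a (num_samples, n) array of samples down to (num_samples, m)."""
    return frames @ matrix.T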

[0076] The downmixed audio data (e.g., having m audio channels) is then passed to a transmitter 508 (e.g., a network interface or other communication component(s)), which transmits the downmixed audio data to a corresponding receiver 510 (e.g., a network interface or other communication component(s)) of the satellite playback device 410. The received audio data is then passed to an upmixer 512 of the satellite playback device. The upmixer 512 can take the form of circuitry and/or software components configured to receive audio data having m channels and to output audio data having n channels, where m<n. For example, two received channels of downmixed audio data can be upmixed via the upmixer 512 to output three channels of audio data. The satellite playback device 410 then outputs n audio channels as shown in block 514. This can involve playing back the n audio channels via a plurality of transducers of the satellite playback device 410. In various examples, the n audio channels can be output via arraying techniques such that some or all of the channels can be output via a plurality of transducers, and a single transducer can participate in outputting more than one channel. In some instances, a single transducer can output only a single audio channel (e.g., an up-firing transducer may output only a rear height audio channel). In various examples, the downmixing and upmixing process can be lossy or lossless.
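
A complementary upmixer sketch, paired with the downmix matrix above, might look as follows. The coefficients are again illustrative assumptions; a practical upmixer could instead be content-adaptive, and as noted above the round trip may be lossy.

import numpy as np

# Rows correspond to the n reconstructed channels; columns correspond to the
# m received channels from the downmix sketch above.
UPMIX_MATRIX = np.array([
    [1.0, 0.0],   # side surround drawn mostly from received channel 1
    [0.5, 0.5],   # rear surround reconstructed from both received channels
    [0.0, 1.0],   # rear height drawn mostly from received channel 2
])

def upmix(frames):
    """Expand a (num_samples, m) array back to (num_samples, n) for playback,
    e.g., for arraying across the satellite's transducers."""
    return frames @ UPMIX_MATRIX.T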

[0077] In various implementations, the primary playback device 310 may downmix only a subset of the total incoming audio channels. For example, for audio encoded in a 7.1.4 format, the incoming audio can include 12 total channels: center, front left, front right, front right height, front left height, right side surround, left side surround, rear right surround, rear left surround, rear right height, rear left height, and low-frequency effects (LFE). If the system 500 includes two discrete rear satellite playback devices and a discrete subwoofer, then the primary playback device 310 may play back the center, front left, front right, front right height, and front left height channels without the need for downmixing (because these channels need not be transmitted from the primary playback device 310 to other devices). The primary playback device 310 may, however, transmit to each of the rear satellite playback devices 410 data corresponding to three channels of audio data: right side surround, right rear surround, and right rear height for a right rear satellite playback device, and left side surround, left rear surround, and left rear height for a left rear satellite playback device. For transmission of these channels, the playback device 310 may downmix each batch of audio channels for transmission to a respective satellite playback device 410. Additionally, the LFE audio data can be transmitted from the primary playback device 310 to the subwoofer, optionally without any downmixing or other compression scheme.
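
The routing in this example can be summarized in a simple table, sketched below; the channel identifiers are paraphrased from the example above rather than names defined by this disclosure.

# Illustrative routing of the twelve 7.1.4 channels for the example system.
CHANNEL_ROUTING = {
    "primary": [                       # played back locally; no transmission
        "center", "front_left", "front_right",
        "front_left_height", "front_right_height",
    ],
    "rear_right_satellite": [          # batch downmixed before sending
        "right_side_surround", "rear_right_surround", "rear_right_height",
    ],
    "rear_left_satellite": [           # batch downmixed before sending
        "left_side_surround", "rear_left_surround", "rear_left_height",
    ],
    "subwoofer": ["lfe"],              # optionally sent without downmixing
}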

[0078] Figure 6 is a block diagram of an example method 600 for distributing audio data to multichannel satellite playback devices. For the method 600 and for the other methods disclosed herein, the method can be implemented by any of the devices described herein, or any other devices now known or later developed. Various examples of the methods disclosed herein include one or more operations, functions, or actions illustrated by blocks. Although the blocks are illustrated in sequential order, these blocks may also be performed in parallel, and/or in a different order than the order disclosed and described herein. Also, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or removed based upon a desired implementation. In addition, for the method 600 and for other processes and methods disclosed herein, the flowcharts show functionality and operation of possible implementations of some examples. In this regard, each block may represent a module, a segment, or a portion of program code, which includes one or more instructions executable by one or more processors for implementing specific logical functions or steps in the process. The program code may be stored on any type of computer readable medium, for example, such as a storage device including a disk or hard drive. The computer readable medium may include non-transitory computer readable media, for example, such as tangible, non-transitory computer-readable media that store data for short periods of time like register memory, processor cache, and Random-Access Memory (RAM). The computer readable medium may also include non-transitory media, such as secondary or persistent long-term storage, like read-only memory (ROM), optical or magnetic disks, and compact disc read-only memory (CD-ROM), for example. The computer readable media may also be any other volatile or non-volatile storage systems. The computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage device. In addition, for the methods and for other processes and methods disclosed herein, each block in Figure 6 may represent circuitry that is wired to perform the specific logical functions in the process.

[0079] With reference to Figure 6, the method 600 begins at block 602, which involves receiving, at a playback device, audio source data. The audio source data can be received over a wired or wireless connection to an audio source, and can be encoded in any suitable format. Examples include home theatre audio formats such as Dolby home theatre formats (e.g., 5.1.2, 5.1.4, 7.1.2, 7.1.4, Atmos, etc.). In some examples, the playback device that receives the source audio data can be a home theatre primary playback device, which can take the form of a soundbar or other suitable device. In some instances, the playback device that receives the audio source data may not itself be involved in playing back home theatre audio content, but instead may coordinate playback via other playback devices in the environment.

[0080] The method 600 continues in block 604, which involves obtaining n channels from the received source audio data. The number of channels will depend on the particular audio format of the audio source data. In decision block 606, the media playback system (e.g., the primary playback device, another playback device within the environment, or another computing device associated with the media playback system) determines whether network conditions are sufficient to transmit n channels of audio data to satellite playback devices. This determination can involve, for example, assessing the file size of the various audio channels, the data transfer rate, the available bandwidth, and/or other parameter(s) of the network (e.g., a wireless local area network, a personal area network (PAN) such as an ad hoc Bluetooth network, etc.) to determine whether transmission of the necessary channels is feasible. As noted above, in some implementations the audio content can have a greater number of channels than the n channels that are transmitted to satellite playback devices for playback. In some examples, the evaluation can be predetermined (e.g., only two data channels per satellite playback device), while in others the evaluation can be based on detected network conditions.
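
The feasibility test in decision block 606 could be approximated as a simple bitrate budget check, as in the sketch below; the per-channel bitrate, the measured bandwidth, and the headroom factor are hypothetical inputs for the example, not values taken from this disclosure.

def channels_fit(num_channels, per_channel_kbps, available_kbps, headroom=0.8):
    """Return True if num_channels can be streamed within the measured network
    budget, reserving some headroom for retransmissions and other traffic."""
    required_kbps = num_channels * per_channel_kbps
    return required_kbps <= headroom * available_kbps

# Example: six channels at 1,500 kbps each against a link currently offering
# 8,000 kbps would not fit, so downmixing would be triggered.
print(channels_fit(6, 1500, 8000))  # False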

[0081] If, in decision block 606, network conditions are sufficient (e.g., the available bandwidth and data transfer speeds are sufficient for the amount of data contained in the audio channels to be transmitted to satellite playback device(s)), then the method 600 proceeds to block 610 with transmitting the audio data to satellite playback device(s) for playback. This can involve, for example, sending a first subset of the n channels to a first satellite playback device for playback, and a second subset of the n channels (which may be partially overlapping or wholly non-overlapping with the first subset) to a second satellite playback device for playback. The playback devices of the home theatre zone (which can include the primary playback device that transmitted the n audio channels) can then play back audio in synchrony.

[0082] If, in decision block 606, the network conditions are not sufficient (e.g., the available bandwidth and/or data transfer speeds are insufficient for the amount of data contained in the audio channels to be transmitted to satellite playback device(s)), then the method 600 proceeds to block 608 to downmix the audio content to a smaller number of channels. For instance, if three channels are configured to be played back via a particular satellite playback device, these three channels can be downmixed to two channels for transmission. In block 610, the method 600 involves transmitting the audio data (which includes the downmixed channels) to the satellite playback device(s) for playback. As noted previously, in some examples the satellite playback devices can upmix the received audio content for playback.
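
A minimal sketch of this decision logic, in Python, is shown below. The per-channel bitrate constant and the helper names (downmix, transmit) are illustrative assumptions rather than details from the method 600 itself.

```python
CHANNEL_BITRATE_KBPS = 1_536  # illustrative per-channel bitrate assumption


def distribute_satellite_audio(channels, link_kbps, downmix, transmit):
    """Send all n channels if the link can carry them (block 610);
    otherwise downmix to fewer channels first (block 608)."""
    required_kbps = len(channels) * CHANNEL_BITRATE_KBPS
    if link_kbps >= required_kbps:   # block 606: conditions sufficient
        transmit(channels)
    else:                            # e.g., downmix three channels to two
        transmit(downmix(channels))
```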

[0083] The process illustrated in Figure 6 can be performed intermittently when a new audio source is detected, or may be performed continuously on incoming audio content, such that as network conditions improve or the audio data size is reduced, the media playback system may cease downmixing the audio channels for transmission. Conversely, the media playback system may reinstitute downmixing if the network conditions again become insufficient.

[0084] In some examples, the particular downmixing scheme applied to the audio channels (e.g., via downmixer 506 (Figure 5) or as applied in block 608 (Figure 6)) can vary depending on one or more input parameters. For example, the primary playback device can receive or otherwise obtain one or more input parameters, and based on the parameters may vary between a first downmixing scheme and a second downmixing scheme. The various downmixing schemes can vary in how aggressively they downmix channels (e.g., with a greater or lesser reduction in the total number of channels). In some examples, the downmixing schemes can vary with respect to a cutoff frequency, below which audio content from all channels is combined into a single channel.

[0085] In various implementations, the input parameter(s) can include one or more of an audio content parameter (e.g., type of audio content, number of incoming channels, etc.), a device location parameter (e.g., device location relative to listening location), a listener location parameter (e.g., a listener location relative to the device or to the environment), a playback responsibilities parameter (e.g., whether a particular satellite playback device is a rear satellite playback device or a front satellite playback device), or an environmental acoustics parameter (e.g., characterizing the acoustic properties of the listening environment, such as data obtained during a spectral calibration procedure).
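
One way to realize this parameter-driven selection is a simple rule table, sketched below in Python. The parameter names, thresholds, and returned scheme values are illustrative assumptions, not values from the disclosure.

```python
def select_downmix_scheme(params):
    """Pick a downmixing scheme from input parameters (illustrative rules).

    Returns (target_channel_count, cutoff_hz): a more aggressive scheme
    reduces more channels and/or raises the cutoff frequency below which
    all channels are folded into a single channel.
    """
    if params.get("is_rear_satellite"):             # playback responsibilities
        return (2, 250.0)                           # modest downmix for rears
    if params.get("listener_distance_m", 0.0) > 4:  # listener location
        return (1, 500.0)                           # aggressive downmix
    return (2, 120.0)                               # default scheme
```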

[0086] Figure 7 is a schematic diagram of a system 700 for distributing audio data to multichannel satellite playback devices. The system 700 can be configured for 7.1.4 home theatre audio playback (i.e., seven lateral or ear-level channels: front left, center, front right, left side surround, left rear, right rear, and right side surround; one LFE channel; and four height channels: front left height, front right height, rear left height, and rear right height). As shown, the system 700 can include a primary playback device 310 (e.g., a soundbar or other suitable playback device), a subwoofer 110, and two multichannel satellite playback devices 410a and 410b. An LFE data channel can be transmitted from the primary playback device 310 to the subwoofer 110, and two data channels can be transmitted to each of the satellite playback devices 410 (e.g., data channels 1 and 2 transmitted to first satellite playback device 410a, and data channels 3 and 4 transmitted to second satellite playback device 410b). In some implementations, these data channels include downmixed audio channels, for example a downmix from three channels of audio content into two data channels.
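
The routing just described can be captured as a simple table; the sketch below uses a Python dictionary whose layout and key names are our own, while the channel assignments follow the description of the system 700.

```python
# Illustrative data-channel routing for the system 700 topology.
ROUTING = {
    "subwoofer_110":  ["LFE"],
    "satellite_410a": ["data_1", "data_2"],  # e.g., a two-channel downmix of
                                             # left side surround, left rear,
                                             # and left rear height content
    "satellite_410b": ["data_3", "data_4"],  # right-side counterpart
}
```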

[0087] To accommodate playback of three channels via the satellite playback devices 410 while only transmitting two data channels, the satellite audio channels can be downmixed (e.g., via the primary playback device 310). One example of such downmixing is shown in Figure 8, which illustrates a distribution of audio channels among data channels for transmitting audio data to a multichannel satellite playback device 410. As shown, the first data channel can include the full frequency content of a first audio channel (e.g., side surround content), while the second data channel includes low- and mid-frequency content of a second audio channel (e.g., rear surround content) together with mid- and high-frequency content of the second audio channel and a third audio channel (e.g., rear height content). In some examples, this downmixing scheme can be adjusted based on relative similarities of the audio content among different audio channels, content type, network conditions, room acoustics, device placement, device orientation, or other relevant parameters.

[0088] In some examples, downmixing n channels of satellite audio data to m channels of satellite audio data according to a first downmixing scheme can include (i) mapping a first channel of the n source channels to a first channel of the m downmixed channels, (ii) mapping a first portion of a second channel of the n source channels to a first portion of a second channel of the m downmixed channels, and (iii) mapping a second portion of a third channel of the n source channels to a second portion of the second channel of the m downmixed channels. As a result, three channels are downmixed into two channels. Additionally or alternatively, downmixing the n channels of satellite audio data to m channels of satellite audio data according to a downmixing scheme can include mapping a second portion of the second channel of the n source channels to the second portion of the second channel of the m downmixed channels such that the second portion of the second channel of the m downmixed channels comprises a combination of (i) the second portion of the second channel of the n source channels and (ii) the second portion of the third channel of the n source channels.
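
A minimal signal-level sketch of this three-into-two mapping is shown below, assuming a single crossover frequency and Butterworth filters. The sample rate, filter order, crossover point, and the 0.5 summing gain are all assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000        # sample rate (assumed)
SPLIT_HZ = 2_000   # crossover between "low/mid" and "mid/high" bands (assumed)


def downmix_3_to_2(ch1, ch2, ch3, fs=FS, split_hz=SPLIT_HZ):
    """Downmix three satellite channels into two data channels.

    Data channel 1 carries the full-band first channel (e.g., side surround).
    Data channel 2 carries the low/mid band of channel 2 (e.g., rear surround)
    plus the summed mid/high bands of channels 2 and 3 (e.g., rear height).
    """
    lo = butter(4, split_hz, btype="lowpass", fs=fs, output="sos")
    hi = butter(4, split_hz, btype="highpass", fs=fs, output="sos")
    data1 = ch1
    data2 = sosfilt(lo, ch2) + 0.5 * (sosfilt(hi, ch2) + sosfilt(hi, ch3))
    return data1, data2
```

At the satellite side, an upmix could reverse the band split to approximate the original three channels, at the cost of the mid/high content of channels 2 and 3 remaining combined.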

[0089] In some examples, the relationship between audio channels and data channels may change based on, for instance, content type, content source, network conditions, etc. In some examples, the arrangement of audio channels within the available data channel bandwidth may be based on manual input. Referring back to Figure 7, after the satellite playback device 410a receives the downmixed audio (e.g., including the two data channels shown in Figure 8), the satellite playback device 410a can upmix the received audio data to obtain the individual components of audio channels 2 and 3 and output each of audio channels 1, 2, and 3 via the appropriate transducer(s).

b. Playback Device Parameter Determination and Adjustment

[0090] Additional challenges can arise when using multichannel satellite playback devices in conjunction with a soundbar or other similar device configured to output multiple channels of audio content. In home theatre arrangements, a soundbar (or other suitable primary playback device) typically handles playback responsibilities for at least the front left, center, and front right channels (and optionally additional channels in some instances, such as left side surround, right side surround, right front height, and left front height). When front left, center, and front right channels are all output by a single playback device such as a soundbar, playback parameters such as the phase and magnitude of the audio output are expected to be inherently seamless across all channels. However, when front left and front right channels are additionally or instead output via discrete front satellite playback devices, there is a risk of mismatch of playback parameters between the devices that can deleteriously affect the user's listening experience. For instance, if the phase response and/or magnitude response of the playback devices are not well matched, there can be constructive and/or destructive interference at different frequencies, resulting in an undesirable unevenness in the combined audio output. Examples of the present technology can address these and other problems by performing a calibration process among devices within the home theatre zone to determine certain playback parameters (e.g., phase response, magnitude response) of individual playback devices within the zone. Based on these individual parameters, the parameters of one or more of the devices can be adjusted to match those of the other device(s). In some instances, a particular playback device can be selected as the reference device for a given parameter, and the other playback devices within the zone can have their playback modified (e.g., by adjusting a phase response, a magnitude response, etc.) to match that of the reference device. As a result, a more consistent output among the various playback devices can be achieved.

[0091] Figure 9 illustrates a home theatre environment including a media playback system 900 arranged about an intended listening location 902 (indicated as a couch). The media playback system 900 includes a primary playback device 310 (e.g., a soundbar or other suitable playback device), a subwoofer 110, and four discrete multichannel playback devices 410a-d, with playback devices 410a and 410b arranged as front left and front right satellite playback devices, respectively, and playback devices 410c and 410d arranged as rear left and rear right satellite playback devices, respectively. In this arrangement, playback responsibilities can be distributed among the various playback devices in a number of ways. In some examples, the primary playback device 310 can output at least a center channel, while the front left satellite playback device 410a outputs at least a front left channel and the front right satellite playback device 410b outputs at least a front right channel. Optionally, the front left and front right satellite playback devices 410a and 410b can also output at least a portion of left side surround, left front height, right side surround, and right front height channels, respectively. Similarly, rear satellite playback devices 410c and 410d can output left rear and right rear channels, respectively. Optionally, the rear satellite playback devices 410c and 410d can also participate in outputting left side and right side surround channels, respectively.

[0092] While the various playback devices of the media playback system 900 are configured to play back home theatre audio content in synchrony, in some instances audio output by the devices may not blend together as well as desired. Typical home theatre audio is mixed as though each channel is output via a dedicated, identical loudspeaker (e.g., separate and identical loudspeakers for center, front left, and front right devices). Because the arrangement of the media playback system 900 may not have this characteristic, it can be useful to calibrate playback characteristics of one or more devices within the media playback system 900 to more closely match their respective outputs (e.g., to one another, and/or collectively to a desired target output). In some instances, this can involve adjusting a characteristic phase response and/or a characteristic magnitude response of playback for one or more of the devices. In some instances, a particular one of the playback devices can be selected as a reference device for one or more characteristics (e.g., magnitude response, or phase response, or both), and the other playback devices can be calibrated to match the characteristics of the reference device. In various examples, the characteristics (e.g., phase response and/or magnitude response) can be detected for one or more devices, or may be obtained from lookup tables that include characteristic data for a given make and model of playback device. Modifications to the characteristic phase response and/or magnitude response for a given playback device can be achieved using suitable signal processing techniques, for example subjecting the audio for a given playback device to appropriate filtering operations (e.g., using a finite impulse response (FIR) filter or other suitable filter).
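
As a concrete (and deliberately simplified) illustration of such an FIR-based correction, the sketch below derives a filter that pulls one device's measured complex frequency response toward a reference response. The regularization constant, tap count, and windowing choice are assumptions; a production design would also handle delay alignment and pre-ringing.

```python
import numpy as np


def matching_fir(h_ref, h_dev, n_taps=512, eps=1e-3):
    """Correction FIR nudging a device response h_dev toward h_ref.

    h_ref and h_dev are complex frequency responses sampled on the same
    rfft frequency grid (e.g., from measurements or a lookup table).
    """
    # Regularized deconvolution: correction ~= h_ref / h_dev.
    correction = h_ref * np.conj(h_dev) / (np.abs(h_dev) ** 2 + eps)
    impulse = np.fft.irfft(correction)
    # Truncate and taper to a practical filter length (a sketch-level choice;
    # assumes the frequency grid is at least n_taps/2 + 1 points long).
    n_taps = min(n_taps, impulse.size)
    return impulse[:n_taps] * np.hanning(n_taps)


# Applying the correction before playback (illustrative):
# corrected_audio = np.convolve(audio, taps, mode="same")
```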

[0093] Figure 10 is a block diagram of a method 1000 for determining and adjusting playback device parameters. The method 1000 begins in block 1002 with performing a calibration process, and in block 1004, based on the calibration process, determining parameter(s) of individual playback device(s). In various examples, the calibration process can involve obtaining real-world data characterizing the output of each playback device. For instance, one or more microphones (e.g., of a control device such as a smartphone, of one of the other playback devices within the environment, of the same playback device, or other microphone(s) not associated with a playback device) can be used to capture sound data while a given playback device outputs suitable audio for calibration (e.g., a chirp, sweep, or other predefined audio output). This captured sound data can then be used to obtain characteristic playback parameters (e.g., phase response and/or magnitude response). This process may be repeated for each playback device (sequentially, concurrently, or some combination thereof) until playback parameters are determined for each playback device.
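
A minimal sketch of extracting those parameters from a captured sweep is shown below: the measured response is estimated as the ratio of the recording's spectrum to the stimulus spectrum. Averaging over repeated captures and microphone compensation, which a real calibration would need, are omitted as simplifying assumptions.

```python
import numpy as np


def estimate_response(stimulus, recorded, eps=1e-9):
    """Estimate magnitude and phase response from one calibration capture."""
    n = max(len(stimulus), len(recorded))
    s = np.fft.rfft(stimulus, n)
    r = np.fft.rfft(recorded, n)
    h = r / (s + eps)                     # H(f) = R(f) / S(f)
    return np.abs(h), np.unwrap(np.angle(h))
```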

[0094] Additionally or alternatively, the parameter(s) of the individual playback devices can be obtained by using pre-existing characteristic data (e.g., lab tests for a given make and model of a playback device can provide characteristic phase response and/or magnitude response). In such instances, the parameters can be determined by accessing a lookup table, querying remote computing devices storing the parameter data, or any other suitable approach. In the configuration shown in Figure 9, the parameters (e.g., phase response and/or magnitude response) for each of the primary playback device 310, the satellite playback devices 410a-d, and/or the subwoofer 110 can be obtained.

[0095] Referring back to Figure 10, at block 1006, the method 1000 involves selecting, based on the determined parameter(s), a particular playback device to be used as a reference device. And in block 1008, the method 1000 includes adjusting the other playback device(s) based on the parameter(s) of the selected reference device. In various examples, a single playback device may serve as a reference device for a plurality of characteristics (e.g., phase response and magnitude response). In some instances, a first playback device may serve as a reference device for a first characteristic (e.g., phase response) and a second playback device may serve as a reference device for a second characteristic (e.g., magnitude response). In each case, one or more of the other playback devices within the media playback system may then be adjusted (e.g., having one or more filters applied) such that its audio output more closely matches that of the reference device.

[0096] For example, in the configuration shown in Figure 9, the primary playback device 310 may be selected as the reference device for a phase response, while the front left satellite playback device 410a may be selected as the reference device for a magnitude response. Based on these determinations, the other playback devices (e.g., satellite playback devices 410a-d and/or the subwoofer 110) can be adjusted such that their phase responses more closely mimic that of the primary playback device 310. Additionally, the other playback devices (e.g., primary playback device 310, satellite playback devices 410b-d, and/or the subwoofer 110) can each be adjusted such that their magnitude responses more closely mimic that of the front left satellite playback device 410a.

[0097] Once suitable adjustments have been made, the playback devices can play back audio content in synchrony. By virtue of these characteristic adjustments, the combined audio output can be more evenly matched, reducing undesirable interference at the intended listening location.

[0098] In various examples, the reference device may be selected based on characteristics of the individual playback devices, such as audio output capabilities (e.g., selecting a more highly capable playback device as a reference device, based on dynamic range, low-frequency extension, etc.), processing capabilities (e.g., selecting a device having greater computational resources such as a faster processor, a greater amount of memory, etc.), power parameters (e.g., portable vs. plugged-in devices), or other suitable characteristics of the individual playback devices. The reference device(s) can also be selected at least in part based on their assigned playback responsibilities (e.g., selecting a primary playback device (or a device assigned playback responsibilities for the center channel) as a reference device for one or more parameters), based on listener location (e.g., the device nearest to the listener can be the reference device), or any other suitable characteristic of the environment, listener, or playback devices.
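
One hypothetical way to combine these selection criteria is a weighted score over candidate devices, as sketched below. The field names and weights are illustrative assumptions only.

```python
def pick_reference(devices):
    """Choose a reference device by scoring the characteristics above."""
    def score(d):
        return (2.0 * d.get("dynamic_range_db", 0.0)   # output capability
                + 1.0 * d.get("cpu_score", 0.0)        # processing capability
                + (5.0 if d.get("plugged_in") else 0.0)    # power parameters
                + (10.0 if d.get("channel") == "center" else 0.0))  # role
    return max(devices, key=score)
```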

[0099] Figure 11 illustrates another example method 1100 for determining and adjusting playback device parameters. Blocks 1102 and 1104 can be similar to blocks 1002 and 1004 described above, and involve performing a calibration procedure and, based on the calibration procedure, determining parameter(s) (e.g., phase response, magnitude response) of individual playback devices within the home theatre zone. In block 1106, the method 1100 involves adjusting a first parameter of second and third playback devices based on a determined parameter of a first playback device. For example, in the configuration shown in Figure 9, a phase response of the front satellite playback devices 410a and 410b can be adjusted based on a determined phase response of the primary playback device 310.

[0100] Referring back to Figure 11, at block 1108, a second parameter of the first playback device is adjusted based on a determined second parameter of the second and third playback devices. For instance, in the arrangement shown in Figure 9, the magnitude response of the primary playback device 310 can be adjusted based on the determined magnitude responses of the front satellite playback devices 410a and 410b.

[0101] With reference to Figure 11, optionally in block 1110, the method 1100 involves adjusting a first parameter of fourth and fifth playback devices based on the determined first parameter of the first playback device, and/or adjusting a second parameter of fourth and fifth playback devices based on the determined second parameter of the second and third playback devices. For example, with reference to Figure 9, the rear satellite playback devices 410c and 410d can each have their phase response adjusted based on the determined phase response of the primary playback device 310, and can also each have their magnitude response adjusted based on the determined magnitude response of the front satellite playback devices 410a and 410b.

[0102] In some examples, these calibration and adjustment procedures can be performed in whole or in part based on certain trigger conditions, such as adding a new device to the bonded zone (e.g., the home theatre zone), removing a device from the bonded zone, receiving a request to assign different playback responsibilities within the bonded zone (e.g., moving a rear satellite playback device to a front satellite playback device position), determining that one or more playback devices have moved positions within the room (e.g., using acoustic localization, on-board motion sensors (e.g., accelerometer, gyroscope, etc.), or other localization techniques), or other such trigger conditions. In these and other instances, a new reference device for one or more playback parameters can be selected, which may be the same or a different device as was previously selected as a reference device.

[0103] In various examples, additional calibration procedures can be performed before or after the above-described calibration and parameter adjustments, such as spectral calibration to account for room-specific factors for the media playback system. Examples of suitable room-specific calibration processes can be found in commonly owned U.S. Patent No. 9,706,323, titled "Playback Device Calibration," and U.S. Patent No. 9,763,018, titled "Calibration of Audio Playback Devices," each of which is hereby incorporated by reference in its entirety.

c. Spatial Width Adjustments for Home Theatre Audio Playback

[0104] In some instances, using multichannel satellite playback devices can achieve a greater perceived width of audio playback, which can increase the immersiveness of the listening experience. Because multichannel satellite playback devices are capable of outputting audio along a plurality of sound axes, the perceived width of audio playback can be modified by selectively distributing playback between the multichannel satellite playback devices and a primary playback device (e.g., a soundbar). Additionally or alternatively, the perceived width of audio playback can be modified by selectively distributing playback between the various sound axes of the multichannel satellite playback devices. In some examples, the width of audio playback can be directly controlled by a user, or can be dynamically adjusted in response to certain detected parameters. Controlling the width of audio playback can help compensate for suboptimal placement of satellite playback devices (e.g., rear satellite playback devices too close together or too close to the listener, front satellite playback devices too close to the primary playback device or too far apart, etc.).

[0105] Figures 12A and 12B illustrate a home theatre environment including a media playback system 1200 arranged about an intended listening location 1202 (indicated as a couch). The media playback system 1200 includes a primary playback device 310 (e.g., a soundbar or other suitable playback device), a left rear satellite playback device 410a, and a right rear satellite playback device 410b. In this arrangement, playback responsibilities can be distributed among the various playback devices in a number of ways. In some examples, the primary playback device 310 can output at least center, front left, and front right channels, while the left rear satellite playback device 410a can output left side surround and left rear channels, and the right rear satellite playback device 410b can output right side surround and right rear channels. Optionally, the primary playback device 310 can also participate in outputting left and right side surround channels (e.g., via side-firing transducers of the primary playback device 310).

[0106] In various examples, the rear satellite playback devices 410 can be multichannel playback devices configured to output audio along a number of sound axes (e.g., a forward axis that propagates in a direction generally perpendicular to a front face of the playback device 410, one or more side-firing axes that propagate at a lateral angle with respect to the forward axis, and optionally one or more up-firing axes that propagate at a vertical angle with respect to the forward axis).

[0107] By controlling the output of the various channels played back by the multichannel satellite playback devices 410, a perceived "width" of the combined audio output for a listener at the intended listening location 1202 can be modified. As noted above, controlling the perceived width of the combined audio output can help compensate for suboptimal placement of satellite playback devices. For instance, rear satellite playback devices that are placed too close together and/or too close to the intended listening location 1202 may tend to reduce the perceived width or spaciousness of the combined audio output. Conversely, rear satellite playback devices that are placed too far apart and/or too far from the intended listening location 1202 may tend to increase the perceived width or spaciousness, which in some instances may be undesirable for the listener. In various examples, the perceived width can be modified by adjusting relative magnitudes of output of different channels of audio along different sound axes of the satellite playback devices.

[0108] For example, in the arrangement shown in Figure 12A, the primary playback device 310 outputs center, front left, and front right audio channels which can generally be directed toward the intended listening location 1202. The left rear satellite playback device 410a outputs a left side surround channel along a first sound axis, and outputs a left rear channel along a second sound axis that can be laterally angled with respect to the first sound axis. In the illustrated configuration, the first sound axis is directed in a more forward direction, while the second sound axis is directed more laterally. The right rear satellite playback device 410b can be similarly configured to output a right side surround channel along a third sound axis and a right rear surround channel along a fourth sound axis.

[0109] In some examples, the perceived width of the combined audio output can be modulated by varying the relative magnitudes of the different audio channels being played back by the rear satellite playback devices 410a and 410b. For instance, as shown in Figure 12B, the outputs of the rear satellite playback devices 410a and 410b can be modified (compared to the configuration shown in Figure 12A) such that the left rear and right rear channels are played back at greater magnitudes, and the left side surround and right side surround channels are played back at lesser magnitudes (indicated by the relative sizes of the arrows). This adjustment can reduce a perceived width of the combined audio output compared to the arrangement shown in Figure 12A, and may be appropriate when satellite playback devices 410a and 410b are spaced too far apart and/or too far from the intended listening location 1202. Conversely, to increase the perceived width, the left side surround and right side surround channels can be played back at greater magnitudes, and the left rear and right rear channels can be played back at lesser magnitudes. This may be appropriate when, for instance, the rear satellite playback devices 410a and 410b are spaced too close together and/or too close to the intended listening location 1202. In other words, by emphasizing side surround channels and de-emphasizing rear channels, the perceived width is increased, and conversely de-emphasizing side surround channels and emphasizing rear channels can reduce the perceived width.
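
This complementary emphasis can be expressed as an equal-power gain pair driven by a single width parameter, as in the sketch below. The mapping itself (a sine/cosine crossfade) is an illustrative choice, not one specified in the disclosure.

```python
import math


def surround_gains(width):
    """Map width in [0, 1] to gains for a rear satellite's channels.

    width -> 1 emphasizes the side surround channel (wider image);
    width -> 0 emphasizes the rear channel (narrower image).
    sin^2 + cos^2 = 1, so total power stays constant across the sweep.
    """
    side_gain = math.sin(width * math.pi / 2)
    rear_gain = math.cos(width * math.pi / 2)
    return side_gain, rear_gain
```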

[0110] Although Figures 12A and 12B illustrate varying relative magnitudes of the channels played back by the rear satellite playback devices 410a and 410b, in some examples a portion of the playback responsibilities can be moved from the rear satellite playback devices 410a and 410b to other playback devices. For instance, at least a portion of the side surround channels can be played back via the primary playback device 310 (or via other satellite playback devices) rather than via the rear satellite playback devices 410a and 410b, which can also affect the perceived width of the combined audio output.

[0111] In various examples, the media playback system 1200 can transition from the configuration shown in Figure 12A to the configuration in Figure 12B (or vice versa) based on one or more trigger conditions. Among examples, the trigger conditions can include a user input (e.g., user input via a controller device, voice input, etc.), detection of an environment acoustics parameter (e.g., based on a calibration process as described previously, detecting objects in the environment), a device position parameter (e.g., localization of a device, indication of device movement, etc.), a listener location parameter, and/or any other suitable conditions.

[0112] Figure 13 illustrates an example interface 1300 for a control device 130 that enables a user to manually adjust a width parameter of audio playback. As shown, the interface 1300 can include a first slider 1302 for adjusting a left width (i.e., increasing or decreasing relative to an initial setting) and a second slider 1304 for adjusting a right width. Although separate sliders are shown for left and right widths, in some instances a single slider (or other input type) can simultaneously control the width of both left and right sides. Additionally or alternatively, separate sliders can be provided for modifying a rear width and a front width. In the case of adjusting a rear width (as described above with respect to Figures 12A and 12B), pushing the sliders 1302 and 1304 towards increased width can cause the side surround channels to be played back at greater magnitudes and the rear channels to be played back at lesser magnitudes.

[0113] While various examples described herein relate to adjusting a width of audio playback, a similar approach can also be used to vary a perceived depth of audio playback (e.g., spaciousness along a forward-backward axis, rather than a left-right axis).

[0114] Additionally or alternatively to adjusting a width of audio output by modifying playback responsibilities of rear satellite playback devices, the perceived width of audio output can also be modified by adjusting playback responsibilities of front satellite playback devices. Figures 14A and 14B illustrate a home theatre environment including a media playback system 1400 arranged about an intended listening location 1402 (indicated as a couch). The media playback system 1400 includes a primary playback device 310 (e.g., a soundbar or other suitable playback device), a left front satellite playback device 410a, and a right front satellite playback device 410b. In this arrangement, playback responsibilities can be distributed among the various playback devices in a number of ways. In some examples, the primary playback device 310 can output at least a center channel, while the left front satellite playback device 410a can output a front left channel and the right front satellite playback device 410b can output a front right channel. Optionally, the primary playback device 310 can also participate in outputting front left and front right channels.

[0115] As noted above, varying playback responsibilities of the individual playback devices within the media playback system can modify a perceived width of the combined audio output, and optionally can compensate for suboptimal placement of the satellite playback devices relative to an intended listening location. With respect to front satellite playback devices 410a and 410b, the width can be increased by playing back a greater proportion of the front left and front right audio channels via the front satellite playback devices 410a and 410b, and conversely the width can be decreased by playing back a lesser proportion of the front left and front right audio channels via the front satellite playback devices 410a and 410b (e.g., and playing back an increased proportion of the front left and front right audio channels via the primary playback device 310).
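
In code, this front-width control reduces to choosing how a front channel's energy is shared between a satellite and the primary playback device. The linear split below is a minimal sketch; the parameter name and the linear mapping are assumptions.

```python
def front_split(width):
    """Share a front left/right channel between satellite and soundbar.

    width=1.0 corresponds to the Figure 14A arrangement (satellites only);
    smaller values move a growing share of the channel into the primary
    playback device, narrowing the image as in Figure 14B.
    """
    satellite_share = max(0.0, min(1.0, width))
    return satellite_share, 1.0 - satellite_share  # (satellite, primary)
```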

[0116] For instance, as shown in Figure 14A, in a first arrangement the media playback system 1400 can play back a front left channel via only the front left satellite playback device 410a, and can also play back a front right channel via only the front right satellite playback device 410b. In this example, the primary playback device 310 outputs a center audio channel and does not output front left or front right audio channels. In contrast, in the arrangement shown in Figure 14B, a width of the combined audio output is reduced (relative to the arrangement of Figure 14A) by routing at least some of the front left and front right audio channels through the primary playback device 310, and/or playing back a lesser proportion of the front left and front right channels via the front left and front right satellite playback devices 410a and 410b, respectively.

[0117] In various examples, the media playback system 1400 can transition between the arrangement shown in Figure 14A and that shown in Figure 14B (and vice versa) based on one or more trigger conditions. As noted above, the trigger indications or conditions can include a user input (e.g., user input via a controller device as shown in Figure 13, via voice input, etc.), detection of an environment acoustics parameter, a device position parameter, a listener location parameter, and/or any other suitable conditions.

[0118] Figure 15 is a block diagram of a method 1500 for modifying a width parameter of audio playback. The method 1500 begins in block 1502 with receiving a trigger indication. As noted previously, the trigger indication can take any suitable form, including direct user input (manual input, voice controlled, etc.), and/or the system can automatically detect certain conditions (e.g., acoustic conditions of the environment, location or movement of device(s) or listener(s), etc.).

[0119] In block 1504, based on the trigger indication, playback parameter(s) to be adjusted to modify a perceived width of audio playback are determined. And at block 1506, the method 1500 involves adjusting parameter(s) of one or more playback devices to modify the perceived width of audio playback. The playback parameter(s) can include relative magnitudes of one or more channels of audio playback, and/or varying playback responsibilities for particular devices (e.g., varying which device(s) are responsible for playing back particular audio channels). As noted above, this can involve increasing or decreasing a relative magnitude of playback of certain channels by certain playback devices (e.g., increasing a side surround channel magnitude and decreasing a rear surround channel magnitude for a rear satellite playback device). Additionally or alternatively, this can involve modifying which playback devices participate in playing back particular channels (e.g., routing front left and front right audio channels through the primary playback device in response to the trigger indication).

d. Modifying Playback Parameters to Compensate for Satellite Playback Device Placement

[0120] In some cases, a user's placement of multichannel satellite playback devices around her environment may differ from the intended placement, either in terms of device location or device orientation. As a result, the audio output by the multichannel satellite playback devices can have unintended properties, such as a side surround channel being directed too far forward or too far rearward of an intended listening location. Examples of the present technology can address these and other problems by modifying playback parameters of multichannel satellite playback devices to compensate for their placement within the environment. As a result, even when a user places multichannel audio satellite playback devices in undesirable locations or orientations, the system can adapt playback to provide an improved listening experience.

[0121] Figures 16A-16C illustrate a home theatre environment in which a media playback system 1600 is arranged about an intended listening location 1602 (shown as a couch). The media playback system 1600 can include a primary playback device 310 (e.g., a soundbar), a front left satellite playback device 410a, and a front right satellite playback device 410b. As illustrated, the satellite playback devices 410a and 410b can each be configured to output audio along a plurality of sound axes, which can be laterally angled with respect to one another. These can include, for example, a forward-firing axis that extends substantially perpendicular to a front face of the playback device 410, and left and right side-firing axes that are angled laterally on opposing sides of the forward-firing axis. Due to the arrangement of individual transducers within the satellite playback device 410, audio output can be directed along one or more sound axes to achieve a desired acoustic effect. In some instances, a single audio channel can be mapped to a particular sound axis, while in other instances a single audio channel can be output via two or more sound axes, and moreover two or more audio channels can be output via the same sound axis.

[0122] For example, in the arrangement shown in Figure 16A, the front left satellite playback device 410a outputs a front left channel along its forward-firing axis, and outputs a left side surround channel via its right side-firing axis, while the front right satellite playback device 410b outputs a front right channel along its forward-firing axis, and outputs a right side surround channel via its left side-firing axis. In the illustrated example, this configuration results in the front right and front left audio channels propagating essentially directly toward the listening location 1602, while the left side surround and right side surround channels are directed to locations laterally spaced away from the listening location 1602. While this playback configuration may achieve the desired results in terms of spaciousness and/or immersiveness, the acoustic performance can depend significantly on the particular location and/or orientation of the satellite playback devices 410. For instance, if these devices 410 are oriented differently (e.g., rotationally adjusted either forward or backward relative to the arrangement shown in Figure 16A), the audio output channels may propagate along undesirable directions.

[0123] To compensate for the variations in placement and orientation by users in the environment, the media playback system 1600 can obtain an indication of device orientation(s) for the satellite playback devices 410a and/or 410b, and can modify playback responsibilities for these devices accordingly. In various examples, obtaining an indication of device orientations can involve sensing an orientation of the playback devices 410 via on-board sensors (e.g., accelerometers, gyroscopes, magnetometers, etc.), by analyzing acoustic output of the devices 410, using other sensors not associated with the playback devices 410, or any other suitable technique. In various examples, the orientation of the playback device can be determined by the playback device itself, or can be determined via other devices and the indication can be transmitted to the playback device or other device of the media playback system. The orientation can include, for example, an angular orientation of the device (e.g., rotation about a vertical axis extending through the playback device 410) relative to the environment, relative to a listener, and/or relative to other playback devices. In some examples, the orientation can also include position (e.g., absolute position, distance between the playback device 410 and the listening location or other devices within the environment, a height of the playback device 410 relative to the environment, etc.).

[0124] In some implementations, based on obtaining orientation information for one or more playback devices 410, the playback responsibilities for those device(s) can be modified. For instance, in the arrangement shown in Figure 16B, the satellite playback devices 410a and 410b are rotated rearward (relative to the orientation shown in Figure 16A) such that the forward-firing axes of these devices are no longer aimed toward the listening location 1602. To compensate for this position, playback responsibilities can be modified such that, as shown in Figure 16B, the front left channel is output along a left side-firing axis of the left front satellite playback device 410a, and the front right channel is output along a right side-firing axis of the right front satellite playback device 410b. Although the front right and front left channels are output via side-firing axes, because the angular orientations of the playback devices 410 have shifted relative to that shown in Figure 16A, the front right and front left channels are nonetheless output in a more advantageous direction (e.g., generally aimed directly toward the listening location 1602). As also shown in Figure 16B, the left side surround channel can be output via the forward-firing axis of the front left satellite playback device 410a, and the right side surround channel can be output via the forward-firing axis of the front right satellite playback device 410b.

[0125] Figure 16C illustrates another arrangement of the media playback system 1600. In this configuration, the front satellite playback devices 410 are rotated forward (relative to the orientation shown in Figure 16A), such that the forward-firing axes of these devices are no longer aimed toward the listening location 1602. To compensate for this position, playback responsibilities can be modified such that, as shown in Figure 16C, the front left channel is output along a right side-firing axis of the left front satellite playback device 410a, and the front right channel is output along a left side-firing axis of the right front satellite playback device 410b. Although the front right and front left channels are output via side-firing axes, because the angular orientations of the playback devices 410 have shifted relative to that shown in Figure 16A, the front right and front left channels are nonetheless output in a more advantageous direction (e.g., generally aimed directly toward the listening location 1602). Accordingly, because the satellite playback devices 410 can output audio along a plurality of sound axes, playback responsibilities can be dynamically modified to direct particular audio channels along intended directions relative to a listener, even when devices are placed in suboptimal or unexpected configurations, positions, or orientations.

[0126] Although Figures 16A-16C illustrate each audio channel being output only along one sound axis or another, in various examples similar adjustments can be made in which relative proportions of the audio channel output along various axes are modified. For instance, in a first configuration, a front left channel can be output primarily along a forward-firing axis and secondarily along a side-firing axis, and in a second configuration the front left channel can be output primarily along the side-firing axis and secondarily along the forward-firing axis. That is, the magnitude of output of the front left channel along the forward-firing axis can be reduced, and the magnitude of output of the front left channel via the side-firing axis can be increased. This approach can be extended to any number of audio channels, sound axes, and/or satellite playback devices.

[0127] While the examples shown in Figures 16A-16C relate to front satellite playback devices, the same approach can be taken to other satellite playback devices, whether they are positioned as side satellite playback devices, rear satellite playback devices, or otherwise. Moreover, while the illustrated examples show different rotational orientations at constant positions, in various implementations the playback devices can have different locations and/or rotational orientations with respect to the listening environment, and the playback responsibilities for the particular devices (and the particular drivers within those devices) can be varied to achieve the desired psychoacoustic effects given the particular placement and/or orientation of the playback device(s).

[0128] Figure 17 is a block diagram of a method 1700 for modifying playback parameters to compensate for satellite playback device placement. The method 1700 begins at block 1702 with receiving multichannel audio content including a first audio channel. The multichannel audio content can be received at, for example, a discrete satellite playback device in a home theatre zone or other suitable playback device.

[0129] At block 1704, the method 1700 involves playing back a first proportion of the first audio channel via a forward-firing axis of the playback device. The method 1700 continues in block 1706 with obtaining an indication of playback device orientation. As noted previously, this can involve sensing an orientation of the playback device via on-board sensors, by analyzing acoustic output of the device, using other sensors not associated with the playback device, or any other suitable technique. In various examples, the orientation of the playback device can be determined by the playback device itself, or can be determined via other devices and the indication can be transmitted to the playback device or other device of the media playback system.

[0130] In some examples, the orientation indication can reflect the degree to which the forward-firing axis is oriented with (e.g., aligned with, aimed toward, etc.) the intended listening location. For instance, if the playback device is rotated such that a side-firing axis is oriented nearer to the intended listening location than the forward-firing axis, then playback responsibilities can be modified to compensate.

[0131] At block 1708, audio playback via the playback device is modified such that at least a second proportion of the first audio channel is played back via a side-firing axis of the playback device rather than the forward-firing axis. For example, in the case of a front right satellite playback device, a front right audio channel may first be output along a forward-firing axis, but then, based on the indication of playback device orientation, the front right audio channel may instead be output at least in part along a side-firing axis. This can compensate for a suboptimal orientation of the playback device relative to the intended listening location.
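
A minimal sketch of this orientation-driven reapportioning is shown below: the channel is split between the forward-firing and side-firing axes according to which axis currently points closer to the listener. The 60-degree side-axis offset and the linear blend are assumptions for illustration, and angle wraparound is ignored for brevity.

```python
def axis_proportions(device_yaw_deg, listener_bearing_deg, side_axis_deg=60.0):
    """Split one channel between forward-firing and side-firing axes.

    Returns (forward_share, side_share); the better-aligned axis (smaller
    angular error to the listener) receives the larger proportion.
    """
    fwd_err = abs(listener_bearing_deg - device_yaw_deg)
    side_err = abs(listener_bearing_deg - (device_yaw_deg + side_axis_deg))
    total = fwd_err + side_err
    if total == 0.0:
        return 1.0, 0.0
    forward_share = side_err / total
    return forward_share, 1.0 - forward_share
```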

[0132] In some examples, when the playback device plays back the first proportion of the first audio channel via the forward-firing axis, the playback device plays back none of the first audio channel via the side-firing axis. Additionally or alternatively, when the playback device plays back at least the second proportion of the first audio channel via the side-firing axis, the playback device plays back none of the first audio channel via the forward-firing axis.

[0133] In some implementations, while the playback device plays back at least the first proportion of the first audio channel via the forward-firing axis, the playback device can also play back a third proportion of the first audio channel via the side-firing axis (e.g., with the first audio channel being played back primarily but not exclusively by the forward-firing axis). Then, while the playback device plays back at least the second proportion of the first audio channel via the side-firing axis (based on the indication of playback device orientation), the playback device also plays back a fourth proportion of the first audio channel via the forward-firing axis, wherein the first proportion is greater than the fourth proportion, and the second proportion is greater than the third proportion. In other words, following the indication of playback device orientation, the first audio channel can be output to a smaller degree along the forward-firing axis, and to a greater degree along the side-firing axis.

IV. Example Methods for Improving Directivity in Audio Output

[0005] As noted previously, in some instances audio playback devices can be configured to play back multichannel audio content that includes vertical content (e.g., audio content configured to be played back via ceiling-mounted or up-firing transducers so that a listener localizes the sound as originating from overhead). Examples of vertical content include height channels such as front right height, front left height, rear right height, rear left height, etc. While such vertical content can be played back at least in part via up-firing transducers that are configured to direct audio output in an upward direction, there may nonetheless be forward "leakage" in which a portion of this vertical content propagates horizontally (e.g., in the forward direction). This forward leakage reduces the directivity of the vertical content output via the playback device, such that the vertical content may no longer be perceived as originating from a position overhead, thereby reducing immersiveness of the audio and diminishing the listening experience.

[0006] Examples of the disclosed technology may address these and other shortcomings by outputting a “null signal” that is configured to at least partially cancel out the undesirable leakage of vertical content along the forward (or other lateral) direction. For instance, as described in more detail below, by outputting vertical channel content via an up-firing transducer (or array) concurrently with outputting a null signal via a forward-firing transducer (or array) that destructively interferes with the vertical content along the forward sound axis, the amount of vertical content that reaches a listener along the forward sound axis can be reduced.

[0007] Figure 18 schematically illustrates exemplary playback of multichannel vertical audio content via the playback device 410 towards a user 1800. In various examples, the playback device 410 can be used as a satellite playback device for a home theatre arrangement, for instance being assigned playback responsibilities as a rear right surround, rear left surround, front right surround, front left surround, or any other suitable configuration. In many such instances, an identical playback device can be placed in a mirrored configuration (e.g., with playback device 410 as a rear left surround and a second identical playback device serving as a rear right surround). In such cases, the principles discussed here may be similarly applied, mutatis mutandis, to the corresponding second playback device in the mirrored position.

[0008] As shown in Figure 18, the playback device 410 may output forward-propagating audio 1802, which is oriented along the forward sound axis 460 (Figure 4D). This audio can be exclusively or primarily output by the transducer 414a, and in operation the playback device 410 can be positioned about a room such that this forward-propagating audio 1802 is directed substantially directly toward the user 1800 (or an intended listening position within a room or environment). The up-firing transducer 414d can output upward-propagating audio 1804a, which is directed towards an acoustically reflective surface (e.g., a ceiling) and reflected towards the user 1800 as audio 1804b. Because of the reflected path of audio 1804b, the user 1800 may perceive the audio 1804b as originating from overhead, which is desirable for vertical audio content. The upward-propagating audio 1804a can be directed primarily along the vertical sound axis 470 (Figure 4D), which can be vertically angled with respect to the forward sound axis (e.g., by between about 180 degrees and about 90 degrees, between about 60 degrees and about 80 degrees, or about 70 degrees).

[0009] With continued reference to Figure 18, the playback device 410 can also output left side-propagating audio 1806 which is directed along the left side sound axis 480 (Figure 4D). As noted above, this left side sound axis can be horizontally angled with respect to the forward sound axis (aligned with the direction of forward-propagating audio 1802), for example by about 90 degrees. In the illustrated example, this side-propagating audio 1806 can be output via an array including two transducers: side-firing transducer 414e and side-firing transducer 414f (not shown in Figure 18; best seen in Figure 4C). In the illustrated example, the first side-firing transducer 414e can be a tweeter while the second side-firing transducer 414f can be a woofer, though other configurations are possible. In various instances, any particular transducer can be substituted by an array of suitable transducers, and conversely an array of transducers can be substituted by a single appropriately configured transducer. A similar arrangement of side-firing transducers 414b and 414c can cooperate to output right side-propagating audio 1808 which is directed along the right side sound axis 490 (Figure 4D). This right sound axis can be horizontally angled with respect to the forward sound axis (aligned with the direction of forward-propagating audio 1802), for example by about 90 degrees, and by about 180 degrees from the left sound axis.

[0010] In operation, multichannel audio content played back via the playback device 410 can be output via one or more of the illustrated directions depending on the particular content and configuration of the playback device. In some examples, the playback device 410 can be configured as a rear satellite playback device (e.g., a rear left satellite device), in which left rear surround audio content is played back via the first transducer 414a and is directed as forward-propagating audio 1802. Meanwhile, vertical audio content (e.g., a left height channel) is played back via the up-firing transducer 414d and is directed as upward-propagating audio 1804a to be reflected towards the user 1800 as reflected audio 1804b. In some implementations, vertical audio content (e.g., a left height channel) is played back via an array that includes the up-firing transducer 414d (output as upward-propagating audio 1804a) in conjunction with side-firing transducers 414e, 414f, and 414b such that at least a portion of the vertical content is also output via these side-firing transducers. The addition of these side-firing transducers can enhance the total output of vertical content (e.g., by utilizing additional transducers, and particularly those with more bass capability such as transducers 414f and 414b). However, whether using only a single up-firing transducer 414d for the output of vertical content, or when using an array of transducers (including the up-firing transducer 414d) for the output of vertical content, a portion of the vertical content may "leak" along the forward sound axis 460 (Figure 4D; aligned with the direction of forward-propagating audio 1802).

[0011] To reduce this forward leakage of vertical content, the audio content can be modified to introduce a null signal to be output via at least the forward-firing transducer 414a as forward-propagating audio 1802. This null signal can be configured to offset, cancel, or otherwise reduce the perception of vertical content propagating along the forward sound axis. For instance, the null signal output via the forward-firing transducer 414a can be configured to destructively interfere with vertical content played back via the other transducers 414, at least along the forward sound axis. In this configuration, the forward-firing transducer 414a can output the null signal in combination with its other playback responsibilities (e.g., playback of left rear surround channel content).

[0012] In some examples, the null signal can be generated by phase-shifting the vertical content signal and synchronizing the output such that the null signal destructively interferes with the vertical content output along the forward sound axis. In various examples, the null signal can be output via one or more horizontally oriented transducers, which can include forward-firing, side-firing, or other suitable transducers or combinations thereof.
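
A minimal sketch of generating such a null signal is shown below: the vertical (height) channel is band-limited to the range where forward leakage is most audible and its polarity is inverted so that, when output along the forward axis, it destructively interferes with the leaked content. The band edges, filter order, and gain are illustrative assumptions; a practical implementation would also delay-align the null signal with the acoustic leakage path.

```python
import numpy as np
from scipy.signal import butter, sosfilt


def null_signal(vertical, fs=48_000, band=(500.0, 2500.0), gain=0.8):
    """Band-limit the height channel, then invert its polarity.

    The result is summed into the forward-firing transducer's feed
    alongside its ordinary channel content.
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    return -gain * sosfilt(sos, vertical)
```

Keeping gain below 1.0 retains some forward leakage, which, as described below, can be used to steer the perceived origination point between the ceiling reflection and the device itself.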

[0013] As a result of canceling out at least a portion of the vertical content that propagates along the forward axis, the user 1800 is more likely to localize the vertical content as originating from the point on the ceiling from which the up-firing output has reflected (e.g., along the path of reflected audio 1804b). In some instances, the user’s localization can be based on perceiving vertical content both from the ceiling reflections and from forward leakage (e.g., propagating directly horizontally as forward-propagating audio 1802). While use of the null signal can reduce the magnitude of forward leakage, when some forward leakage remains the user 1800 may localize vertical content as originating from a point somewhere between (1) the vertical reflection point on the ceiling, and (2) the position of the up-firing transducer 414d (or other transducers of the playback device 410). In some implementations, it may be desirable to retain at least some forward leakage, such that by controlling the configuration and magnitude of the null signal, the perceived origination point can be selected to achieve the desired psychoacoustic effects (e.g., localizing vertical content at desired positions relative to an intended listening location). This may be the case when, for instance, the listening environment has especially high ceilings, unusual reflective properties, or any other conditions that lead to the acoustic reflection point being misaligned with the desired perceived origination point for the vertical content.

[0014] Figures 19A and 19B are graphs illustrating audio output contributions from various transducers in accordance with examples of the disclosed technology. In particular, Figure 19A illustrates audio output via the various individual transducers of the playback device 410 during playback of left vertical channel content. In Figure 19A, the various transducers participating in playback include an up-firing tweeter (e.g., up-firing transducer 414d shown in Figures 4A-4D and Figure 18), a left side-firing tweeter (e.g., side-firing transducer 414e), a left side-firing woofer (e.g., transducer 414f), and a right side-firing woofer (e.g., transducer 414b). In Figure 19B, a similar plot is shown, now with the addition of a null signal output via a forward-firing tweeter (e.g., forward-firing transducer 414a). In operation, as noted previously, this null signal can serve to at least partially cancel (e.g., destructively interfere with) vertical content propagating along a forward sound axis from the playback device.

[0015] Because forward leakage can be more pronounced and/or more perceptible in certain frequency ranges, the null signal may be restricted to a particular frequency range. In the illustrated example, the null signal extends from approximately 500 Hz to approximately 2.5 kHz, though other frequency bands are possible. In general, the null signal can be restricted to a particular frequency range, for instance having a predetermined lower threshold frequency, upper threshold frequency, and bandwidth. In various examples, the null signal can have a bandwidth that is less than about 5.0 kHz, 4.5 kHz, 4.0 kHz, 3.5 kHz, 3.0 kHz, 2.5 kHz, 2.0 kHz, 1.5 kHz, 1.0 kHz, or 0.5 kHz. In some examples, the null signal can have a bandwidth that is greater than about 0.5 kHz, 1.0 kHz, 1.5 kHz, 2.0 kHz, 2.5 kHz, 3.0 kHz, 3.5 kHz, 4.0 kHz, 4.5 kHz, or 5.0 kHz. The lower frequency threshold may vary in different implementations, for example being equal to or greater than about 200 Hz, 300 Hz, 400 Hz, 500 Hz, 600 Hz, 700 Hz, 800 Hz, 900 Hz, or 1 kHz. Similarly, the upper frequency threshold may vary in different implementations, for example being equal to or lesser than about 1.0 kHz, 1.5 kHz, 2.0 kHz, 2.5 kHz, 3.0 kHz, 3.5 kHz, or 4.0 kHz, for instance between about 500 Hz and about 2.5 kHz, or any suitable frequency range for a given application and configuration of playback devices.
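As an informal illustration of band-limiting the null signal, the sketch below uses SciPy's standard Butterworth filter design. The band edges mirror the illustrated 500 Hz–2.5 kHz example, while the filter order and the 48 kHz sample rate are assumptions, not values from the disclosure.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000  # sample rate (Hz), assumed for illustration

def band_limit_null(null_signal: np.ndarray,
                    f_lo: float = 500.0,
                    f_hi: float = 2500.0,
                    order: int = 4) -> np.ndarray:
    """Restrict the null signal to [f_lo, f_hi] with a bandpass filter."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=FS, output="sos")
    return sosfilt(sos, null_signal)
```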

[0016] Figures 20A and 20B are graphs illustrating directivity of left height channel audio output as measured along different sound axes. In particular, sound pressure level (SPL) values measured during left height channel audio output are shown, with SPL values normalized to measurements taken along the fully vertical position (the 0-degree vertical position). The vertical sound axis line reflects SPL values measured along the vertical sound axis (e.g., vertical sound axis 470 shown in Figure 4D), and the forward sound axis line reflects SPL values measured along the forward sound axis (e.g., forward sound axis 460 shown in Figure 4D). In Figure 20A, the audio is output according to the configuration of transducers shown in Figure 19A, while in Figure 20B the audio is output according to the configuration shown in Figure 19B (i.e., with the addition of the null signal being output via a forward-firing tweeter).

[0017] As seen in Figure 20A, the SPL of the audio output along the forward axis has a peak (indicated by box 700) between approximately 500 Hz and 1.5 kHz, indicating a significant amount of forward leakage of this left height channel audio content along the forward sound axis. As depicted in Figure 20B, when the null signal is added to the combined output this peak is significantly reduced, and in fact the SPL along the forward sound axis is lower than the SPL along the vertical sound axis, particularly for frequencies above 1 kHz.

[0018] Accordingly, as a result of the null signal, the SPL values of the vertical content that propagates along the forward axis can be significantly lower than the SPL of the vertical content that propagates along the vertical axis when measured at similar distances from the playback device (e.g., measured at 1 foot, 3 feet, 5 feet, etc. from the playback device along their respective sound axes). In some examples, the SPL of the vertical content propagating along the forward axis can be at least 1 dB, 2 dB, 3 dB, 4 dB, 5 dB, 6 dB, 7 dB, 8 dB, 9 dB, 10 dB, 15 dB, or 20 dB less than the SPL of the vertical content that propagates along the up-firing axis measured at similar distances from the playback device and at a particular reference frequency (e.g., 500 Hz, 1 kHz, 1.5 kHz, 2.0 kHz, etc.).
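The axis-to-axis comparison described above can be illustrated with a short sketch that estimates the SPL difference, in decibels, between two captures taken at similar distances along the two axes. The capture names are assumptions, and the microphone-measurement step itself is outside this sketch.

```python
import numpy as np

def spl_difference_db(forward_capture: np.ndarray,
                      vertical_capture: np.ndarray) -> float:
    """Estimate SPL difference in dB between the two axis measurements.

    A positive result means the vertical-axis output is louder than the
    forward-axis output at the measurement positions.
    """
    rms_fwd = np.sqrt(np.mean(forward_capture ** 2))
    rms_vert = np.sqrt(np.mean(vertical_capture ** 2))
    return 20.0 * np.log10(rms_vert / rms_fwd)
```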

[0019] To ensure that the null signal played back via the forward-firing transducer is aligned with the vertical content played back via the up-firing transducer (and optionally via one or more side-firing transducers), the null signal can be time delayed (or advanced) with respect to the output of the vertical content via the up-firing transducer or array. This time shift can be configured to compensate for the different path length that the null signal takes to reach the listener (e.g., propagating from the forward-firing transducer) as compared to the vertical content (e.g., propagating from the up-firing transducer or array).
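A minimal sketch of this time-alignment step, assuming the two propagation path lengths to the listener are known or estimated (the speed-of-sound constant and sample rate below are illustrative, not values from the disclosure):

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C, assumed
FS = 48_000             # sample rate (Hz), assumed

def null_delay_samples(vertical_path_m: float, null_path_m: float) -> int:
    """Samples to delay (positive) or advance (negative) the null output
    so it reaches the listener together with the leaked vertical content."""
    dt = (vertical_path_m - null_path_m) / SPEED_OF_SOUND
    return round(dt * FS)
```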

[0020] Figures 21 and 22 are flow diagrams of methods 2100, 2200 for playing back audio in accordance with examples of the disclosed technology. The processes described herein can be implemented by any of the devices described herein, or any other devices now known or later developed. Various embodiments of the methods described herein include one or more operations, functions, or actions illustrated by blocks. Although the blocks are illustrated in sequential order, these blocks may also be performed in parallel, and/or in a different order than the order disclosed and described herein. Also, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or removed based upon a desired implementation.

[0021] In addition, for the methods described below, and for other processes and methods disclosed herein, the flowcharts show functionality and operation of possible implementations of some embodiments. In this regard, each block may represent a module, a segment, or a portion of program code, which includes one or more instructions executable by one or more processors for implementing specific logical functions or steps in the process. The program code may be stored on any type of computer readable medium, for example, such as a storage device including a disk or hard drive. The computer readable medium may include non-transitory computer readable media, for example, such as tangible, non-transitory computer-readable media that stores data for short periods of time like register memory, processor cache, and Random-Access Memory (RAM). The computer readable medium may also include non-transitory media, such as secondary or persistent long-term storage, like read only memory (ROM), optical or magnetic disks, compact disc read only memory (CD-ROM), for example. The computer readable media may also be any other volatile or non-volatile storage systems. The computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage device. In addition, for the methods and for other processes and methods disclosed herein, each block in Figures 21 and 22 may represent circuitry that is wired to perform the specific logical functions in the process.

[0022] Figure 21 illustrates an example method 2100 for playing back audio content including a null signal. The method 2100 begins in block 2102 with receiving, at a playback device, audio input including a vertical content signal. The audio input can include any suitable audio format that includes a vertical component, such as DOLBY ATMOS, MPEG-H, DTS:X, or any other suitable 3D or other immersive audio format. The audio input can be full-channel audio input (e.g., containing all audio to be played back by the media playback system including the playback device) or can be only a subset of the multichannel audio content (e.g., containing only select channels to be played back by the particular playback device, concurrently with additional playback devices within the media playback system).

[0023] In block 2104, the method 2100 involves determining array components for audio playback. For instance, based on the received audio input, suitable output arrays can be configured such that, for a given channel of audio content, one or more transducers participate in its playback. Accordingly, any given transducer may participate in multiple different arrays simultaneously, and the output of audio via that particular transducer is a superposition of the various arrays in which it participates. For instance, in the case of a rear satellite playback device playing back a left height channel, audio may be output via each of an up-firing transducer, a left side-firing tweeter, a left side-firing woofer, and a right side-firing woofer (as in the example described above with respect to Figures 19A and 19B). Any one of these transducers that participates in playback of the left height channel array may also participate in other arrays to output other channels (e.g., a left rear surround channel).
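This superposition of arrays can be illustrated as a gain matrix applied to the channel signals. In the sketch below (all gain values are placeholders, not tuned coefficients), each transducer's feed is the sum of its contributions to every array in which it participates:

```python
import numpy as np

# Rows: channel arrays; columns: transducers. Placeholder gains only.
#              fwd   up    sideL_tw  sideL_wf  sideR_wf
ARRAY_GAINS = np.array([
    [1.00, 0.00, 0.20, 0.10, 0.00],   # left rear surround array
    [0.00, 1.00, 0.35, 0.25, 0.25],   # left height (vertical) array
])

def transducer_feeds(channel_blocks: np.ndarray) -> np.ndarray:
    """Map (n_channels, n_samples) channel signals to (n_transducers, n_samples)
    transducer feeds; each feed is a superposition of array contributions."""
    return ARRAY_GAINS.T @ channel_blocks
```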

[0024] In block 2106, the method 2100 optionally includes determining a forward-propagating energy from playback of vertical content that is greater than a threshold amount. For instance, given particular audio content to be played back via a left height channel, a model of audio output can be used to predict an amount of forward-propagating energy that would result from playback of that audio via the particular array of transducers identified in block 2104. Additionally or alternatively, a microphone (e.g., carried by a control device, another playback device in the room, a network microphone device, or any other suitable device) positioned appropriately within the environment may capture sound data indicative of the amount of forward-propagating energy during playback of the vertical content.
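As an informal sketch of the threshold comparison in block 2106 (the threshold value and signal names are placeholders, and the model or measurement that produces the forward-propagating estimate is assumed):

```python
import numpy as np

FORWARD_ENERGY_THRESHOLD = 1e-3  # placeholder value, units implementation-defined

def needs_null_signal(predicted_forward: np.ndarray) -> bool:
    """Return True if the estimated forward-propagating energy exceeds
    the predetermined acceptable level, indicating a null signal is needed."""
    energy = float(np.mean(predicted_forward ** 2))
    return energy > FORWARD_ENERGY_THRESHOLD
```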

[0025] The amount of forward-propagating energy may be influenced by the volume of audio playback, the frequency characteristics of the audio (e.g., amount of low-frequency content, mid-frequency content, and high-frequency content), or other aspects of the audio input. If the predicted amount of forward-propagating energy exceeds a predetermined threshold (e.g., indicating that forward leakage of vertical content exceeds a predetermined acceptable level), this may indicate the need for a compensatory null signal to be output via the playback device to increase directivity of playback of the vertical content.

[0026] The method 2100 can also optionally include, in block 2108, determining a center frequency (or frequencies) corresponding to the undesirable forward-propagating energy. This center frequency can be extracted from the predicted or measured forward-propagating energy in block 2106. The center frequency can reflect the frequency of highest forward-propagating energy, or a weighted average of forward-propagating energy over a given frequency range.
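One plausible realization of the weighted-average approach is a spectral centroid over a band of interest, sketched below with illustrative band edges and an assumed sample rate:

```python
import numpy as np

FS = 48_000  # sample rate (Hz), assumed

def center_frequency(forward_capture: np.ndarray,
                     f_lo: float = 200.0, f_hi: float = 4000.0) -> float:
    """Energy-weighted average frequency of the forward-propagating signal
    within [f_lo, f_hi] (an illustrative band of interest)."""
    spectrum = np.abs(np.fft.rfft(forward_capture)) ** 2
    freqs = np.fft.rfftfreq(len(forward_capture), d=1.0 / FS)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(freqs[band] * spectrum[band]) / np.sum(spectrum[band]))
```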

[0027] In block 2110, the null signal can be generated. Optionally the null signal can be generated in response to a determination in block 2106 that the forward-propagating energy exceeds a predetermined threshold. Alternatively, the null signal can be generated in every case, regardless of any measured or predicted amount of forward leakage. Additionally, the null signal can be generated based at least in part on the center frequency determined in block 2108, for instance with the null signal having a frequency response curve that is centered on the center frequency. Alternatively, the null signal can be generated without reference to any determination of a center frequency of the forward-propagating energy.

[0028] As noted previously, in some instances the null signal can be generated by phase-shifting the vertical content (e.g., the left height channel) by 180 degrees such that when the null signal is played back via one or more first transducers (e.g., along a forward sound axis of the playback device), the null signal destructively interferes with the vertical content being played back by one or more second transducers (e.g., an up-firing transducer or array playing back audio along a vertical axis).

[0029] In block 2112, the method 2100 involves playing back the null signal. In the case of a null signal configured to reduce horizontal leakage of vertical content, the null signal may be played back by one or more transducers oriented horizontally. The null signal will interfere most strongly along the axis of its output; accordingly, the null signal may be played back exclusively or primarily along the axis of unintended leakage, such as the forward sound axis or another horizontal sound axis of the playback device. In operation, playing back the null signal along the forward sound axis while the vertical content is played back along the vertical sound axis will reduce the amount of forward leakage of the vertical content along the forward sound axis. The net result is improved directivity of the vertical content and enhanced immersiveness for the listener.

[0030] Figure 22 illustrates another example method 2200 for playing back audio content including a null signal. The method 2200 begins in block 2202 with receiving, at a playback device, audio input including a vertical content signal. The audio input can include any suitable audio format that includes a vertical component, such as DOLBY ATMOS, MPEG-H, DTS:X, or any other suitable 3D or other immersive audio format.

[0031] In block 2204, the method 2200 involves playing back audio based on the vertical content signal via at least an up-firing transducer (or array of transducers that includes at least one up-firing transducer) and a side-firing transducer (or array of transducers that includes at least one side-firing transducer). Optionally, the vertical content is played back only via the up-firing transducer or array, and horizontally oriented transducers do not contribute to the playback of the vertical content.

[0032] The method 2200 continues in block 2206 with playing back a null signal via a forward-firing transducer (or array of transducers including a forward-firing transducer) such that the null signal cancels out, along a forward sound axis, at least a portion of the vertical content being played back via the up-firing transducer (or array) and the side-firing transducer (or array). This null signal can be played back concurrently with playback of the vertical content via the up-firing transducer or array. As described previously, this null signal can destructively interfere with the vertical content primarily along the forward axis (or other horizontal axis) without significantly affecting the output along the vertical sound axis. In some implementations, to achieve destructive interference, output of the null signal can be time-delayed (or advanced) relative to the vertical content to account for the spatial separation between the transducer(s) outputting the null signal and the transducer(s) outputting the vertical content.

[0033] Although several examples described herein relate to the use of a null signal directed along a forward axis to offset forward leakage of vertical content, various other implementations are also contemplated. For example, a null signal can be output via one or more side-firing transducers (with or without concurrent output via a forward-firing transducer) concurrently with output of vertical content via an up-firing transducer to prevent lateral horizontal "leakage". Similarly, a null signal can be used to improve directivity within the horizontal plane. For example, while a forward-firing transducer outputs first content (e.g., left surround), a null signal can be output via one or more side-firing transducers or arrays (on one or both lateral sides of the playback device) to reduce horizontal "leakage" of the forward-firing transducer output and increase its directivity. In various examples, a null signal can be output via any transducer or combination of transducers along a sound axis in a manner that destructively interferes with an output signal of other transducer(s) along that sound axis.

[0034] According to some implementations, a given playback device can be configured to output a first null signal via a first transducer (or array) while in a first playback mode, and to output a second null signal via a second transducer (or array) while in a second playback mode. For example, while in a first playback mode as a satellite playback device for a home theatre arrangement, the playback device can output a vertical-content null signal via a forward-firing transducer (thereby reducing vertical content along the forward sound axis). If the playback device is then transitioned from the first mode to a second mode (e.g., by being removed from the home theatre group and instead being bonded into a stereo pair with another playback device), the playback device may instead play back the null signal via a different transducer or array of transducers. In one example, if the playback device is arranged in a stereo pair, the vertical-content null signal may instead be played back via side-firing transducers (on one or both sides) rather than via the forward-firing transducer. In still other examples, different null signals can be used in different configurations; for instance, in some modes the null signal can be configured to reduce forward leakage of vertical content, while in another mode the null signal can be configured to reduce lateral leakage of side-directed or forward-directed content.
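This mode-dependent routing can be illustrated with a simple lookup. The mode names and transducer labels below are assumptions introduced for illustration, not identifiers from the disclosure:

```python
# Which transducers carry the null signal in each playback mode (illustrative).
NULL_ROUTING = {
    "home_theatre_satellite": ["forward_tweeter"],
    "stereo_pair": ["side_left_tweeter", "side_right_tweeter"],
}

def null_targets(mode: str) -> list[str]:
    """Return the transducers that should output the null signal in `mode`."""
    return NULL_ROUTING.get(mode, [])
```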

VI. Conclusion

[0134] The above discussions relating to playback devices, controller devices, playback zone configurations, and media content sources provide only some examples of operating environments within which functions and methods described below may be implemented. Other operating environments and/or configurations of media playback systems, playback devices, and network devices not explicitly described herein may also be applicable and suitable for implementation of the functions and methods.

[0135] The description above discloses, among other things, various example systems, methods, apparatus, and articles of manufacture including, among other components, firmware and/or software executed on hardware. It is understood that such examples are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of the firmware, hardware, and/or software examples or components can be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only way(s) to implement such systems, methods, apparatus, and/or articles of manufacture.

[0136] Additionally, references herein to "example" mean that a particular feature, structure, or characteristic described in connection with the example can be included in at least one example of an invention. The appearances of this phrase in various places in the specification are not necessarily all referring to the same example, nor are separate or alternative examples mutually exclusive of other examples. As such, the examples described herein, explicitly and implicitly understood by one skilled in the art, can be combined with other examples.

[0137] The specification is presented largely in terms of illustrative environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it is understood by those skilled in the art that certain examples of the present disclosure can be practiced without certain, specific details. In other instances, well known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the examples. Accordingly, the scope of the present disclosure is defined by the appended claims rather than the foregoing description of examples.

[0138] When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware.

VII. Examples

[0139] The disclosed technology is illustrated, for example, according to various examples described below. Various examples of the disclosed technology are described as numbered examples for convenience. These are provided as examples and do not limit the disclosed technology. It is noted that any of the dependent examples may be combined in any combination, and placed into a respective independent example. The other examples can be presented in a similar manner.

[0140] Example 1: A media playback system comprising: a primary playback device; a satellite playback device; one or more processors; a network interface; and data storage having instructions stored thereon that are executable by the one or more processors to cause the media playback system to perform operations comprising: receiving, at the primary playback device, source audio data; obtaining n channels of satellite audio data from the source audio data to be played back via the satellite playback device; downmixing the n source channels of satellite audio data to m downmixed channels of audio satellite data according to a first downmixing scheme, wherein m<n, and wherein the first downmixing scheme is based at least in part on one or more first input parameters; and wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback.

[0141] Example 2. The media playback system of any one of the Examples herein, wherein the operations further comprise: after wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback, receiving, at the primary playback device, one or more second input parameters different from the one or more first input parameters; receiving, at the primary playback device, second source audio data; downmixing the n channels of satellite audio data to m channels of audio satellite data according to a second downmixing scheme different from the first, wherein the second downmixing scheme is based at least in part on one or more second input parameters; and wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback.

[0142] Example 3. The media playback system of any one of the Examples herein, wherein the operations further comprise: after wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback, receiving, at the primary playback device, second source audio data; determining that network conditions are sufficient to transmit n channels of satellite audio data of the second source audio data to the satellite playback device; and wirelessly transmitting the n channels of satellite audio data of the second source audio data to the satellite playback device.

[0143] Example 4. The media playback system of any one of the Examples herein, wherein the one or more input parameters comprise one or more of: an audio content parameter, a device location parameter, a listener location parameter, a playback responsibilities parameter, or an environmental acoustics parameter.

[0144] Example 5. The media playback system of any one of the Examples herein, wherein the operations further comprise: at the satellite playback device, upmixing the m channels of audio satellite data to generate n channels of audio satellite data; and playing back, via the satellite playback device, the upmixed n channels of audio data.

[0145] Example 6. The media playback system of any one of the Examples herein, wherein playing back the upmixed n channels of audio data comprises arraying the n channels to be output via a plurality of transducers of the satellite playback device such that each of the plurality of transducers outputs at least a portion of each of the n channels.

[0146] Example 7. The media playback system of any one of the Examples herein, further comprising playing back audio via the primary playback device in synchrony with playback of the upmixed n channels of audio data via the satellite playback device.

[0147] Example 8. The media playback system of any one of the Examples herein, wherein downmixing the n channels of satellite audio data to m channels of audio satellite data according to the first downmixing scheme comprises combining audio content from each of the n channels below a threshold frequency into a single one of the m downmixed channels.

[0148] Example 9. The media playback system of any one of the Examples herein, wherein downmixing the n channels of satellite audio data to m channels of audio satellite data according to the first downmixing scheme comprises (i) mapping a first channel of the n source channels to a first channel of the m downmixed channels, (ii) mapping a first portion of a second channel of the n source channels to a first portion of a second channel of the m downmixed channels, and (iii) mapping a second portion of a third channel of the n source channels to a second portion of the second channel of the m downmixed channels.

[0149] Example 10. The media playback system of any one of the Examples herein, wherein downmixing the n channels of satellite audio data to m channels of audio satellite data according to the first downmixing scheme comprises mapping a second portion of the second channel of the n source channels to the second portion of the second channel of the m downmixed channels such that the second portion of the second channel of the m downmixed channels comprises a combination of (i) the second portion of the second channel of the n source channels and (ii) the second portion of the third channel of the n source channels.
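As a non-authoritative illustration of the downmixing schemes of Examples 8-10, the sketch below interprets the recited "portions" as frequency bands, which is one possible reading rather than the only one: bass from all n = 3 source channels is combined into a single downmixed channel (Example 8), one source channel maps directly (Example 9(i)), and band-limited portions of two other source channels share a downmixed channel (Examples 9-10). The 200 Hz crossover, filter order, and sample rate are placeholders.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000  # sample rate (Hz), assumed
LOW = butter(4, 200.0, btype="lowpass", fs=FS, output="sos")
HIGH = butter(4, 200.0, btype="highpass", fs=FS, output="sos")

def downmix_3_to_2(src: np.ndarray) -> np.ndarray:
    """src: (3, n_samples) source channels -> (2, n_samples) downmixed channels."""
    bass = sosfilt(LOW, src.sum(axis=0))                 # Example 8: shared bass
    ch1 = sosfilt(HIGH, src[0]) + bass                   # Example 9(i): direct map
    ch2 = sosfilt(HIGH, src[1]) + sosfilt(HIGH, src[2])  # Examples 9-10: combined portions
    return np.stack([ch1, ch2])
```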

[0150] Example 11. The media playback system of any one of the Examples herein, wherein the operations further comprise determining that wireless network conditions are insufficient to transmit the n source channels of satellite audio data to the satellite playback device.

[0151] Example 12. A method comprising: receiving, at a primary playback device, source audio data; obtaining n channels of satellite audio data from the source audio data to be played back via a satellite playback device; downmixing the n source channels of satellite audio data to m downmixed channels of audio satellite data according to a first downmixing scheme, wherein m<n, and wherein the first downmixing scheme is based at least in part on one or more first input parameters; and wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback.

[0152] Example 13. The method of any one of the Examples herein, further comprising: after wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback, receiving, at the primary playback device, one or more second input parameters different from the one or more first input parameters; receiving, at the primary playback device, second source audio data; downmixing the n channels of satellite audio data to m channels of audio satellite data according to a second downmixing scheme different from the first, wherein the second downmixing scheme is based at least in part on one or more second input parameters; and wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback.

[0153] Example 14. The method of any one of the Examples herein, further comprising: after wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback, receiving, at the primary playback device, second source audio data; determining that network conditions are sufficient to transmit n channels of satellite audio data of the second source audio data to the satellite playback device; and wirelessly transmitting the n channels of satellite audio data of the second source audio data to the satellite playback device.

[0154] Example 15. The method of any one of the Examples herein, wherein the one or more input parameters comprise one or more of: an audio content parameter, a device location parameter, a listener location parameter, a playback responsibilities parameter, or an environmental acoustics parameter.

[0155] Example 16. The method of any one of the Examples herein, further comprising: at the satellite playback device, upmixing the m channels of audio satellite data to generate n channels of audio satellite data; and playing back, via the satellite playback device, the upmixed n channels of audio data.

[0156] Example 17. The method of any one of the Examples herein, wherein playing back the upmixed n channels of audio data comprises arraying the n channels to be output via a plurality of transducers of the satellite playback device such that each of the plurality of transducers outputs at least a portion of each of the n channels.

[0157] Example 18. The method of any one of the Examples herein, further comprising playing back audio via the primary playback device in synchrony with playback of the upmixed n channels of audio data via the satellite playback device.

[0158] Example 19. The method of any one of the Examples herein, wherein downmixing the n channels of satellite audio data to m channels of audio satellite data according to the first downmixing scheme comprises combining audio content from each of the n channels below a threshold frequency into a single one of the m downmixed channels.

[0159] Example 20. The method of any one of the Examples herein, wherein downmixing the n channels of satellite audio data to m channels of audio satellite data according to the first downmixing scheme comprises (i) mapping a first channel of the n source channels to a first channel of the m downmixed channels, (ii) mapping a first portion of a second channel of the n source channels to a first portion of a second channel of the m downmixed channels, and (iii) mapping a second portion of a third channel of the n source channels to a second portion of the second channel of the m downmixed channels.

[0160] Example 21. The method of any one of the Examples herein, wherein downmixing the n channels of satellite audio data to m channels of audio satellite data according to the first downmixing scheme comprises mapping a second portion of the second channel of the n source channels to the second portion of the second channel of the m downmixed channels such that the second portion of the second channel of the m downmixed channels comprises a combination of (i) the second portion of the second channel of the n source channels and (ii) the second portion of the third channel of the n source channels.

[0161] Example 22. The method of any one of the Examples herein, further comprising determining that wireless network conditions are insufficient to transmit the n source channels of satellite audio data to the satellite playback device.

[0162] Example 23. One or more tangible, non-transitory computer-readable media storing instructions that, when executed by one or more processors of a media playback system comprising a primary playback device and a satellite playback device, cause the media playback system to perform operations comprising: receiving, at a primary playback device, source audio data; obtaining n channels of satellite audio data from the source audio data to be played back via a satellite playback device; downmixing the n source channels of satellite audio data to m downmixed channels of audio satellite data according to a first downmixing scheme, wherein m<n, and wherein the first downmixing scheme is based at least in part on one or more first input parameters; and wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback.

[0163] Example 24. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: after wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback, receiving, at the primary playback device, one or more second input parameters different from the one or more first input parameters; receiving, at the primary playback device, second source audio data; downmixing the n channels of satellite audio data to m channels of audio satellite data according to a second downmixing scheme different from the first, wherein the second downmixing scheme is based at least in part on one or more second input parameters; and wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback.

[0164] Example 25. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: after wirelessly transmitting the downmixed m channels of audio data to the satellite playback device for playback, receiving, at the primary playback device, second source audio data; determining that network conditions are sufficient to transmit n channels of satellite audio data of the second source audio data to the satellite playback device; and wirelessly transmitting the n channels of satellite audio data of the second source audio data to the satellite playback device.

[0165] Example 26. The computer-readable media of any one of the Examples herein, wherein the one or more input parameters comprise one or more of: an audio content parameter, a device location parameter, a listener location parameter, a playback responsibilities parameter, or an environmental acoustics parameter.

[0166] Example 27. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: at the satellite playback device, upmixing the m channels of audio satellite data to generate n channels of audio satellite data; and playing back, via the satellite playback device, the upmixed n channels of audio data.

[0167] Example 28. The computer-readable media of any one of the Examples herein, wherein playing back the upmixed n channels of audio data comprises arraying the n channels to be output via a plurality of transducers of the satellite playback device such that each of the plurality of transducers outputs at least a portion of each of the n channels.

[0168] Example 29. The computer-readable media of any one of the Examples herein, further comprising playing back audio via the primary playback device in synchrony with playback of the upmixed n channels of audio data via the satellite playback device.

[0169] Example 30. The computer-readable media of any one of the Examples herein, wherein downmixing the n channels of satellite audio data to m channels of audio satellite data according to the first downmixing scheme comprises combining audio content from each of the n channels below a threshold frequency into a single one of the m downmixed channels.

[0170] Example 31. The computer-readable media of any one of the Examples herein, wherein downmixing the n channels of satellite audio data to m channels of audio satellite data according to the first downmixing scheme comprises (i) mapping a first channel of the n source channels to a first channel of the m downmixed channels, (ii) mapping a first portion of a second channel of the n source channels to a first portion of a second channel of the m downmixed channels, and (iii) mapping a second portion of a third channel of the n source channels to a second portion of the second channel of the m downmixed channels.

[0171] Example 32. The computer-readable media of any one of the Examples herein, wherein downmixing the n channels of satellite audio data to m channels of audio satellite data according to the first downmixing scheme comprises mapping a second portion of the second channel of the n source channels to the second portion of the second channel of the m downmixed channels such that the second portion of the second channel of the m downmixed channels comprises a combination of (i) the second portion of the second channel of the n source channels and (ii) the second portion of the third channel of the n source channels.

[0172] Example 33. The computer-readable media of any one of the Examples herein, wherein the operations further comprise determining that wireless network conditions are insufficient to transmit the n source channels of satellite audio data to the satellite playback device.

[0173] Example 34. A media playback system comprising: a plurality of playback devices including at least a first playback device and a second playback device; one or more processors; and data storage having instructions stored thereon that are executable by the one or more processors to cause the media playback system to perform operations comprising: receiving a request to form a bonded zone including the plurality of playback devices configured to synchronously play back audio content; determining, for a first playback parameter, a first value for each of the plurality of playback devices; determining, for a second playback parameter, a second value for each of the plurality of playback devices; based on the first playback parameter value for the first playback device, adjusting the first playback parameter value for the second playback device; based on the second playback parameter value for the second playback device, adjusting the second playback parameter value for the first playback device; and synchronously playing back audio content via the plurality of playback devices in the bonded zone.
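For illustration, the mutual adjustment recited in Example 34 might be sketched as nudging each device's value for a parameter toward that of a reference device. All names, values, and the blend factor below are placeholders, not values from the disclosure:

```python
def adjust_toward(value: float, reference: float, blend: float = 0.5) -> float:
    """Move `value` partway toward `reference` (blend=1.0 copies it exactly)."""
    return value + blend * (reference - value)

# e.g., align device 2's characteristic magnitude to device 1's, and
# device 1's characteristic phase to device 2's (placeholder measurements):
mag = {"dev1": -1.5, "dev2": 0.8}    # dB
phase = {"dev1": 20.0, "dev2": 5.0}  # degrees
mag["dev2"] = adjust_toward(mag["dev2"], mag["dev1"])
phase["dev1"] = adjust_toward(phase["dev1"], phase["dev2"])
```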

[0174] Example 35. The media playback system of any one of the Examples herein, wherein the first playback parameter comprises a characteristic playback magnitude, and wherein the second playback parameter comprises a characteristic playback phase.

[0175] Example 36. The media playback system of any one of the Examples herein, wherein adjusting the first playback parameter value for the second playback device comprises adjusting the first playback parameter value for the second playback device to be closer to the first playback parameter value for the first playback device, and wherein adjusting the second playback parameter for the first playback device comprises adjusting the second playback parameter value for the first playback device to be closer to the second playback parameter value for the second playback device.

[0176] Example 37. The media playback system of any one of the Examples herein, wherein the operations further comprise: receiving a request to add a third playback device to the bonded zone; determining, for the first playback parameter, a first value for the third playback device; and based on the first playback parameter value for the third playback device, adjusting the first playback parameter value for each of the first playback device and the second playback device.

[0177] Example 38. The media playback system of any one of the Examples herein, wherein the operations further comprise: receiving a request to assign different playback responsibilities within the bonded zone to the first playback device; and based on the different playback responsibilities, adjusting the second playback parameter value for the first playback device.

[0178] Example 39. The media playback system of any one of the Examples herein, wherein the operations further comprise: after determining the first playback parameter values, selecting the first playback device as a first reference device for the first playback parameter; and after determining the second playback parameter values, selecting the second playback device as a second reference device for the second playback parameter.

[0179] Example 40. The media playback system of any one of the Examples herein, wherein the operations further comprise: determining that the first playback device has moved its location; based on determining that the first playback device has moved, selecting a different playback device as the first reference device for the first playback parameter; and based on the first playback parameter value for the first reference device, adjusting the first playback parameter value for at least the first playback device.

[0180] Example 41. The media playback system of any one of the Examples herein, wherein determining, for the first playback parameter, a first value for each of the plurality of playback devices comprises capturing audio output via each of the plurality of playback devices via one or more microphones, and analyzing the captured audio output to determine the first playback parameter values.

[0181] Example 42. A method comprising: receiving a request to form a bonded zone including a plurality of playback devices configured to synchronously play back audio content, the plurality of playback devices including at least a first playback device and a second playback device; determining, for a first playback parameter, a first value for each of the plurality of playback devices; determining, for a second playback parameter, a second value for each of the plurality of playback devices; based on the first playback parameter value for the first playback device, adjusting the first playback parameter value for the second playback device; based on the second playback parameter value for the second playback device, adjusting the second playback parameter value for the first playback device; and synchronously playing back audio content via the plurality of playback devices in the bonded zone.

[0182] Example 43. The method of any one of the Examples herein, wherein the first playback parameter comprises a characteristic playback magnitude, and wherein the second playback parameter comprises a characteristic playback phase.

[0183] Example 44. The method of any one of the Examples herein, wherein adjusting the first playback parameter value for the second playback device comprises adjusting the first playback parameter value for the second playback device to be closer to the first playback parameter value for the first playback device, and wherein adjusting the second playback parameter for the first playback device comprises adjusting the second playback parameter value for the first playback device to be closer to the second playback parameter value for the second playback device.

[0184] Example 45. The method of any one of the Examples herein, further comprising: receiving a request to add a third playback device to the bonded zone; determining, for the first playback parameter, a first value for the third playback device; and based on the first playback parameter value for the third playback device, adjusting the first playback parameter value for each of the first playback device and the second playback device.

[0185] Example 46. The method of any one of the Examples herein, further comprising: receiving a request to assign different playback responsibilities within the bonded zone to the first playback device; and based on the different playback responsibilities, adjusting the second playback parameter value for the first playback device.

[0186] Example 47. The method of any one of the Examples herein, further comprising: after determining the first playback parameter values, selecting the first playback device as a first reference device for the first playback parameter; and after determining the second playback parameter values, selecting the second playback device as a second reference device for the second playback parameter.

[0187] Example 48. The method of any one of the Examples herein, further comprising: determining that the first playback device has moved its location; based on determining that the first playback device has moved, selecting a different playback device as the first reference device for the first playback parameter; and based on the first playback parameter value for the first reference device, adjusting the first playback parameter value for at least the first playback device.

[0188] Example 49. The method of any one of the Examples herein, wherein determining, for the first playback parameter, a first value for each of the plurality of playback devices comprises capturing audio output via each of the plurality of playback devices via one or more microphones, and analyzing the captured audio output to determine the first playback parameter values.

[0189] Example 50. One or more tangible, non-transitory computer-readable media storing instructions that, when executed by one or more processors of a media playback system, cause the media playback system to perform operations comprising: receiving a request to form a bonded zone including a plurality of playback devices configured to synchronously play back audio content, the plurality of playback devices comprising at least a first playback device and a second playback device; determining, for a first playback parameter, a first value for each of the plurality of playback devices; determining, for a second playback parameter, a second value for each of the plurality of playback devices; based on the first playback parameter value for the first playback device, adjusting the first playback parameter value for the second playback device; based on the second playback parameter value for the second playback device, adjusting the second playback parameter value for the first playback device; and synchronously playing back audio content via the plurality of playback devices in the bonded zone.

[0190] Example 51. The computer-readable media of any one of the Examples herein, wherein the first playback parameter comprises a characteristic playback magnitude, and wherein the second playback parameter comprises a characteristic playback phase.

[0191] Example 52. The computer-readable media of any one of the Examples herein, wherein adjusting the first playback parameter value for the second playback device comprises adjusting the first playback parameter value for the second playback device to be closer to the first playback parameter value for the first playback device, and wherein adjusting the second playback parameter for the first playback device comprises adjusting the second playback parameter value for the first playback device to be closer to the second playback parameter value for the second playback device.

[0192] Example 53. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: receiving a request to add a third playback device to the bonded zone; determining, for the first playback parameter, a first value for the third playback device; and based on the first playback parameter value for the third playback device, adjusting the first playback parameter value for each of the first playback device and the second playback device.

[0193] Example 54. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: receiving a request to assign different playback responsibilities within the bonded zone to the first playback device; and based on the different playback responsibilities, adjusting the second playback parameter value for the first playback device.

[0194] Example 55. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: after determining the first playback parameter values, selecting the first playback device as a first reference device for the first playback parameter; and after determining the second playback parameter values, selecting the second playback device as a second reference device for the second playback parameter.

[0195] Example 56. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: determining that the first playback device has moved its location; based on determining that the first playback device has moved, selecting a different playback device as the first reference device for the first playback parameter; and based on the first playback parameter value for the first reference device, adjusting the first playback parameter value for at least the first playback device.

[0196] Example 57. The computer-readable media of any one of the Examples herein, wherein determining, for the first playback parameter, a first value for each of the plurality of playback devices comprises capturing audio output via each of the plurality of playback devices via one or more microphones, and analyzing the captured audio output to determine the first playback parameter values.

[0197] Example 58. A media playback system comprising: a plurality of playback devices including a rear satellite playback device; one or more processors; and data storage having instructions stored thereon that, when executed by the one or more processors, cause the media playback system to perform operations comprising: receiving, at the media playback system, multi-channel audio content comprising a side surround channel and a rear surround channel; playing back, via the rear satellite playback device, the side surround channel at a first magnitude; playing back, via the rear satellite playback device, the rear surround channel at a second magnitude; receiving a trigger indication; based on the trigger indication: playing back, via the rear satellite playback device, the side surround channel at a third magnitude lower than the first magnitude; and playing back, via the rear satellite playback device, the rear surround channel at a fourth magnitude greater than the second magnitude.
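A minimal sketch of the trigger-driven magnitude change recited in Example 58 (the gain values and 3 dB step are placeholders; the disclosure does not specify particular magnitudes):

```python
# First and second magnitudes as illustrative linear gains.
gains = {"side_surround": 1.0, "rear_surround": 0.5}

def on_trigger(gains: dict, step_db: float = 3.0) -> dict:
    """On a trigger indication, lower the side-surround magnitude and raise
    the rear-surround magnitude (third < first, fourth > second)."""
    scale = 10 ** (step_db / 20.0)
    return {"side_surround": gains["side_surround"] / scale,
            "rear_surround": gains["rear_surround"] * scale}
```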

[0198] Example 59. The media playback system of any one of the Examples herein, wherein the trigger indication comprises a user input (e.g., user input via a controller device, voice input, etc.).

[0199] Example 60. The media playback system of any one of the Examples herein, wherein the trigger indication comprises detection of an environmental acoustics parameter (e.g., calibration, object detection in environment), a device position parameter, or a listener location parameter.

[0200] Example 61. The media playback system of any one of the Examples herein, wherein the rear satellite playback device comprises a plurality of audio transducers.

[0201] Example 62. The media playback system of any one of the Examples herein, wherein the relative playback magnitudes of the side surround channel and the rear surround channel determine a perceived width of the audio playback.

[0202] Example 63. The media playback system of any one of the Examples herein, wherein the operations further comprise: receiving a second trigger indication; and based on the second trigger indication: playing back, via the rear satellite playback device, the side surround channel at a fifth magnitude lower than the third magnitude; and playing back, via the rear satellite playback device, the rear surround channel at a sixth magnitude greater than the fourth magnitude.

[0203] Example 64. The media playback system of any one of the Examples herein, wherein the side surround channel is a right side surround channel, the rear surround channel is a right rear surround channel, and the rear satellite playback device is a rear right satellite playback device, the media playback system further comprising a left rear satellite playback device, wherein the operations further comprise: while playing back, via the rear right satellite playback device, the right side surround channel at the first magnitude, playing back, via the left rear satellite playback device, a left side surround channel at the first magnitude; while playing back, via the rear right satellite playback device, the rear right surround channel at the second magnitude, playing back, via the left rear satellite playback device, a left rear surround channel at the second magnitude; based on the trigger indication: playing back, via the left rear satellite playback device, the left side surround channel at the third magnitude lower than the first magnitude; and playing back, via the left rear satellite playback device, the left rear surround channel at the fourth magnitude greater than the second magnitude.

[0204] Example 65. A method comprising: receiving, at a media playback system comprising a plurality of playback devices including a rear satellite playback device, multi-channel audio content comprising a side surround channel and a rear surround channel; playing back, via the rear satellite playback device, the side surround channel at a first magnitude; playing back, via the rear satellite playback device, the rear surround channel at a second magnitude; receiving a trigger indication; based on the trigger indication: playing back, via the rear satellite playback device, the side surround channel at a third magnitude lower than the first magnitude; and playing back, via the rear satellite playback device, the rear surround channel at a fourth magnitude greater than the second magnitude.

[0205] Example 66. The method of any one of the Examples herein, wherein the trigger indication comprises a user input (e.g., user input via a controller device, voice input, etc.).

[0206] Example 67. The method of any one of the Examples herein, wherein the trigger indication comprises detection of an environmental acoustics parameter (e.g., calibration, object detection in environment), a device position parameter, or a listener location parameter.

[0207] Example 68. The method of any one of the Examples herein, wherein the rear satellite playback device comprises a plurality of audio transducers.

[0208] Example 69. The method of any one of the Examples herein, wherein the relative playback magnitudes of the side surround channel and the rear surround channel determine a perceived width of the audio playback.

[0209] Example 70. The method of any one of the Examples herein, further comprising: receiving a second trigger indication; and based on the second trigger indication: playing back, via the rear satellite playback device, the side surround channel at a fifth magnitude lower than the third magnitude; and playing back, via the rear satellite playback device, the rear surround channel at a sixth magnitude greater than the fourth magnitude.

[0210] Example 71. The method of any one of the Examples herein, wherein the side surround channel is a right side surround channel, the rear surround channel is a right rear surround channel, and the rear satellite playback device is a rear right satellite playback device, the method further comprising: while playing back, via the rear right satellite playback device, the right side surround channel at the first magnitude, playing back, via a left rear satellite playback device, a left side surround channel at the first magnitude; while playing back, via the rear right satellite playback device, the rear right surround channel at the second magnitude, playing back, via the left rear satellite playback device, a left rear surround channel at the second magnitude; based on the trigger indication: playing back, via the left rear satellite playback device, the left side surround channel at the third magnitude lower than the first magnitude; and playing back, via the left rear satellite playback device, the left rear surround channel at the fourth magnitude greater than the second magnitude.

[0211] Example 72. One or more tangible, non-transitory computer-readable media storing instructions thereon that, when executed by one or more processors of a media playback system, cause the media playback system to perform operations comprising: receiving, at the media playback system, multi-channel audio content comprising a side surround channel and a rear surround channel; playing back, via a rear satellite playback device of the media playback system, the side surround channel at a first magnitude; playing back, via the rear satellite playback device, the rear surround channel at a second magnitude; receiving a trigger indication; based on the trigger indication: playing back, via the rear satellite playback device, the side surround channel at a third magnitude lower than the first magnitude; and playing back, via the rear satellite playback device, the rear surround channel at a fourth magnitude greater than the second magnitude.

[0212] Example 73. The computer-readable media of any one of the Examples herein, wherein the trigger indication comprises a user input (e.g., user input via a controller device, voice input, etc.).

[0213] Example 74. The computer-readable media of any one of the Examples herein, wherein the trigger indication comprises detection of an environmental acoustics parameter (e.g., calibration, object detection in environment), a device position parameter, or a listener location parameter.

[0214] Example 75. The computer-readable media of any one of the Examples herein, wherein the rear satellite playback device comprises a plurality of audio transducers.

[0215] Example 76. The computer-readable media of any one of the Examples herein, wherein the relative playback magnitudes of the side surround channel and the rear surround channel determine a perceived width of the audio playback.

[0216] Example 77. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: receiving a second trigger indication; and based on the second trigger indication: playing back, via the rear satellite playback device, the side surround channel at a fifth magnitude lower than the third magnitude; and playing back, via the rear satellite playback device, the rear surround channel at a sixth magnitude greater than the fourth magnitude.

[0217] Example 78. The computer-readable media of any one of the Examples herein, wherein the side surround channel is a right side surround channel, the rear surround channel is a right rear surround channel, and the rear satellite playback device is a rear right satellite playback device, the media playback system further comprising a left rear satellite playback device, wherein the operations further comprise: while playing back, via the rear right satellite playback device, the right side surround channel at the first magnitude, playing back, via the left rear satellite playback device, a left side surround channel at the first magnitude; while playing back, via the rear right satellite playback device, the right rear surround channel at the second magnitude, playing back, via the left rear satellite playback device, a left rear surround channel at the second magnitude; based on the trigger indication: playing back, via the left rear satellite playback device, the left side surround channel at the third magnitude lower than the first magnitude; and playing back, via the left rear satellite playback device, the left rear surround channel at the fourth magnitude greater than the second magnitude.

[0218] Example 79. A media playback system comprising: a plurality of playback devices including a center front playback device and a front satellite playback device; one or more processors; and data storage having instructions stored thereon that, when executed by the one or more processors, cause the media playback system to perform operations comprising: receiving, at the media playback system, multi-channel audio content comprising a front surround channel; playing back the front surround channel via only the front satellite playback device; receiving a trigger indication; and based on the trigger indication, playing back the front surround channel via both the front satellite playback device and the center front playback device in synchrony.

[0219] Example 80. The media playback system of any one of the Examples herein, wherein the trigger indication comprises a user input (e.g., user input via a controller device, voice input, etc.).

[0220] Example 81. The media playback system of any one of the Examples herein, wherein the trigger indication comprises detection of an environmental acoustics parameter (e.g., calibration, object detection in environment), a device position parameter, or a listener location parameter.

[0221] Example 82. The media playback system of any one of the Examples herein, wherein the operations further comprise: receiving a second trigger indication; and based on the second trigger indication, decreasing a playback magnitude of the front surround channel via the front satellite playback device and increasing a playback magnitude of the front surround channel via the center front playback device.

[0222] Example 83. The media playback system of any one of the Examples herein, wherein the relative playback magnitudes of the front surround channel via the front satellite playback device and via the center front playback device determine a perceived width of the audio playback.
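
As an illustration of Examples 79-83, the same front surround channel is fed to two devices at complementary levels, with the split moved by each trigger indication. The sketch below is a minimal, hypothetical rendering of that idea; the name front_surround_mix and the center_share parameter are invented, and the linear split is an assumption rather than anything recited in the Examples.

```python
def front_surround_mix(samples: list[float], center_share: float):
    """Split one front surround channel between a front satellite
    device and a center front device (e.g., a soundbar).

    center_share = 0.0 matches the pre-trigger state of Example 79
    (satellite only); a trigger indication raises it, sending part
    of the channel to the center device for synchronous playback.
    """
    satellite_feed = [s * (1.0 - center_share) for s in samples]
    center_feed = [s * center_share for s in samples]
    return satellite_feed, center_feed

frame = [0.10, -0.25, 0.30]
sat_only, silent = front_surround_mix(frame, 0.0)  # satellite only
after_trigger = front_surround_mix(frame, 0.4)     # both devices in synchrony
narrower = front_surround_mix(frame, 0.7)          # second trigger (Example 82)
```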

[0223] Example 84. The media playback system of any one of the Examples herein, wherein the front surround channel is a front right surround channel, and the front satellite playback device is a front right satellite playback device, the operations further comprising: while playing back the front right surround channel via only the front right satellite playback device, synchronously playing back a front left surround channel via only a front left satellite playback device; and while playing back the front right surround channel via both the front right satellite playback device and the center front playback device in synchrony, synchronously playing back the front left surround channel via both the front left satellite playback device and the center front playback device.

[0224] Example 85. The media playback system of any one of the Examples herein, wherein the center front playback device comprises a soundbar having a plurality of transducers.

[0225] Example 86. A method comprising: receiving, at a media playback system comprising a plurality of playback devices including a front satellite playback device, multi-channel audio content comprising a front surround channel; playing back the front surround channel via only the front satellite playback device; receiving a trigger indication; and based on the trigger indication, playing back the front surround channel via both the front satellite playback device and a center front playback device in synchrony.

[0226] Example 87. The method of any one of the Examples herein, wherein the trigger indication comprises a user input (e.g., user input via a controller device, voice input, etc.).

[0227] Example 88. The method of any one of the Examples herein, wherein the trigger indication comprises detection of an environmental acoustics parameter (e.g., calibration, object detection in environment), a device position parameter, or a listener location parameter.

[0228] Example 89. The method of any one of the Examples herein, further comprising: receiving a second trigger indication; and based on the second trigger indication, decreasing a playback magnitude of the front surround channel via the front satellite playback device and increasing a playback magnitude of the front surround channel via the center front playback device.

[0229] Example 90. The method of any one of the Examples herein, wherein the relative playback magnitudes of the front surround channel via the front satellite playback device and via the center front playback device determine a perceived width of the audio playback.

[0230] Example 91. The method of any one of the Examples herein, wherein the front surround channel is a front right surround channel, and the front satellite playback device is a front right satellite playback device, the method further comprising: while playing back the front right surround channel via only the front right satellite playback device, synchronously playing back a front left surround channel via only a front left satellite playback device; and while playing back the front right surround channel via both the front right satellite playback device and the center front playback device in synchrony, synchronously playing back the front left surround channel via both the front left satellite playback device and the center front playback device.

[0231] Example 92. The method of any one of the Examples herein, wherein the center front playback device comprises a soundbar having a plurality of transducers.

[0232] Example 93. One or more tangible, non-transitory computer-readable media storing instructions that, when executed by one or more processors of a media playback system, cause the media playback system to perform operations comprising: receiving, at the media playback system, multi-channel audio content comprising a front surround channel; playing back the front surround channel via only a front satellite playback device of the media playback system; receiving a trigger indication; and based on the trigger indication, playing back the front surround channel via both the front satellite playback device and a center front playback device of the media playback system in synchrony.

[0233] Example 94. The computer-readable media of any one of the Examples herein, wherein the trigger indication comprises a user input (e.g., user input via a controller device, voice input, etc.).

[0234] Example 95. The computer-readable media of any one of the Examples herein, wherein the trigger indication comprises detection of an environmental acoustics parameter (e.g., calibration, object detection in environment), a device position parameter, or a listener location parameter.

[0235] Example 96. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: receiving a second trigger indication; and based on the second trigger indication, decreasing a playback magnitude of the front surround channel via the front satellite playback device and increasing a playback magnitude of the front surround channel via the center front playback device.

[0236] Example 97. The computer-readable media of any one of the Examples herein, wherein the relative playback magnitudes of the front surround channel via the front satellite playback device and via the center front playback device determine a perceived width of the audio playback.

[0237] Example 98. The computer-readable media of any one of the Examples herein, wherein the front surround channel is a front right surround channel, and the front satellite playback device is a front right satellite playback device, the operations further comprising: while playing back the front right surround channel via only the front right satellite playback device, synchronously playing back a front left surround channel via only a front left satellite playback device; and while playing back the front right surround channel via both the front right satellite playback device and the center front playback device in synchrony, synchronously playing back the front left surround channel via both the front left satellite playback device and the center front playback device.

[0238] Example 99. The computer-readable media of any one of the Examples herein, wherein the center front playback device comprises a soundbar having a plurality of transducers.

[0239] Example 100. A playback device comprising: a plurality of audio transducers configured to output audio along a plurality of sound axes including at least a forward-firing axis and a side-firing axis; one or more processors; and data storage having instructions thereon that, when executed by the one or more processors, cause the playback device to perform operations comprising: receiving multichannel audio content including a first audio channel; playing back at least a first proportion of the first audio channel via the forward-firing axis; obtaining an indication of orientation of the playback device relative to the environment; and based at least in part on the orientation indication, modifying audio playback such that at least a second proportion of the first channel is played back via the side-firing axis rather than via the forward-firing axis.

[0240] Example 101. The playback device of any one of the Examples herein, wherein the operations further comprise: while playing back at least the first proportion of the first audio channel via the forward-firing axis, playing back none of the first audio channel via the side-firing axis; and while playing back at least the second proportion of the first audio channel via the side-firing axis, playing back none of the first audio channel via the forward-firing axis.

[0241] Example 102. The playback device of any one of the Examples herein, wherein the operations further comprise: while playing back at least the first proportion of the first audio channel via the forward-firing axis, playing back a third proportion of the first audio channel via the side-firing axis; and while playing back at least the second proportion of the first audio channel via the side-firing axis, playing back a fourth proportion of the first audio channel via the forward-firing axis, wherein the first proportion is greater than the fourth proportion, and the second proportion is greater than the third proportion.

[0242] Example 103. The playback device of any one of the Examples herein, wherein the operations further comprise modifying audio playback based at least in part on the orientation indication such that a third proportion of the first channel is output via the forward-firing axis, the third proportion being less than the first proportion.
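
Examples 100-103 describe shifting a channel's output between a forward-firing axis and a side-firing axis as a function of device orientation. A minimal sketch of such a mapping follows, assuming the orientation indication arrives as an angle in degrees; the linear ramp and the 90-degree normalization are illustrative choices, not part of the disclosure.

```python
def axis_proportions(orientation_deg: float) -> tuple[float, float]:
    """Return (forward_proportion, side_proportion) for one audio
    channel, given the playback device's angular orientation
    relative to the intended listening location (0 degrees means
    the forward-firing axis points at the listener).
    """
    misalignment = min(abs(orientation_deg), 90.0) / 90.0
    return 1.0 - misalignment, misalignment

# Device roughly facing the listener: mostly forward-firing
# (the "first proportion" of Example 100).
forward1, side1 = axis_proportions(10.0)

# The orientation indication shows the device rotated away, so a
# "second proportion" is routed to the side-firing axis instead.
forward2, side2 = axis_proportions(70.0)
assert side2 > side1 and forward2 < forward1
```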

[0243] Example 104. The playback device of any one of the Examples herein, wherein the operations further comprise modifying audio playback based at least in part on the orientation indication such that none of the first audio channel is output via the forward-firing axis.

[0244] Example 105. The playback device of any one of the Examples herein, wherein the playback device is a front left satellite playback device, the first channel is a front left channel, and the side-firing axis is a right side-firing axis, and wherein the indication of orientation reflects that the forward-firing axis is oriented forward of an intended listening location within the environment.

[0245] Example 106. The playback device of any one of the Examples herein, wherein the right side-firing axis is oriented nearer to the intended listening location than the forward-firing axis.

[0246] Example 107. The playback device of any one of the Examples herein, wherein the playback device is a front left satellite playback device, the first channel is a front left channel, and the side-firing axis is a left side-firing axis, and wherein the indication of orientation reflects that the forward-firing axis is oriented rearward of an intended listening location within the environment.

[0247] Example 108. The playback device of any one of the Examples herein, wherein the left side-firing axis is oriented nearer to the intended listening location than the forward-firing axis.

[0248] Example 109. The playback device of any one of the Examples herein, wherein obtaining an indication of orientation comprises determining an angular orientation of the playback device.

[0249] Example 110. The playback device of any one of the Examples herein, wherein obtaining an indication of orientation comprises receiving, via the network interface, an angular orientation of the playback device.

[0250] Example 111. A method comprising: receiving, at a playback device, multichannel audio content including a first audio channel, the playback device comprising a plurality of audio transducers configured to output audio along a plurality of sound axes including at least a forward-firing axis and a side-firing axis; playing back at least a first proportion of the first audio channel via the forward-firing axis; obtaining an indication of orientation of the playback device relative to the environment; and based at least in part on the orientation indication, modifying audio playback such that at least a second proportion of the first channel is played back via the side-firing axis rather than via the forward-firing axis.

[0251] Example 112. The method of any one of the Examples herein, further comprising: while playing back at least the first proportion of the first audio channel via the forward-firing axis, playing back none of the first audio channel via the side-firing axis; and while playing back at least the second proportion of the first audio channel via the side-firing axis, playing back none of the first audio channel via the forward-firing axis.

[0252] Example 113. The method of any one of the Examples herein, further comprising: while playing back at least the first proportion of the first audio channel via the forward-firing axis, playing back a third proportion of the first audio channel via the side-firing axis; and while playing back at least the second proportion of the first audio channel via the side-firing axis, playing back a fourth proportion of the first audio channel via the forward-firing axis, wherein the first proportion is greater than the fourth proportion, and the second proportion is greater than the third proportion.

[0253] Example 114. The method of any one of the Examples herein, further comprising modifying audio playback based at least in part on the orientation indication such that a third proportion of the first channel is output via the forward-firing axis, the third proportion being less than the first proportion.

[0254] Example 115. The method of any one of the Examples herein, further comprising modifying audio playback based at least in part on the orientation indication such that none of the first audio channel is output via the forward-firing axis.

[0255] Example 116. The method of any one of the Examples herein, wherein the playback device is a front left satellite playback device, the first channel is a front left channel, and the side-firing axis is a right side-firing axis, and wherein the indication of orientation reflects that the forward-firing axis is oriented forward of an intended listening location within the environment.

[0256] Example 117. The method of any one of the Examples herein, wherein the right side-firing axis is oriented nearer to the intended listening location than the forward-firing axis.

[0257] Example 118. The method of any one of the Examples herein, wherein the playback device is a front left satellite playback device, the first channel is a front left channel, and the side-firing axis is a left side-firing axis, and wherein the indication of orientation reflects that the forward-firing axis is oriented rearward of an intended listening location within the environment.

[0258] Example 119. The method of any one of the Examples herein, wherein the left side-firing axis is oriented nearer to the intended listening location than the forward-firing axis.

[0259] Example 120. The method of any one of the Examples herein, wherein obtaining an indication of orientation comprises determining an angular orientation of the playback device.

[0260] Example 121. The method of any one of the Examples herein, wherein obtaining an indication of orientation comprises receiving, via the network interface, an angular orientation of the playback device.

[0261] Example 122. One or more tangible, non-transitory computer-readable media storing instructions thereon that, when executed by one or more processors of a playback device configured to output audio along a plurality of sound axes including at least a forward-firing axis and a side-firing axis, cause the playback device to perform operations comprising: receiving multichannel audio content including a first audio channel; playing back at least a first proportion of the first audio channel via the forward-firing axis; obtaining an indication of orientation of the playback device relative to the environment; and based at least in part on the orientation indication, modifying audio playback such that at least a second proportion of the first channel is played back via the side-firing axis rather than via the forward-firing axis.

[0262] Example 123. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: while playing back at least the first proportion of the first audio channel via the forward-firing axis, playing back none of the first audio channel via the side-firing axis; and while playing back at least the second proportion of the first audio channel via the side-firing axis, playing back none of the first audio channel via the forward-firing axis.

[0263] Example 124. The computer-readable media of any one of the Examples herein, wherein the operations further comprise: while playing back at least the first proportion of the first audio channel via the forward-firing axis, playing back a third proportion of the first audio channel via the side-firing axis; and while playing back at least the second proportion of the first audio channel via the side-firing axis, playing back a fourth proportion of the first audio channel via the forward-firing axis, wherein the first proportion is greater than the fourth proportion, and the second proportion is greater than the third proportion.

[0264] Example 125. The computer-readable media of any one of the Examples herein, wherein the operations further comprise modifying audio playback based at least in part on the orientation indication such that a third proportion of the first channel is output via the forward-firing axis, the third proportion being less than the first proportion.

[0265] Example 126. The computer-readable media of any one of the Examples herein, wherein the operations further comprise modifying audio playback based at least in part on the orientation indication such that none of the first audio channel is output via the forward-firing axis.

[0266] Example 127. The computer-readable media of any one of the Examples herein, wherein the playback device is a front left satellite playback device, the first channel is a front left channel, and the side-firing axis is a right side-firing axis, and wherein the indication of orientation reflects that the forward-firing axis is oriented forward of an intended listening location within the environment.

[0267] Example 128. The computer-readable media of any one of the Examples herein, wherein the right side-firing axis is oriented nearer to the intended listening location than the forward-firing axis.

[0268] Example 129. The computer-readable media of any one of the Examples herein, wherein the playback device is a front left satellite playback device, the first channel is a front left channel, and the side-firing axis is a left side-firing axis, and wherein the indication of orientation reflects that the forward-firing axis is oriented rearward of an intended listening location within the environment.

[0269] Example 130. The computer-readable media of any one of the Examples herein, wherein the left side-firing axis is oriented nearer to the intended listening location than the forward-firing axis.

[0270] Example 131. The computer-readable media of any one of the Examples herein, wherein obtaining an indication of orientation comprises determining an angular orientation of the playback device.

[0271] Example 132. The computer-readable media of any one of the Examples herein, wherein obtaining an indication of orientation comprises receiving, via the network interface, an angular orientation of the playback device.

[0272] Example 133. A playback device comprising: a forward-firing transducer configured to direct sound along a first sound axis; an up-firing transducer configured to direct sound along a second sound axis that is vertically angled with respect to the first sound axis; a side-firing transducer or array configured to direct sound along a third axis that is horizontally angled with respect to the first sound axis; one or more processors; and data storage having instructions stored thereon that, when executed by the one or more processors, cause the playback device to perform operations comprising: receiving, at the playback device, audio input including a vertical content signal; playing back audio based on the vertical content signal via at least the up-firing transducer and the side-firing transducer or array; and playing back a null signal via the forward-firing transducer, wherein the null signal cancels out a portion of the played back vertical content signal along the first sound axis.

[0273] Example 134. The playback device of any one of the Examples herein, wherein the playback device is configured to perform the operations while in a first standalone playback mode, and wherein the playback device is further configured to perform second operations while in a second playback mode in which the playback device is bonded with a second playback device for synchronous playback, the second operations comprising: playing back the vertical content signal via at least the up-firing transducer; and playing back the null signal via at least the side-firing transducer or array, wherein the null signal cancels out the portion of the vertical content signal along the first sound axis.

[0274] Example 135. The playback device of any one of the Examples herein, wherein playing back the vertical content signal via at least the up-firing transducer comprises playing back audio based on the vertical content signal via the up-firing transducer and the forward-firing transducer.

[0275] Example 136. The playback device of any one of the Examples herein, wherein the null signal is restricted to a frequency band that includes 1 kHz, the frequency band having a bandwidth that is less than about 5.0 kHz, 4.5 kHz, 4.0 kHz, 3.5 kHz, 3.0 kHz, 2.5 kHz, 2.0 kHz, 1.5 kHz, 1.0 kHz, or 0.5 kHz.

[0276] Example 137. The playback device of any one of the Examples herein, wherein the null signal is restricted to a frequency band that includes 1 kHz, the frequency band having a bandwidth that is greater than about 5.0 kHz, 4.5 kHz, 4.0 kHz, 3.5 kHz, 3.0 kHz, 2.5 kHz, 2.0 kHz, 1.5 kHz, 1.0 kHz, or 0.5 kHz.

[0277] Example 138. The playback device of any one of the Examples herein, wherein the null signal is restricted to a frequency band that excludes frequencies below a lower threshold frequency, and wherein the lower threshold frequency is greater than about 200 Hz, 300 Hz, 400 Hz, 500 Hz, 600 Hz, 700 Hz, 800 Hz, 900 Hz, or 1 kHz.

[0278] Example 139. The playback device of any one of the Examples herein, wherein the null signal is restricted to a frequency band that excludes frequencies above an upper threshold frequency, and wherein the upper threshold frequency is less than about 1.0 kHz, 1.5 kHz, 2.0 kHz, 2.5 kHz, 3.0 kHz, 3.5 kHz, or 4.0 kHz.

[0279] Example 140. The playback device of any one of the Examples herein, wherein the null signal comprises the vertical content signal being phase-shifted such that the null signal destructively interferes with the portion of the audio played back based on the vertical content signal along the first sound axis.

[0280] Example 141. The playback device of any one of the Examples herein, wherein playing back the null signal via the forward-firing transducer is delayed with respect to playing back audio based on the vertical content signal via at least the up-firing transducer and the side-firing transducer or array.
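
Taken together, Examples 140 and 141, with the band restrictions of Examples 136-139, suggest that the null signal is a band-limited, delayed, phase-shifted copy of the vertical content signal. The sketch below approximates that chain in Python, with a simple polarity inversion standing in for the phase shift; the sample rate, filter order, band edges, and delay are all assumed values chosen only for illustration.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 48_000  # sample rate in Hz; assumed for illustration

def null_signal(vertical: np.ndarray,
                band_hz: tuple[float, float] = (500.0, 2_000.0),
                delay_samples: int = 24) -> np.ndarray:
    """Derive a forward-firing feed intended to cancel part of the
    vertical content along the first sound axis.

    Polarity inversion stands in for the phase shift of Example 140;
    a Butterworth band-pass around 1 kHz stands in for the band
    restrictions of Examples 136-139; the fixed delay mirrors
    Example 141.
    """
    b, a = butter(2, band_hz, btype="bandpass", fs=FS)
    band_limited = lfilter(b, a, vertical)
    inverted = -band_limited  # destructive interference on-axis
    delayed = np.concatenate([np.zeros(delay_samples), inverted])
    return delayed[: len(vertical)]

t = np.arange(FS) / FS
vertical = np.sin(2 * np.pi * 1_000.0 * t)  # toy vertical content signal
forward_feed = null_signal(vertical)        # fed to the forward-firing driver
```

In practice the delay and phase relationship would depend on the geometry of the sound axes; the fixed values here are placeholders only.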

[0281] Example 142. A method of playing back audio content comprising: receiving, at a playback device, audio input including a vertical content signal, wherein the playback device comprises a forward-firing transducer configured to direct sound along a first sound axis, an up-firing transducer configured to direct sound along a second sound axis that is vertically angled with respect to the first sound axis, and a side-firing transducer or array configured to direct sound along a third axis that is horizontally angled with respect to the first sound axis; playing back audio based on the vertical content signal via at least the up-firing transducer and the side-firing transducer or array; and playing back a null signal via the forward-firing transducer, wherein the null signal cancels out a portion of the audio played back based on the vertical content signal along the first sound axis.

[0282] Example 143. The method of any one of the Examples herein, wherein the preceding operations are performed while the playback device is in a first standalone playback mode, the method further comprising transitioning to a second playback mode in which the playback device is bonded with a second playback device for synchronous playback, and while in the second mode: playing back audio based on the vertical content signal via at least the up-firing transducer; and playing back the null signal via at least the side-firing transducer or array, wherein the null signal cancels out the portion of the vertical content signal along the first sound axis.

[0283] Example 144. The method of any one of the Examples herein, wherein playing back audio based on the vertical content signal via at least the up-firing transducer comprises playing back audio based on the vertical content signal via the up-firing transducer and the forward-firing transducer.

[0284] Example 145. The method of any one of the Examples herein, wherein the null signal is restricted to a frequency band that includes 1 kHz, the frequency band having a bandwidth that is less than about 5.0 kHz, 4.5 kHz, 4.0 kHz, 3.5 kHz, 3.0 kHz, 2.5 kHz, 2.0 kHz, 1.5 kHz, 1.0 kHz, or 0.5 kHz.

[0285] Example 146. The method of any one of the Examples herein, wherein the null signal is restricted to a frequency band that includes 1 kHz, the frequency band having a bandwidth that is greater than about 5.0 kHz, 4.5 kHz, 4.0 kHz, 3.5 kHz, 3.0 kHz, 2.5 kHz, 2.0 kHz, 1.5 kHz, 1.0 kHz, or 0.5 kHz.

[0286] Example 147. The method of any one of the Examples herein, wherein the null signal is restricted to a frequency band that excludes frequencies below a lower threshold frequency, and wherein the lower threshold frequency is greater than about 200 Hz, 300 Hz, 400 Hz, 500 Hz, 600 Hz, 700 Hz, 800 Hz, 900 Hz, or 1 kHz.

[0287] Example 148. The method of any one of the Examples herein, wherein the null signal is restricted to a frequency band that excludes frequencies above an upper threshold frequency, and wherein the upper threshold frequency is less than about 1.0 kHz, 1.5 kHz, 2.0 kHz, 2.5 kHz, 3.0 kHz, 3.5 kHz, or 4.0 kHz.

[0288] Example 149. The method of any one of the Examples herein, wherein the null signal comprises the vertical content signal being phase-shifted such that the null signal destructively interferes with the portion of the audio played back based on the vertical content signal along the first sound axis.

[0289] Example 150. The method of any one of the Examples herein, wherein playing back the null signal via the forward-firing transducer is delayed with respect to playing back audio based on the vertical content signal via at least the up-firing transducer and the side-firing transducer or array.

[0290] Example 151. One or more tangible, non-transitory computer-readable media storing instructions that, when executed by one or more processors of a playback device, cause the playback device to perform operations comprising: receiving, at the playback device, audio input including a vertical content signal; playing back audio based on the vertical content signal via at least an up-firing transducer and a side-firing transducer or array of the playback device; and playing back a null signal via a forward-firing transducer of the playback device, the forward-firing transducer being configured to direct sound along a first sound axis, wherein the null signal cancels out a portion of the audio played back based on the vertical content signal along the first sound axis.

[0291] Example 152. The computer-readable media of any one of the Examples herein, wherein the playback device is configured to perform the operations while in a first standalone playback mode, and wherein the playback device is further configured to perform second operations while in a second playback mode in which the playback device is bonded with a second playback device for synchronous playback, the second operations comprising: playing back audio based on the vertical content signal via at least the up-firing transducer; and playing back the null signal via at least the side-firing transducer or array, wherein the null signal cancels out the portion of the vertical content signal along the first sound axis.

[0292] Example 153. The computer-readable media of any one of the Examples herein, wherein playing back audio based on the vertical content signal via at least the up-firing transducer comprises playing back audio based on the vertical content signal via the up-firing transducer and the forward-firing transducer.

[0293] Example 154. The computer-readable media of any one of the Examples herein, wherein the null signal is restricted to a frequency band that includes 1 kHz, the frequency band having a bandwidth that is less than about 5.0 kHz, 4.5 kHz, 4.0 kHz, 3.5 kHz, 3.0 kHz, 2.5 kHz, 2.0 kHz, 1.5 kHz, 1.0 kHz, or 0.5 kHz.

[0294] Example 155. The computer-readable media of any one of the Examples herein, wherein the null signal is restricted to a frequency band that includes 1 kHz, the frequency band having a bandwidth that is greater than about 5.0 kHz, 4.5 kHz, 4.0 kHz, 3.5 kHz, 3.0 kHz, 2.5 kHz, 2.0 kHz, 1.5 kHz, 1.0 kHz, or 0.5 kHz.

[0295] Example 156. The computer-readable media of any one of the Examples herein, wherein the null signal is restricted to a frequency band that excludes frequencies below a lower threshold frequency, and wherein the lower threshold frequency is greater than about 200 Hz, 300 Hz, 400 Hz, 500 Hz, 600 Hz, 700 Hz, 800 Hz, 900 Hz, or 1 kHz.

[0296] Example 157. The computer-readable media of any one of the Examples herein, wherein the null signal is restricted to a frequency band that excludes frequencies above an upper threshold frequency, and wherein the upper threshold frequency is less than about 1.0 kHz, 1.5 kHz, 2.0 kHz, 2.5 kHz, 3.0 kHz, 3.5 kHz, or 4.0 kHz.

[0297] Example 158. The computer-readable media of any one of the Examples herein, wherein the null signal comprises the vertical content signal being phase-shifted such that the null signal destructively interferes with the portion of the audio played back based on the vertical content signal along the first sound axis.

[0298] Example 159. The computer-readable media of any one of the Examples herein, wherein playing back the null signal via the forward-firing transducer is delayed with respect to playing back audio based on the vertical content signal via at least the up-firing transducer and the side-firing transducer or array.