

Title:
TOUCH SENSITIVE AUDIO SURFACE
Document Type and Number:
WIPO Patent Application WO/2022/075976
Kind Code:
A1
Abstract:
Systems, methods, and devices for detecting a touch on an audio surface are disclosed. A method includes energizing one or more transducers among a plurality of transducers to cause vibration of a panel within a first range of frequencies. The panel extends in a plane and each of the plurality of transducers is coupled to the panel at a corresponding location in the plane. The method includes obtaining amplitude data representing an amplitude of panel vibration within the first range of frequencies at the respective locations of each of the transducers from which the amplitude data is obtained; determining, based on the amplitude data representing the amplitude of panel vibration at the location of each of the transducers, a location in the plane of a damping force applied to the panel; and generating an output signal based on the location of the damping force applied to the panel.

Inventors:
SHIN DONGEEK (US)
GUO JIAN (US)
PATEL SHWETAK NARAN (US)
Application Number:
PCT/US2020/054411
Publication Date:
April 14, 2022
Filing Date:
October 06, 2020
Assignee:
GOOGLE LLC (US)
International Classes:
H04R1/40; G06F3/043
Domestic Patent References:
WO2001048684A2 2001-07-05
WO2013075137A1 2013-05-23
Foreign References:
US20170235434A1 2017-08-17
Attorney, Agent or Firm:
DIETRICH, Allison W. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method comprising: energizing one or more transducers among a plurality of transducers to cause vibration of a panel within a first range of frequencies, wherein the panel extends in a plane and each of the plurality of transducers is coupled to the panel at a corresponding location in the plane; obtaining, from one or more of the plurality of transducers, amplitude data representing an amplitude of panel vibration within the first range of frequencies at the respective locations of each of the one or more transducers from which the amplitude data is obtained; determining, based on the amplitude data representing the amplitude of panel vibration within the first range of frequencies at the location of each of the plurality of transducers from which the amplitude data is obtained, a location in the plane of a damping force applied to the panel; and generating an output signal based on the location of the damping force applied to the panel.

2. The method of claim 1, wherein determining the location of the damping force comprises: providing, to a machine learning model, the amplitude data representing the amplitude of panel vibration within the first range of frequencies at the location of each of the plurality of transducers from which the amplitude data is obtained; and receiving an output from the machine learning model, the output indicating the location of the damping force.

3. The method of any one of claims 1-2, wherein the first range of frequencies comprises ultrasonic frequencies.

4. The method of any one of claims 1-3, wherein the transducers are arranged in an array in the plane.

5. The method of claim 4, wherein one of the transducers causing vibration of the panel is located in a center of the array of transducers.

6. The method of any one of claims 4-5, wherein: the array comprises a two-dimensional array, and determining the location of the damping force applied to the panel comprises determining a two-dimensional coordinate in the plane that corresponds to the location of the damping force.

7. The method of any one of claims 1-6, wherein the one or more transducers causing vibration of the panel is also one of the transducers obtaining the amplitude data.

8. The method of any one of claims 1-7, wherein the damping force is caused by a touch of a user’s body member on a surface of the panel.

9. The method of any one of claims 1-8, comprising: energizing at least one of the transducers to cause vibration of the panel within a second range of frequencies different from the first range of frequencies.

10. The method of claim 9, wherein the second range of frequencies comprises audible frequencies.

11. The method of any one of claims 1-10, wherein the first range of frequencies comprises ultrasound frequencies.

12. A device comprising: a device housing defining a space for accommodating one or more electronic components, the device housing comprising a panel having a surface facing away from the space; and an electronic control module accommodated in the space and coupled to the panel, the electronic control module being programmed to perform operations comprising: energizing one or more transducers among a plurality of transducers to cause vibration of the panel within a first range of frequencies, wherein the panel extends in a plane and each of the plurality of transducers is coupled to the panel at a corresponding location in the plane; obtaining, from one or more of the plurality of transducers, amplitude data representing an amplitude of panel vibration within the first range of frequencies at the respective locations of each of the one or more transducers from which the amplitude data is obtained; determining, based on the amplitude data representing the amplitude of panel vibration within the first range of frequencies at the location of each of the plurality of transducers from which the amplitude data is obtained, a location in the plane of a damping force applied to the panel; and generating an output signal based on the location of the damping force applied to the panel.

13. The device of claim 12, wherein the output signal causes the device to perform one or more actions.

14. The device of claim 13, wherein the one or more actions include at least one of moving an image displayed on the panel, updating a user interface displayed on the panel, or adjusting a characteristic of audio produced by the panel.

15. The device of any one of claims 12-14, wherein the panel comprises a glass material.

16. The device of any one of claims 12-15, wherein the panel is a curved panel.

17. The device of any one of claims 12-16, wherein the panel is a display panel.

18. A system comprising: a panel extending in a plane; a plurality of transducers, each of the plurality of transducers being coupled to the panel at a corresponding location in the plane; and an electronic control module configured to perform operations comprising: energizing one or more transducers among the plurality of transducers to cause vibration of the panel within a first range of frequencies; obtaining, from one or more of the plurality of transducers, amplitude data representing an amplitude of panel vibration within the first range of frequencies at the respective locations of each of the one or more transducers from which the amplitude data is obtained; determining, based on the amplitude data representing the amplitude of panel vibration within the first range of frequencies at the location of each of the plurality of transducers from which the amplitude data is obtained, a location in the plane of a damping force applied to the panel; and generating an output signal based on the location of the damping force applied to the panel.

19. The system of claim 18, wherein determining the location of the damping force comprises: providing, to a machine learning model, the amplitude data representing the amplitude of panel vibration within the first range of frequencies at the location of each of the plurality of transducers from which the amplitude data is obtained; and receiving an output from the machine learning model, the output indicating the location of the damping force.

20. The system of any one of claims 18-19, wherein the output signal causes the system to perform one or more actions including at least one of moving an image displayed on the panel, updating a user interface displayed on the panel, or adjusting a characteristic of audio produced by the panel.

Description:
TOUCH SENSITIVE AUDIO SURFACE

TECHNICAL FIELD

[0001] This disclosure generally relates to touch sensitive devices.

BACKGROUND

[0002] In general, touch sensitive panels can be used to detect a user’s touch on a surface of a device. Various consumer electronic devices such as laptops and smart phones contain touch sensitive surfaces to detect user input through a touch. Capacitive touch sensors are typically used for consumer touch applications.

SUMMARY

[0003] This disclosure features touch sensitive audio surfaces for electronic devices. The touch sensitive surface uses an array of transducers to detect a touch from a user while simultaneously functioning as an audio speaker.

[0004] Flat-panel audio technology reduces the need for component speakers near the bezel of consumer electronics such as televisions, flat panel speakers, smart phones, tablets, and laptops. Flat panel audio technology, e.g., panel audio loudspeakers, can use transducers coupled to a panel to apply fast, mechanical excitation to the panel to create sound waves. The transducers can also be used to sense touches on the panel without using a dedicated touch sensor. The panel may be a display panel or a panel with no display capabilities.

[0005] In a panel audio loudspeaker, or distributed mode loudspeaker, one or more small vibrational excitation sources are located below the surface of the panel, which typically includes a glass material. The vibrational excitation sources may include an array of electrical-to-mechanical transducers. Vibrating the transducers at frequencies provided by a source audio file causes the panel to vibrate. The panel generates pressure waves which travel through the air to a user’s ear. Thus the panel functions as a speaker to play music, speech, etc.

[0006] In general, a single transducer may be able to transmit and receive energy in a wide range of frequencies. For example, a single transducer may be able to transmit and receive ultrasonic energy while also producing vibrations within audible frequency ranges. The techniques described in this disclosure use transducers to detect vibration of a panel in one frequency range (e.g., an ultrasonic frequency range), while simultaneously producing vibrations in another (e.g., audible sound waves). Using ultrasonic frequencies to detect a touch location reduces interference with audio frequencies that may simultaneously be produced by the panel audio loudspeaker.

[0007] To vibrate the panel at ultrasonic frequencies, one or more transducers coupled to the panel may be activated to excite the panel within a programmed frequency range, e.g., using a pilot tone. Additional transducers coupled to the panel, and arranged in an array, may detect the vibration and measure the vibrational amplitude of the panel within the programmed frequency range. When a user touches the surface of the panel, the touch dampens the ultrasonic vibration of the panel near the touch. This changes the vibrational response of the panel as measured by the transducers.

[0008] Based on the change in the vibrational response, the panel audio system can determine the location of the touch on the panel. For example, the measured vibrational response of the panel can be provided to a neural network machine learning model that is trained to recognize vibrational patterns caused by damping forces applied at various locations of the panel. The neural network model can output an estimated coordinate location of the touch on the panel.
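
The neural-network inference step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the two-layer architecture, the 25-transducer (5x5) input size, the panel span of [-0.5, 0.5], and the weights (random placeholders, not trained) are all assumptions. A deployed model would be trained on amplitude maps labeled with known touch coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP sketch: 25 amplitude readings (5x5 transducer grid) -> (x, y).
# Weights are random placeholders standing in for a trained model.
W1 = rng.normal(scale=0.1, size=(25, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 2))
b2 = np.zeros(2)

PANEL_HALF_WIDTH = 0.5  # assumed: panel spans [-0.5, 0.5] in each axis

def estimate_touch(amplitudes):
    """Map a flattened 5x5 amplitude map to an (x, y) estimate in the plane."""
    h = np.tanh(amplitudes @ W1 + b1)
    # tanh bounds the raw output to (-1, 1); scaling keeps the estimate
    # inside the panel.
    return PANEL_HALF_WIDTH * np.tanh(h @ W2 + b2)

amplitude_map = rng.random(25)  # stand-in for measured vibration amplitudes
xy = estimate_touch(amplitude_map)
```

The scaled tanh output guarantees the estimated coordinate always lies within the panel, regardless of the input amplitudes.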

[0009] The technologies described can be implemented, for example, in a television. A user may watch a television program, with audio from the television program being played by the panel audio loudspeaker of the television. The user may touch the panel of the television, e.g., by touching, tapping, or swiping the panel with a finger or stylus. The transducers used to play the audio for the television can also be used to detect the user’s touch and the pattern of the touch (e.g., swipes, pinches, etc.). In response to detecting the user’s touch, the transducers may generate an output signal, e.g., a touch signal that causes the television to perform an action. For example, the television may start or stop the television program or adjust a characteristic of the audio based on the touch signal. In an example, a user may touch the panel of the television with a finger and swipe the finger in an upward direction along the panel. In response to detecting the touch of the user swiping upward along the panel, the television may increase the volume of audio being produced by the panel audio loudspeaker.

[0010] In general, in a first aspect, a method includes energizing one or more transducers among a plurality of transducers to cause vibration of a panel within a first range of frequencies. The panel extends in a plane and each of the plurality of transducers is coupled to the panel at a corresponding location in the plane. The method includes obtaining, from one or more of the plurality of transducers, amplitude data representing an amplitude of panel vibration within the first range of frequencies at the respective locations of each of the one or more transducers from which the amplitude data is obtained; determining, based on the amplitude data representing the amplitude of panel vibration within the first range of frequencies at the location of each of the plurality of transducers from which the amplitude data is obtained, a location in the plane of a damping force applied to the panel; and generating an output signal based on the location of the damping force applied to the panel.

[0011] The foregoing and other embodiments can each optionally include one or more of the following features, alone or in combination. In some implementations, determining the location of the damping force includes providing, to a machine learning model, the amplitude data representing the amplitude of panel vibration within the first range of frequencies at the location of each of the plurality of transducers from which the amplitude data is obtained; and receiving an output from the machine learning model, the output indicating the location of the damping force.

[0012] In some implementations, the first range of frequencies includes ultrasonic frequencies.

[0013] In some implementations, the transducers are arranged in an array in the plane.

[0014] In some implementations, one of the transducers causing vibration of the panel is located in a center of the array of transducers.

[0015] In some implementations, the array includes a two-dimensional array, and determining the location of the damping force applied to the panel includes determining a two-dimensional coordinate in the plane that corresponds to the location of the damping force.

[0016] In some implementations, the one or more transducers causing vibration of the panel is also one of the transducers obtaining the amplitude data.

[0017] In some implementations, the damping force is caused by a touch of a user’s body member on a surface of the panel.

[0018] In some implementations, the method includes energizing at least one of the transducers to cause vibration of the panel within a second range of frequencies different from the first range of frequencies.

[0019] In some implementations, the second range of frequencies includes audible frequencies.

[0020] In some implementations, the first range of frequencies includes ultrasound frequencies.

[0021] Another innovative aspect of the subject matter described in this specification can be embodied in a device including a device housing defining a space for accommodating one or more electronic components, the device housing including a panel having a surface facing away from the space; and an electronic control module accommodated in the space and coupled to the panel, the electronic control module being programmed to perform the actions of the methods.

[0022] The foregoing and other embodiments can each optionally include one or more of the following features, alone or in combination. In some implementations, the output signal causes the device to perform one or more actions. The one or more actions can include at least one of moving an image displayed on the panel, updating a user interface displayed on the panel, or adjusting a characteristic of audio produced by the panel.

[0023] In some implementations, the panel includes a glass material. In some implementations, the panel is a curved panel. In some implementations, the panel is a display panel.

[0024] Another innovative aspect of the subject matter described in this specification can be embodied in a system including a panel extending in a plane, a plurality of transducers, each of the plurality of transducers being coupled to the panel at a corresponding location in the plane; and an electronic control module configured to perform the actions of the methods.

[0025] Among other advantages, implementations of the device described herein may enable a reduction or removal of a bezel around the panel, as typically bezels are used to cover areas of a panel where speakers are located. The techniques described can also reduce the size and cost of a touch-sensitive panel by reducing or eliminating the need for a dedicated touch sensor, such as a capacitive sensor.

[0026] The details of one or more implementations of the subject matter of this disclosure are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0027] FIG. 1 shows an example computing device with a touch sensitive audio surface.

[0028] FIG. 2A illustrates a perspective view of a transmitting transducer coupled to a panel. FIG. 2B illustrates an overhead view of an excitation response of the panel caused by the transmitting transducer.

[0029] FIG. 2C illustrates a perspective view of a touch applied to the panel. FIG. 2D illustrates an overhead view of a damping response of the panel caused by the touch.

[0030] FIG. 2E illustrates a perspective view of the transmitting transducer coupled to the panel and the touch applied to the panel. FIG. 2F illustrates an overhead view of a combined response of the panel caused by the touch and the transmitting transducer.

[0031] FIG. 3A illustrates a perspective view of a touch sensitive panel with an array of receiving transducers.

[0032] FIG. 3B illustrates a perspective view of a grid of pixels corresponding to the array of receiving transducers.

[0033] FIGS. 4A-4C illustrate a process for determining an estimated position of a damping force based on measuring a combined response of a panel.

[0034] FIG. 5 is a flow chart of an example process for estimating a location of a touch on a touch sensitive audio surface.

[0035] FIG. 6 is a perspective view of an embodiment of a mobile device.

[0036] FIG. 7 is a schematic cross-sectional view of the mobile device of FIG. 6.

[0037] FIG. 8 is a schematic diagram of an embodiment of an electronic control module for a mobile device.

[0038] Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0039] FIG. 1 shows an example computing device 100 with a touch sensitive surface 120. In this example, the computing device 100 is a mobile telephone and the touch sensitive surface 120 is a display panel of the computing device 100. The touch sensitive surface 120 can sense contact by a user. For example, FIG. 1 illustrates an individual contacting the touch sensitive surface 120 with finger 110. The touch sensitive surface 120 can be configured to receive a touch input and generate an output signal, e.g., a touch signal, based on the location of the touch. The touch sensitive surface 120 can also generate the touch signal based on other characteristics of the touch, e.g., an amount of pressure of the touch, a number of touches, a path of a touch swipe, etc. An electronic control module of the computing device 100 can be configured to perform an action based on the touch signal. For example, based on the touch signal, the electronic control module may perform an action such as illuminating the display, opening an application, or scrolling the content displayed on the computing device 100.

[0001] Though the example touch sensitive surface 120 is shown as a display panel, in some examples the touch sensitive surface 120 might not have display capabilities. For example, the touch sensitive surface 120 can be a panel of a smart flat panel speaker that may or may not have a display. Based on the touch signal, the electronic control module may perform an action such as starting or stopping music or adjusting the volume of sound produced by the speaker.

[0040] While the computing device 100 includes a single touch sensitive surface 120, in general, the computing device 100 may have multiple touch sensitive surfaces. For example, a top half of the computing device may have a first touch sensitive surface and the bottom half of the computing device may have a second touch sensitive surface. As another example, the rear side of the computing device may have a first touch sensitive surface and the side surfaces of the computing device may have a second touch sensitive surface. While the computing device 100 includes a flat touch sensitive surface 120, in some examples the touch sensitive surface 120 may be curved.

[0041] FIG. 2A illustrates a perspective view of a transmitting transducer 210 coupled to a panel 202. The panel 202 may be, for example, a touch sensitive surface of a computing device such as the touch sensitive surface 120 of the computing device 100. The panel 202 extends in a plane, e.g., an x-y plane. The transmitting transducer 210 is coupled to the panel 202 at a first location in the plane. The transmitting transducer 210 is coupled to the panel underneath the panel, e.g., on a surface of the panel that is opposite the surface that receives a touch. For example, the transmitting transducer 210 may be coupled to a surface of the panel that faces an interior of a device housing.

[0042] In the example of FIG. 2A, the transmitting transducer 210 is coupled to the panel 202 at a location near the center of the panel 202. However, the transmitting transducer 210 can be positioned at other locations. For example, the transmitting transducer may be located at a position between the center of the panel 202 and an edge of the panel 202. In some examples, more than one transmitting transducer 210 may be coupled to the panel, each located at a different respective location.

[0043] The example panel 202 is flat and square-shaped. In some examples, the panel may be curved. In some examples, the panel may be formed in shapes other than a square. For example, the panel may be shaped as a rectangle, rounded rectangle, circle, ellipse, parallelogram, trapezoid, or any other appropriate shape. The transducers coupled to the panel 202 can be used to produce audio sounds and to detect touches on the panel 202.

[0044] FIG. 5 is a flow chart of an example process 500 for estimating a location of a touch on a touch sensitive audio surface such as the panel 202. The process 500 can be performed by an electronic control module of a touch sensitive audio system, e.g., the electronic control module of the computing device 100. The electronic control module can be in electrical communication with the transducers of the panel audio system.

[0045] The example process 500 includes energizing one or more transducers among a plurality of transducers to cause vibration of a panel within a first range of frequencies (510). The panel may extend in a plane, with each of the plurality of transducers being coupled to the panel at a corresponding location in the plane. For example, referring back to FIG. 2A, the transmitting transducer 210 energizes to cause vibration of the panel 202. The vibration of the panel 202 displaces the panel in a direction perpendicular to the plane of the panel. In the example shown in FIG. 2A, the vibration of the panel 202 causes displacement of the panel in a z-direction, perpendicular to the x-y plane of the panel 202.

[0046] The transmitting transducer 210 causes vibration of the panel 202 within a programmed first range of frequencies. In some examples, the first range of frequencies is an ultrasonic range of frequencies. For example, the panel may vibrate within a range of frequencies around 20 kHz.

[0047] By vibrating at frequencies in an ultrasonic frequency range, interference with audio produced by the panel can be reduced. For example, the panel may produce audible sound waves, e.g., below 10 kHz. The transmitting transducer 210 can vibrate the panel at frequencies around 20 kHz while the panel is producing the audible sound waves, without interfering with the production of the audible sound waves. In this way, the panel can produce audible sounds, such as music, while also detecting a touch on the panel using ultrasonic frequencies generated by the transmitting transducer 210.
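
The coexistence of audible content and an ultrasonic sensing tone on the same drive signal can be sketched numerically. The 48 kHz sample rate, 440 Hz audio tone, 20 kHz pilot frequency, and amplitudes below are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

FS = 48_000          # assumed sample rate, Hz
PILOT_HZ = 20_000    # ultrasonic pilot tone for touch sensing (assumed)
AUDIO_HZ = 440       # example audible content

t = np.arange(FS) / FS  # one second of samples

audio = 0.8 * np.sin(2 * np.pi * AUDIO_HZ * t)   # audible program material
pilot = 0.05 * np.sin(2 * np.pi * PILOT_HZ * t)  # low-level sensing tone

# The same transducer drive signal carries both: the panel plays the audio
# while the pilot excites it in the ultrasonic band used for touch sensing.
drive = audio + pilot
```

Because the pilot sits well above the audible band, it can be kept at low amplitude and separated out again on the receive side by frequency-domain filtering.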

[0048] In order to detect the vibrational amplitude of the panel within the first range of frequencies, e.g., the range of ultrasonic frequencies, the system may buffer the signals received at the receiving transducers and take the fast Fourier transform to measure the net energy in the first frequency range. For example, the fast Fourier transform can be used to measure the net energy of the various locations of the panel in the frequency range around 20 kHz. In this way, the energy or vibrational amplitude of the panel can be measured in the frequency range of interest, while minimizing interference by audible sound such as music or speech.
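
The band-energy measurement described above can be sketched as follows. The 19-21 kHz band, sample rate, and synthetic signals (standing in for buffered transducer data) are assumptions for illustration; the damped case simply models the pilot arriving at a receiving transducer with reduced amplitude.

```python
import numpy as np

FS = 48_000              # assumed sample rate, Hz
PILOT_HZ = 20_000        # assumed ultrasonic pilot frequency
BAND = (19_000, 21_000)  # assumed first frequency range of interest

def band_energy(buffered, fs=FS, band=BAND):
    """Net energy of a buffered transducer signal inside `band`, via the FFT."""
    spectrum = np.fft.rfft(buffered)
    freqs = np.fft.rfftfreq(len(buffered), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.sum(np.abs(spectrum[mask]) ** 2)

t = np.arange(FS) / FS
music = 0.8 * np.sin(2 * np.pi * 440 * t)  # audible content, outside the band

# Untouched panel: full-strength pilot at the receiver. Touched panel: the
# damping force reduces the pilot amplitude seen at a nearby transducer.
untouched = music + 0.05 * np.sin(2 * np.pi * PILOT_HZ * t)
touched = music + 0.02 * np.sin(2 * np.pi * PILOT_HZ * t)

drop = band_energy(touched) / band_energy(untouched)  # ≈ (0.02/0.05)^2 = 0.16
```

Restricting the energy sum to the ultrasonic band makes the measurement insensitive to the music at 440 Hz, which is the point of using a pilot tone outside the audible range.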

[0049] FIG. 2B illustrates an overhead view of an excitation response of the panel 202 caused by the transmitting transducer 210. The shaded regions of FIG. 2B illustrate the vibrational amplitudes of the panel 202. As shown in legend 200, darker shaded areas represent higher vibrational amplitudes, e.g., areas of the panel with greater displacement in the z-direction, while lighter shaded areas represent lower vibrational amplitudes. FIG. 2B shows the vibrational amplitude pattern of the panel 202 when no damping forces are affecting vibration of the panel 202, e.g., when no touch is being applied to the surface of the panel 202.

[0050] As illustrated in FIG. 2B, the vibrational strength, or amplitude, of the panel surface is highest near the location of the transducer 210, e.g., near center region 212 of the panel 202. The vibrational amplitude of the panel surface decays radially outward from the location of the transducer 210. The vibrational amplitude of the panel can be modeled using an isotropic Gaussian vibrational model, as shown in Equation 1.

S(p) = exp(-||p - p_source||^2 / σ_source)     Equation 1

[0051] In Equation 1, S(p) represents an excitation response, e.g., a vibrational amplitude, at a variable point p in the plane of the panel 202 due to excitation by a source, e.g., the transmitting transducer 210. The term p_source represents a two-dimensional coordinate point of the transmitting transducer 210. The term σ_source is a scaling factor that determines a radial fall-off rate of the vibrational amplitude. The term ||p - p_source|| is the Euclidean distance between the point p and the point p_source, e.g., the length of a line segment connecting point p and point p_source. In some examples, such as in FIGS. 2A and 2B, the source may be located at the center of the panel, and the origin of the coordinate system may also be defined as the center of the panel. In these examples, p_source is located at an x-y coordinate of (0,0). As can be seen in Equation 1, a shorter distance ||p - p_source|| between the transmitting transducer and the point p results in a greater vibrational amplitude at point p. A greater distance ||p - p_source|| results in a lesser vibrational amplitude at point p.
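
The isotropic Gaussian excitation model, S(p) = exp(-||p - p_source||^2 / σ_source), can be evaluated directly. The fall-off scale σ_source = 0.1 and the panel coordinates are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

SIGMA_SOURCE = 0.1  # assumed radial fall-off scale (panel units)

def excitation_response(p, p_source=(0.0, 0.0), sigma=SIGMA_SOURCE):
    """Equation 1: isotropic Gaussian excitation amplitude at point p."""
    p = np.asarray(p, dtype=float)
    src = np.asarray(p_source, dtype=float)
    return np.exp(-np.sum((p - src) ** 2) / sigma)

# Amplitude is maximal at the source and decays radially outward.
at_source = excitation_response((0.0, 0.0))  # exp(0) = 1.0
nearby = excitation_response((0.1, 0.0))     # exp(-0.01/0.1) ≈ 0.905
far = excitation_response((0.4, 0.3))        # exp(-0.25/0.1) ≈ 0.082
```

The monotone radial decay matches the shading pattern of FIG. 2B, with the maximum at the transducer location.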

[0052] FIG. 2C illustrates a perspective view of a touch applied to the panel. The touch is applied by a finger 250 to a touch location 220 on the panel 202. The touch results in a damping force being applied to the panel 202 at the touch location 220. A damping force has an effect of reducing or restricting oscillations of the panel 202. For example, a damping force applied to the panel can reduce the magnitude of displacement of the panel 202 in the z-direction.

[0053] FIG. 2D illustrates an overhead view of a damping response of the panel caused by the touch. The damping force caused by the touch reduces vibrational amplitude near the touch location 220. The damping strength is high around the touch location 220 and decays radially. Thus, as illustrated in FIG. 2D, when the finger 250 touches the panel 202 at the touch location 220, the vibrational amplitude is lesser near the touch location 220 and increases radially outward from the touch location 220. The damping response of the panel can be modeled using a complementary isotropic Gaussian model, as shown in Equation 2.

D(p) = 1 - exp(-||p - p_touch||^2 / σ_touch)     Equation 2

[0054] In Equation 2, D(p) represents a damping response at a variable point p in the plane of the panel 202 due to damping caused by a touch or press by the finger 250. The term p_touch represents a two-dimensional coordinate point of the touch location 220. The term σ_touch is a scaling factor that determines a radial fall-off rate of the damping force. The term ||p - p_touch|| is the Euclidean distance between the point p and the point p_touch, e.g., the length of a line segment connecting point p and point p_touch. As can be seen in Equation 2, a shorter distance ||p - p_touch|| between the touch location and the point p results in a greater damping force at point p, and therefore a reduced vibrational amplitude. A greater distance ||p - p_touch|| results in a lesser damping force at point p, and therefore a greater vibrational amplitude.
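
The complementary isotropic Gaussian damping model described in paragraph [0054], of the form 1 - exp(-||p - p_touch||^2 / σ_touch), can likewise be evaluated directly. The scale σ_touch = 0.1 and the coordinates are illustrative assumptions.

```python
import numpy as np

SIGMA_TOUCH = 0.1  # assumed radial fall-off scale of the damping (panel units)

def damping_response(p, p_touch, sigma=SIGMA_TOUCH):
    """Equation 2: complementary isotropic Gaussian damping at point p.

    Returns a multiplier in [0, 1): 0 at the touch point (fully damped),
    approaching 1 far from the touch (no damping).
    """
    p = np.asarray(p, dtype=float)
    touch = np.asarray(p_touch, dtype=float)
    return 1.0 - np.exp(-np.sum((p - touch) ** 2) / sigma)

touch = (0.2, 0.0)
at_touch = damping_response(touch, touch)    # 0.0: vibration fully damped
far = damping_response((-0.4, -0.4), touch)  # close to 1.0: barely damped
```

Treating D(p) as a multiplier on the excitation amplitude reproduces the behavior of FIG. 2D: amplitude suppressed at the touch and recovering radially outward.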

[0055] FIG. 2E illustrates a perspective view of the transmitting transducer coupled to the panel and the touch applied to the panel. The transducer 210 vibrates the panel 202 at ultrasonic frequencies, while the finger 250 touches the panel 202 at the touch location 220.

[0056] FIG. 2F illustrates an overhead view, or map, of a combined response of the panel caused by both the touch and the transmitting transducer. The combined response can be modeled as an element-wise product Gaussian model, as shown in Equation 3.

C(p) = S(p) · D(p)     Equation 3

[0057] In Equation 3, C(p) represents a combined vibrational response of the panel 202 at a variable point p in the plane of the panel 202 due to excitation by a source as well as damping caused by a touch or press. The combined vibrational response C(p) is a product of the excitation response S(p) caused by the source, and the damping response D(p) caused by the touch.
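
The product model can be sampled on a grid to show how a touch shifts the amplitude peak away from the source, as in FIG. 2F. The grid resolution, the common fall-off scale σ = 0.1, and the source and touch positions below are illustrative assumptions.

```python
import numpy as np

SIGMA = 0.1  # assumed fall-off scale for both source and touch (panel units)

def combined_response(px, py, p_source, p_touch, sigma=SIGMA):
    """Equation 3: C(p) = S(p) * D(p), evaluated element-wise on a grid."""
    s = np.exp(-((px - p_source[0]) ** 2 + (py - p_source[1]) ** 2) / sigma)
    d = 1.0 - np.exp(-((px - p_touch[0]) ** 2 + (py - p_touch[1]) ** 2) / sigma)
    return s * d

# Sample the panel on a 41x41 grid spanning [-0.5, 0.5] in each direction.
coords = np.linspace(-0.5, 0.5, 41)
px, py = np.meshgrid(coords, coords)

source = (0.0, 0.0)  # transmitting transducer at the panel center
touch = (0.2, 0.0)   # hypothetical touch location

c = combined_response(px, py, source, touch)
iy, ix = np.unravel_index(np.argmax(c), c.shape)
peak = (coords[ix], coords[iy])
# The damping suppresses amplitude near the touch, so the maximum is no
# longer at the source (0, 0) but displaced to the opposite side.
```

With the touch on the positive x side, the amplitude peak lands at negative x, illustrating the off-center maximum that the localization step exploits.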

[0058] As illustrated in FIG. 2F, the damping force applied at the touch location 220 reduces amplitudes near the touch location 220. The combined response C(p) increases non-linearly outward from the touch location 220. The damping force also causes a shift in the panel location of the maximum amplitude. For example, in FIG. 2B, the panel location of maximum amplitude, shaded black, is near the center region 212 of the panel, e.g., the location of the panel where the transmitting transducer 210 is coupled to the panel 202. In contrast, in FIG. 2F, the panel location of maximum amplitude is an off-center location 222. The panel location of maximum amplitude shifted away from the center region 212 due to the damping force being applied at the touch location 220.

[0059] FIG. 3A illustrates a perspective view of a touch sensitive panel with an array 300 of receiving transducers 310. The receiving transducers 310 are arranged in the array 300 in the plane of the panel 202. The receiving transducers 310 are mechanically coupled to a surface of the panel 202, e.g., underneath the panel 202.

[0060] In the example of FIG. 3A, the array 300 includes a five-by-five square array of transducers 310. However, the array 300 can include any number of transducers 310 and can be arranged in various shapes and patterns. For example, the array 300 may include an array with dimensions of, e.g., eight-by-eight, ten-by-ten, eight-by-twelve, or ten-by-twenty transducers. In some examples, the array 300 can have a circular shape, a triangular shape, a trapezoidal shape, or a parallelogram shape.

[0061] In the example of FIG. 3A, the transmitting transducer 210 is located at the center of the array 300. However, the transmitting transducer 210 can be located at other positions of the array. For example, the transmitting transducer 210 can be located at a position of the array that is between the center of the array 300 and an edge of the array. In another example, the transmitting transducer 210 can be located at an edge position of the array 300.

[0062] Referring to FIG. 5, the example process 500 includes obtaining, from one or more of the plurality of transducers, amplitude data representing an amplitude of panel vibration within the first range of frequencies at the respective locations of each of the one or more transducers from which the amplitude data is obtained (512). For example, the receiving transducers 310 illustrated in FIG. 3A can each function as a microphone that detects vibrations. The receiving transducers 310 detect and measure the amplitude of vibration of the panel 202 within the first range of frequencies produced by the transmitting transducer 210. Each receiving transducer 310 outputs data representing the amplitude of the vibration of the panel 202 at the location of the transducer.

[0063] In some examples, some or all of the transducers of the array 300 can function as both transmitting transducers and receiving transducers. For example, the transmitting transducer 210 may also function as a receiving transducer 310, and some or all of the receiving transducers 310 may also function as a transmitting transducer. That is, the transmitting transducer 210 and the receiving transducers 310 may be a same type of transducer, but may perform different functions.

[0064] Various types of transducers may be used in the array 300. An example transducer of the array 300 may be a single part transducer, e.g., a piezoelectric transducer that is capable of transmitting and receiving at the same time. In another example, a transducer of the array 300 may be a dual part transducer including a speaker component and a separate microphone component. The speaker component can be, for example, a coil driven or piezoelectric panel speaker. The microphone component can be, for example, a MEMS or ECM microphone. The speaker component can be used to transmit acoustic energy, and the microphone component can be used to receive acoustic energy.

[0065] In some examples, some or all of the transducers of the array 300 may be capable of only transmitting or only receiving. For example, some transducers of the array 300 may be coil driven or piezoelectric driven panel speakers for transmitting acoustic energy. Other transducers of the array 300 may be MEMS or ECM microphones for receiving acoustic energy.

[0066] In some examples, some or all of the transducers can function as audio speakers, producing audible sound. For example, some or all of the receiving transducers 310 can be energized to cause vibration of the panel within a second range of frequencies that is different from the first range of frequencies. The second range of frequencies may include audible frequencies. In some examples, the transmitting transducer 210 may cause vibration in a second range of audible frequencies as well as in the first range of ultrasonic frequencies.

[0067] FIG. 3B illustrates a perspective view of a grid 302 of pixels corresponding to the array of receiving transducers. The grid extends in the plane of the panel in directions x and y. The grid includes one pixel per transducer coupled to the panel. Each pixel corresponds to a location of an individual receiving transducer.

[0068] FIGS. 4A-4C illustrate a process for determining an estimated position of a damping force based on measuring a combined response of a panel. FIG. 4A shows a response map 430, which is similar to the response shown in FIG. 2B. The response map 430 illustrates the vibrational response of the panel 202 caused only by the excitation by the transmitting transducer 210. That is, the response map 430 shows the vibrational response of the panel 202 without any touch applied to the panel 202.

[0069] As the receiving transducers 310 detect and measure vibration of the panel within the first range of frequencies produced by the transmitting transducer 210, the output of the receiving transducers 310 can be used to produce a sub-sampled image 440 of the response of the panel. The sub-sampled image 440 illustrates the amplitude data detected by each of the receiving transducers of the array 300. The amplitude data includes the amplitude of panel vibration within the first range of frequencies at the location of each of the receiving transducers 310.

[0070] To produce the sub-sampled image 440, the amplitude measured by each receiving transducer 310 can be mapped to a pixel location defined by the grid 302. For example, the amplitude data output from the receiving transducer 306 is mapped to the corresponding pixel 316 of the grid 302. The amplitude measured by the receiving transducer 306 is thus represented by the shade of the pixel 316.
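A minimal sketch of this mapping step, assuming the receiving transducers report amplitudes in row-major order over a five-by-five array (the ordering and values are assumptions for illustration):

```python
def subsampled_image(amplitudes, rows=5, cols=5):
    """Arrange per-transducer amplitude readings into a pixel grid,
    one pixel per receiving transducer."""
    assert len(amplitudes) == rows * cols
    return [amplitudes[r * cols:(r + 1) * cols] for r in range(rows)]

def peak_pixel(image):
    """Grid location of the maximum amplitude; this location shifts when
    a touch damps the panel near the transmitting transducer."""
    return max(((v, (r, c)) for r, row in enumerate(image)
                for c, v in enumerate(row)))[1]

# An untouched panel whose maximum sits at the centre transducer:
img = subsampled_image([0.1] * 12 + [0.9] + [0.1] * 12)
print(peak_pixel(img))  # (2, 2) - the centre pixel of the 5x5 grid
```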

[0071] In the example sub-sampled image 440, the amplitude of vibration measured by each receiving transducer is represented by a shade of the corresponding grid segment, or pixel, of the grid 302. Darker shaded pixels represent higher vibrational amplitudes, while lighter shade areas represent lower vibrational amplitudes.

[0072] FIG. 4B shows a combined response map 470, which is similar to the combined response shown in FIG. 2F. The combined response map 470 illustrates the combined vibrational response of the panel 202 caused by both the touch at touch location 220 and the excitation by the transmitting transducer 210. As described with reference to FIG. 2F, the touch by the finger 250 at the touch location 220 causes a damping force to be applied to the surface of the panel 202. The damping force reduces amplitudes near the touch location 220, and shifts the panel location with the maximum amplitude away from the location of the transmitting transducer.

[0073] As the receiving transducers 310 detect and measure vibration of the panel within the range of frequencies produced by the transmitting transducer 210, the output of the receiving transducers 310 can be used to produce a sub-sampled image 480 of the combined response of the panel. The sub-sampled image 480 illustrates the amplitude data detected by each of the receiving transducers of the array 300. The amplitude data includes the amplitude of panel vibration within the range of frequencies at the location of each of the receiving transducers 310.

[0074] To produce the sub-sampled image 480, the amplitude measured by each receiving transducer 310 can be mapped to a pixel location defined by the grid 302. For example, the amplitude data output from the receiving transducer 306 is mapped to the corresponding pixel 316 of the grid 302. The amplitude measured by the receiving transducer 306 is thus represented by the shade of the pixel 316.

[0075] In the example sub-sampled image 480, the amplitude of vibration measured by each receiving transducer is represented by a shade of the corresponding grid segment, or pixel, of the grid 302. Darker shaded pixels represent higher vibrational amplitudes, while lighter shade areas represent lower vibrational amplitudes.

[0076] Referring to FIG. 5, the example process 500 includes determining, based on the amplitude data representing the amplitude of panel vibration within the first range of frequencies at the location of each of the plurality of transducers from which the amplitude data is obtained, a location of a damping force applied to the panel (514). The location may be a location in the plane in which the panel extends. The amplitude data, as illustrated in the sub-sampled image 440, is output to a mapping model 450. The mapping model 450 determines an estimated location of the damping force, if any, that is applied to the panel based on the amplitude data 340. The mapping model 450 is described in greater detail with reference to FIG. 4C.

[0077] FIG. 4A illustrates an output 460 from the mapping model 450 when no touch is applied to the panel 202. Since the sub-sampled image 440 indicates no distortion in the isotropic vibration of the panel, the output 460 includes no estimated touch position. The vibration of the panel within the range of frequencies of the transmitting transducer is caused solely by the excitation of the transmitting transducer. Thus, the mapping model 450 determines that no touch is applied to the panel.

[0078] FIG. 4B illustrates an output 490 from the mapping model 450 when a touch is applied to the panel 202. The output 490 includes an estimated touch location 465 on the panel 202. The estimated touch location 465 is determined by the mapping model 450 based on the amplitude data, as represented by the sub-sampled image 480. The estimated touch location 465 is an estimated position of the damping force applied by the finger 250 to the panel 202. In some examples, the estimated touch location 465 can be defined by a two-dimensional coordinate location, e.g., an x-y coordinate location mapped onto the panel 202.

[0079] An estimation error can be defined as a measurement between the estimated touch location 465 and the touch location 220. In some examples, the error may be measured as a scalar distance between the estimated touch location 465 and the touch location 220. In some examples, the error may be measured as a vector between the touch location 220 and the estimated touch location 465. In some examples, the error may be measured as an offset in the x-direction and an offset in the y-direction between the touch location 220 and the estimated touch location 465. The touch-sensitive audio system can accurately detect the location of the touch. For example, for a square audio surface of dimensions 30 cm by 30 cm, the error measured as a scalar distance between the estimated touch location and the touch location may be six millimeters or less.
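The error measures above can be sketched as follows; the specific coordinates are hypothetical and assume panel locations expressed in metres:

```python
import math

def estimation_error(estimated, actual):
    """Three error measures between an estimated and an actual 2-D touch
    location: scalar distance, signed vector, and per-axis offsets."""
    dx, dy = estimated[0] - actual[0], estimated[1] - actual[1]
    return {
        "scalar": math.hypot(dx, dy),    # straight-line distance
        "vector": (dx, dy),              # signed offset vector
        "offsets": {"x": dx, "y": dy},   # per-axis offsets
    }

# A 4 mm miss on a panel whose coordinates are in metres:
err = estimation_error((0.104, 0.200), (0.100, 0.200))
print(err["scalar"] <= 0.006)  # True - within the six-millimeter figure
```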

[0080] Although FIG. 4B shows a process for estimating a single touch location 465, the process illustrated can also be used to estimate a touch path on the panel 202. For example, a user may swipe a path along the surface of the panel. The receiving transducers 310 can measure time-varying vibrational amplitude of the panel during the swipe. The receiving transducers 310 then output the time-varying amplitude, thereby generating a series of sub-sampled images. Each sub-sampled image can be provided to the mapping model 450. The mapping model 450 can then output a series of estimated touch locations. Thus, the output of the mapping model 450 can include a path of the swipe on the panel. The output of the mapping model 450 can include the direction and speed of the swipe.
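A hedged sketch of summarizing a swipe from a series of timestamped touch estimates; the sample values, timestamps, and units are assumptions for illustration:

```python
import math

def swipe_summary(samples):
    """Summarize a swipe from timestamped touch estimates.
    `samples` is a list of (t_seconds, x, y) tuples, one per estimated
    touch location output by the mapping model."""
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    dx, dy = x1 - x0, y1 - y0
    return {
        "path": [(x, y) for _, x, y in samples],
        "direction": math.degrees(math.atan2(dy, dx)),  # 0 deg = +x axis
        "speed": math.hypot(dx, dy) / (t1 - t0),        # panel units per second
    }

# A steady left-to-right swipe sampled three times:
s = swipe_summary([(0.00, 0.1, 0.5), (0.05, 0.2, 0.5), (0.10, 0.3, 0.5)])
print(round(s["speed"], 3), s["direction"])  # 2.0 units/s along +x (0.0 deg)
```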

[0081] Although FIG. 4B shows a process for generating a single estimated touch location 465, the process illustrated can also be used to estimate multiple touches on the panel 202. For example, the user may touch the panel 202 with two fingers, each finger at a different location on the panel. The mapping model 450 can therefore output two estimated touch locations, each estimated touch location corresponding to one of the two fingers.

[0082] FIG. 4C illustrates an example process for estimating a location of the touch based on the measured combined response as represented by the sub-sampled image 480. The process can be performed by a mapping model, e.g., the mapping model 450.

[0083] In some examples, the mapping model 450 can include an inverse mapping function. The mapping model 450 can use the sub-sampled dataset, e.g., as illustrated in the sub-sampled image 480, to estimate the location of the touch. The mapping model 450 can output coordinates, e.g., x-y coordinates, indicating the estimated location of the touch on the panel.

[0084] The mapping model 450 can be a machine learning model. For example, the mapping model 450 can be a convolutional neural network model that is customized to infer coordinates of a touch on a panel based on vibrational amplitude data. In some examples, the mapping model 450 may be described by approximately ten thousand parameters. The panel audio system can provide, to the machine learning model, the data representing the amplitude of panel vibration within the range of frequencies at the location of each of the one or more receiving transducers. The machine learning model can then generate an output indicating the estimated location of the damping force.

[0085] In some examples, the input to the mapping model 450 can include the sub-sampled image 480. For example, the input to the mapping model 450 can include data indicating the amplitude of panel vibration within the range of frequencies of the transmitting transducer, as measured by each receiving transducer of the array 300. The data can include the measured amplitude data, mapped to the pixel or grid location of the corresponding receiving transducer that measured the amplitude data.

[0086] The mapping model includes a normalizer 412. The normalizer 412 normalizes the amplitude data to make the amplitude data absolute-amplitude invariant. For example, the strength of a touch can cause the damping response D(p) to be altered by a scalar factor. The normalizer 412 normalizes the amplitude data to reduce effects of the strength of the touch. For example, the normalizer 412 may apply an L1 normalization on the amplitude data.

[0087] The normalizer 412 outputs the normalized amplitude data to a convolution layer 414. The convolution layer 414 applies a filter to the amplitude data and generates a feature map of the amplitude data. The feature map summarizes the presence of features in the amplitude data and predicts the class to which each feature belongs.
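A minimal sketch of the L1 normalization step, assuming the amplitude data arrives as a flat list of readings; a harder press scales every reading by roughly the same factor, which the normalization removes:

```python
def l1_normalize(amplitudes, eps=1e-12):
    """Scale readings so their absolute values sum to one, making the
    sub-sampled image invariant to the overall strength of the touch."""
    total = sum(abs(a) for a in amplitudes) + eps
    return [a / total for a in amplitudes]

light = [0.2, 0.6, 0.2]
firm = [a * 5.0 for a in light]  # same touch pattern, pressed harder
print(all(abs(u - v) < 1e-9
          for u, v in zip(l1_normalize(light), l1_normalize(firm))))  # True
```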

[0088] The convolution layer 414 outputs the feature maps to a pooling layer 416. The pooling layer 416 downsamples the feature maps, e.g., using average pooling or maximum pooling. The pooling layer 416 operates on each feature map, reducing the size of the feature map. In some examples, the mapping model 450 may use several rounds of convolution and pooling.

[0089] The pooling layer 416 outputs a pooled feature map to a fully-connected layer 418. In some examples, the mapping model 450 can include two fully-connected layers. The fully-connected layer 418, or layers, flattens the pooled feature map into a vector and applies weights to predict the correct output.

[0090] The fully-connected layer 418 outputs the vector to the output layer 420. The output layer generates an output. For example, the output layer 420 may output a two-dimensional coordinate location (x,y) of an estimated touch location 465 on the panel.
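The normalizer-convolution-pooling-fully-connected pipeline described above can be sketched as a toy forward pass; the hand-set kernel, weights, and layer sizes below are assumptions for illustration and do not reflect the model's actual (approximately ten thousand) learned parameters:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w)] for i in range(h)]

def avg_pool2(image):
    """2x2 average pooling with stride 2."""
    return [[(image[i][j] + image[i][j + 1]
              + image[i + 1][j] + image[i + 1][j + 1]) / 4.0
             for j in range(0, len(image[0]) - 1, 2)]
            for i in range(0, len(image) - 1, 2)]

def forward(image, kernel, weights, biases):
    """Normalize -> convolve -> pool -> flatten -> fully connected -> (x, y)."""
    total = sum(abs(v) for row in image for v in row) or 1.0
    norm = [[v / total for v in row] for row in image]
    flat = [v for row in avg_pool2(conv2d(norm, kernel)) for v in row]
    return tuple(sum(w * v for w, v in zip(ws, flat)) + b
                 for ws, b in zip(weights, biases))

# 5x5 sub-sampled image -> 3x3 feature map -> 1x1 pooled value -> (x, y):
image = [[0.1] * 5 for _ in range(5)]
kernel = [[1.0] * 3 for _ in range(3)]
x, y = forward(image, kernel, weights=[[1.0], [2.0]], biases=[0.0, 0.0])
print(round(x, 2), round(y, 2))  # 0.36 0.72
```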

[0091] The mapping model 450 can be trained using a series of calibrated robotic measurements that are programmed with a ground truth press for all coordinates in the 2D grid mapped to the panel. For example, a touch, or press, may be applied to a known location on the panel having coordinates (j,k). The amplitude data resulting from the press can be provided to the mapping model 450. The mapping model can then output an estimated location (x,y) of the press. The output of the mapping model 450 can then be compared to the known location (j,k) of the press. Parameters of the mapping model 450 can then be updated based on the accuracy of the estimated location of the press compared to the known location of the press.
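A minimal sketch of evaluating one calibration pass against robot-applied ground-truth presses; the stand-in model and data here are hypothetical, and the actual parameter-update step would depend on the training framework used:

```python
import math

def mean_calibration_error(model, presses):
    """Mean scalar error of a model's estimates against presses applied at
    known (j, k) coordinates, each paired with its measured amplitude image."""
    errors = [math.hypot(x - j, y - k)
              for (j, k), amplitude_image in presses
              for (x, y) in [model(amplitude_image)]]
    return sum(errors) / len(errors)

# A hypothetical stand-in model that always guesses the panel centre:
centre_model = lambda image: (0.5, 0.5)
presses = [((0.5, 0.5), None), ((0.6, 0.5), None)]
print(round(mean_calibration_error(centre_model, presses), 3))  # 0.05
```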

[0092] The mapping model 450 can also be trained for more complex touch patterns. For example, the mapping model 450 may be trained to perform inverse mapping for multiple touches, e.g., two or three touches at different locations of the panel.

[0093] The mapping model 450 can be calibrated for a particular panel design. For example, the mapping model can be calibrated for the panel 202, with a five-by-five transducer array. The mapping model can be calibrated for panels and arrays of various shapes, sizes, orientations, and patterns. In some examples, a mapping model 450 that is calibrated for an initial panel design may be reused, or partially reused, for a different panel design. For example, the mapping model for the different panel design may reuse the normalizer, the convolution layer, and the pooling layer of the initial panel design. The fully-connected layers may be retrained and updated for the different panel design. In this way, model parameters may need to be retrained for only the fully-connected layers. This reduces the number of model parameters to be updated for the new design. Thus, the mapping model can use transfer learning to reduce the amount of training required for new panel designs. Mapping models for new panels can be trained using less training data than was required to train the original panel model.
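The transfer-learning reuse between panel designs can be sketched as parameter reuse; the layer names and dictionary representation here are assumptions for illustration:

```python
def transfer(trained_model, new_head):
    """Reuse the normalizer/convolution/pooling parameters of a model
    calibrated for one panel design; only the fully-connected head is
    replaced and retrained for the new design."""
    front_end = {name: params for name, params in trained_model.items()
                 if not name.startswith("fc")}
    return {**front_end, **new_head}

old = {"normalizer": [], "conv": [0.3, 0.7], "pool": [], "fc": [9.0, 8.0]}
new = transfer(old, {"fc": [0.0, 0.0]})
print(new["conv"] == old["conv"] and new["fc"] == [0.0, 0.0])  # True
```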

[0094] Referring to FIG. 5, the example process 500 includes generating an output signal based on the location of the damping force applied to the panel (516). The output signal can cause the device to perform one or more actions. For example, the actions may include moving an image presented on the panel, updating a user interface presented on the panel, or adjusting a characteristic of audio produced by the panel. For example, the actions may include changing the volume of audio produced by the panel. The actions can also include starting or stopping video displayed on the panel, zooming to increase or decrease the size of images displayed on the panel, starting or stopping music being played by the panel, etc. In this way, a user can interact directly with the panel by touching the panel, e.g., with a finger or a stylus.
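As a hypothetical sketch of generating an action from the output signal, assuming a 30 cm panel with a volume strip along one edge (the zones, coordinates, and action names are invented for illustration):

```python
def handle_touch(location, panel_width=0.30):
    """Map a detected touch location to a device action, e.g. a volume
    strip along the right edge of a 30 cm panel (coordinates in metres)."""
    x, y = location
    if x > 0.9 * panel_width:          # touch in the right-edge strip
        return "volume_up" if y > 0.15 else "volume_down"
    return "toggle_playback"           # touch anywhere else on the panel

print(handle_touch((0.29, 0.20)))  # volume_up
```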

[0095] In general, the transducers, or actuators, described in this disclosure can be used in a variety of applications. For example, in some embodiments, the actuators can be used to drive a panel of a panel audio loudspeaker, such as a distributed mode loudspeaker (DML). Such loudspeakers can be integrated into a device, such as a mobile phone, a tablet, a television, or a flat panel speaker. For example, referring to FIG. 6, a mobile device 700 includes a device chassis 702 and a touch panel 704. The example mobile device 700 includes a flat panel display (e.g., an OLED or LCD display panel) that integrates a panel audio loudspeaker. Mobile device 700 interfaces with a user in a variety of ways, including by displaying images and receiving touch input via touch panel 704. Typically, a mobile device has a depth (in the z-direction) of approximately 10 mm or less, a width (in the x-direction) of 60 mm to 80 mm (e.g., 68 mm to 72 mm), and a height (in the y-direction) of 100 mm to 160 mm (e.g., 138 mm to 144 mm).

[0096] Mobile device 700 also produces audio output. The audio output is generated using a panel audio loudspeaker that creates sound by causing the flat panel to vibrate. The panel is coupled to an actuator, such as a distributed mode actuator, or DMA. The actuator is a movable component arranged to provide a force to a panel, such as touch panel 704, causing the panel to vibrate. The vibrating panel generates human-audible sound waves, e.g., in the range of 20 Hz to 20 kHz.

[0097] In addition to producing sound output, mobile device 700 can also produce haptic output using the actuator. For example, the haptic output can correspond to vibrations in the range of 180 Hz to 300 Hz.

[0098] FIG. 6 also shows a dashed line that corresponds to the cross-sectional direction shown in FIG. 7. Referring to FIG. 7, a cross-section of mobile device 700 illustrates device chassis 702 and touch panel 704. Device chassis 702 has a depth measured along the z-direction and a width measured along the x-direction. Device chassis 702 also has a back panel, which is formed by the portion of device chassis 702 that extends primarily in the x-y plane. Mobile device 700 includes actuator module 800, which is housed behind panel 704 in chassis 702 and attached to the back side of panel 704. For example, a pressure-sensitive adhesive (PSA) can attach actuator module 800 to panel 704. Generally, actuator module 800 is sized to fit within a volume constrained by other components housed in the chassis, including an electronic control module 820 and a battery 815.

[0099] In general, the disclosed actuators are controlled by an electronic control module, e.g., electronic control module 820. In general, electronic control modules are composed of one or more electronic components that receive input from one or more sensors and/or signal receivers of the mobile phone, process the input, and generate and deliver an output based on the input.

[0100] Referring to FIG. 8, an exemplary electronic control module 820 of a mobile device, such as mobile device 700, includes a processor 810, memory 870, a display driver 830, a signal generator 840, an input/output (I/O) module 850, and a network/communications module 860. These components are in electrical communication with one another (e.g., via a signal bus 802) and with actuator module 800.

[0101] Processor 810 may be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions. For example, processor 810 can be a microprocessor, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), or combinations of such devices.

[0102] Memory 870 has various instructions, computer programs or other data stored thereon. The instructions or computer programs may be configured to perform one or more of the operations or functions described with respect to the mobile device. For example, the instructions may be configured to control or coordinate the operation of the device’s display via display driver 830, signal generator 840, one or more components of I/O module 850, one or more communication channels accessible via network/communications module 860, one or more sensors (e.g., biometric sensors, temperature sensors, accelerometers, optical sensors, barometric sensors, moisture sensors and so on), and/or actuator module 800.

[0103] Signal generator 840 is configured to produce AC waveforms of varying amplitudes, frequencies, and/or pulse profiles suitable for actuator module 800 and for producing acoustic and/or haptic responses via the actuator. Although depicted as a separate component, in some embodiments, signal generator 840 can be part of processor 810. In some embodiments, signal generator 840 can include an amplifier, e.g., as an integral or separate component thereof.

[0104] Memory 870 can store electronic data that can be used by the mobile device. For example, memory 870 can store electrical data or content such as, for example, audio and video files, documents and applications, device settings and user preferences, timing and control signals or data for the various modules, data structures or databases, and so on. Memory 870 may also store instructions for recreating the various types of waveforms that may be used by signal generator 840 to generate signals for actuator module 800. Memory 870 may be any type of memory such as, for example, random access memory, read-only memory, Flash memory, removable memory, or other types of storage elements, or combinations of such devices.

[0105] As briefly discussed above, electronic control module 820 may include various input and output components represented in FIG. 8 as I/O module 850. Although the components of I/O module 850 are represented as a single item in FIG. 8, the mobile device may include a number of different input components, including buttons, microphones, switches, and dials for accepting user input. In some embodiments, the components of I/O module 850 may include one or more touch sensors and/or force sensors. For example, the mobile device’s display may include one or more touch sensors and/or one or more force sensors that enable a user to provide input to the mobile device.

[0106] Each of the components of I/O module 850 may include specialized circuitry for generating signals or data. In some cases, the components may produce or provide feedback for application-specific input that corresponds to a prompt or user interface object presented on the display.

[0107] As noted above, network/communications module 860 includes one or more communication channels. These communication channels can include one or more wireless interfaces that provide communications between processor 810 and an external device or other electronic device. In general, the communication channels may be configured to transmit and receive data and/or signals that may be interpreted by instructions executed on processor 810. In some cases, the external device is part of an external communication network that is configured to exchange data with other devices. Generally, the wireless interface may include, without limitation, radio frequency, optical, acoustic, and/or magnetic signals and may be configured to operate over a wireless interface or protocol. Example wireless interfaces include radio frequency cellular interfaces, fiber optic interfaces, acoustic interfaces, Bluetooth interfaces, Near Field Communication interfaces, infrared interfaces, USB interfaces, Wi-Fi interfaces, TCP/IP interfaces, network communications interfaces, or any conventional communication interfaces.

[0108] In some implementations, one or more of the communication channels of network/communications module 860 may include a wireless communication channel between the mobile device and another device, such as another mobile phone, tablet, computer, or the like. In some cases, output, audio output, haptic output or visual display elements may be transmitted directly to the other device for output. For example, an audible alert or visual warning may be transmitted from the mobile device 700 to a mobile phone for output on that device and vice versa. Similarly, the network/communications module 860 may be configured to receive input provided on another device to control the mobile device. For example, an audible alert, visual notification, or haptic alert (or instructions therefor) may be transmitted from the external device to the mobile device for presentation.

[0109] The actuator technology disclosed herein can be used in panel audio systems, e.g., designed to provide acoustic and/or haptic feedback. The panel may be a display system, for example based on OLED, LED, or LCD technology. The panel may be part of a smartphone, television, speaker, tablet computer, or wearable device (e.g., smartwatch or head-mounted device, such as smart glasses).

[0110] Some aspects of a device containing the piezoelectric touch sensitive device module described here can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. For example, in some implementations, the electronic control module 820 can be implemented using digital electronic circuitry, or in computer software, firmware, or hardware, or in combinations of one or more of them.

[0111] The term “electronic control module” encompasses all kinds of apparatus, devices, and machines for processing data and/or control signal generation, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.

[0112] A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

[0113] Some of the processes described above can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).

[0114] A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. Accordingly, other embodiments are within the scope of the claims.