Title:
DEVICE WITH SPEAKER AND IMAGE SENSOR
Document Type and Number:
WIPO Patent Application WO/2023/249910
Kind Code:
A1
Abstract:
In one implementation, a method of playing audio data is performed at a device including a frame configured for insertion into an outer ear, a speaker coupled to the frame, an image sensor coupled to the frame, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, one or more images of a physical environment. The method includes generating audio data based on the one or more images of the physical environment. The method includes playing, via the speaker, the audio data.

Inventors:
MILLER BRETT D (US)
BOOTHE DANIEL K (US)
JOHNSON MARTIN E (US)
Application Number:
PCT/US2023/025675
Publication Date:
December 28, 2023
Filing Date:
June 19, 2023
Assignee:
APPLE INC (US)
International Classes:
H04R1/10; G06F1/16; G06F3/01
Domestic Patent References:
WO2022036643A1 (2022-02-24)
Foreign References:
US20120235883A1 (2012-09-20)
US11356796B2 (2022-06-07)
Attorney, Agent or Firm:
HIGLEY, William J. (US)
Claims:
What is claimed is:

1. A method comprising: at a device including a frame configured for insertion into an outer ear, a speaker coupled to the frame, an image sensor coupled to the frame, one or more processors, and non-transitory memory: capturing, using the image sensor, one or more images of a physical environment; generating audio data based on the one or more images of the physical environment; and playing, via the speaker, the audio data.

2. The method of claim 1, wherein generating the audio data based on the one or more images of the physical environment includes transmitting, to a peripheral device, the one or more images of the physical environment and receiving, from the peripheral device, the audio data.

3. The method of claim 1 or 2, wherein generating the audio data includes creating an audio signal.

4. The method of claim 1 or 2, wherein generating the audio data includes altering an audio stream.

5. The method of any of claims 1-4, wherein the device further includes a microphone configured to generate ambient sound data and wherein generating the audio data is further based on the ambient sound data.

6. The method of claim 5, wherein the ambient sound data includes a vocal input.

7. The method of any of claims 1-4, wherein the device further includes a microphone configured to generate ambient sound data and wherein generating the audio data is independent of the ambient sound data.

8. The method of any of claims 1-7, wherein the device further includes an inertial measurement unit (IMU) configured to generate pose data and wherein generating the audio data is further based on the pose data.

9. The method of any of claims 1-8, wherein the audio data is played spatially from a location based on the one or more images of the environment.

10. The method of any of claims 1-9, wherein the image sensor has a device field-of-view different than a user field-of-view and the audio data is based on portions of the one or more images of the physical environment outside the user field-of-view.

11. A device comprising: a frame configured for insertion into an outer ear; one or more processors coupled to the frame; a speaker coupled to the frame and configured to output sound based on audio data received from the one or more processors; and an image sensor coupled to the frame and configured to provide one or more images of the physical environment to the one or more processors, wherein the one or more processors are configured to generate the audio data based on the one or more images of the physical environment.

12. The device of claim 11, wherein the one or more processors are configured to generate the audio data based on the one or more images of the environment by transmitting, to a peripheral device, the one or more images of the physical environment and receiving, from the peripheral device, the audio data.

13. The device of claim 11 or 12, wherein the one or more processors are configured to generate the audio data by creating an audio signal.

14. The device of claim 11 or 12, wherein the one or more processors are configured to generate the audio data by altering an audio stream.

15. The device of any of claims 11-14, further comprising a microphone configured to generate ambient sound data, wherein the one or more processors are configured to generate the audio data further based on the ambient sound data.

16. The device of claim 15, wherein the ambient sound data includes a vocal input.

17. The device of any of claims 11-14, further comprising a microphone configured to generate ambient sound data, wherein the one or more processors are configured to generate the audio data independent of the ambient sound data.

18. The device of any of claims 11-17, further comprising an inertial measurement unit (IMU) configured to generate pose data, wherein the one or more processors are configured to generate the audio data further based on the pose data.

19. The device of any of claims 11-18, wherein the audio data is played spatially from a location based on the one or more images of the environment.

20. The device of any of claims 11-19, wherein the image sensor has a device field-of-view different than a user field-of-view and the audio data is based on portions of the one or more images of the physical environment outside the user field-of-view.

21. The device of any of claims 11-20, wherein the image sensor includes a fisheye lens.

22. The device of any of claims 11-21, wherein the frame is not physically coupled to a display.

23. A device comprising: a frame configured for insertion into an outer ear; a speaker coupled to the frame; an image sensor coupled to the frame; one or more processors; non-transitory memory; and one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to perform any of the methods of claims 1-10.

24. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device including a frame configured for insertion into an outer ear, a speaker coupled to the frame, and an image sensor coupled to the frame, cause the device to perform any of the methods of claims 1-10.

25. A device comprising: a frame configured for insertion into an outer ear; a speaker coupled to the frame; an image sensor coupled to the frame; one or more processors; a non-transitory memory; and means for causing the device to perform any of the methods of claims 1-10.

Description:
DEVICE WITH SPEAKER AND IMAGE SENSOR

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent No. 63/354,018, filed on June 21, 2022, which is hereby incorporated by reference in its entirety.

TECHNICAL FIELD

[0002] The present disclosure generally relates to devices including one or more speakers and one or more image sensors.

BACKGROUND

[0003] Various ear-mounted devices, such as earphones or earbuds, include a speaker which outputs sound to a user. Various head-mounted devices, such as headphones or extended reality (XR) headsets, may similarly include a speaker.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.

[0005] Figure 1 is a perspective view of a head-mounted device in accordance with some implementations.

[0006] Figure 2 is a block diagram of an example operating environment in accordance with some implementations.

[0007] Figure 3 illustrates various field-of-views in accordance with some implementations.

[0008] Figure 4 is a flowchart representation of a method of playing audio data in accordance with some implementations.

[0009] In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method, or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.

SUMMARY

[0010] Various implementations disclosed herein include devices, systems, and methods for playing audio data. In various implementations, the method is performed by a device including a frame configured for insertion into an outer ear, a speaker coupled to the frame, an image sensor coupled to the frame, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, one or more images of a physical environment. The method includes generating audio data based on the one or more images of the physical environment. The method includes playing, via the speaker, the audio data.

[0011] In accordance with some implementations, a device includes a frame configured for insertion into an outer ear. The device includes one or more processors coupled to the frame. The device includes a speaker coupled to the frame and configured to output sound based on audio data received from the one or more processors. The device includes an image sensor coupled to the frame and configured to provide one or more images of the physical environment to the one or more processors. The one or more processors are configured to generate the audio data based on the one or more images of the physical environment.

[0012] In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors. The one or more programs include instructions for performing or causing performance of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions, which, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes: one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.

DESCRIPTION

[0013] Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices, and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.

[0014] Various ear-mounted devices, such as earphones or earbuds, include a speaker which outputs sound to a user. Various head-mounted devices, such as headphones or extended reality (XR) headsets, may similarly include a speaker. By including an image sensor on such devices to capture images of a physical environment and outputting audio based on the captured images, various user experiences can be enabled.

[0015] Figure 1 illustrates a perspective view of a head-mounted device 150 in accordance with some implementations. The head-mounted device 150 includes a frame 151 including two earpieces 152 each configured to abut a respective outer ear of a user. The frame 151 further includes a front component 154 configured to reside in front of a field-of-view of the user. Each earpiece 152 includes a speaker 160 (e.g., inward-facing, outward-facing, downward-facing, or the like) and an outward-facing imaging system 170. Further, the front component 154 includes a display 181 to display images to the user, an eye tracker 182 (which may include one or more rearward-facing image sensors configured to capture images of at least one eye of the user) to determine a gaze direction or point-of-regard of the user, and a scene tracker 190 (which may include one or more forward-facing image sensors configured to capture images of the physical environment) which may supplement the imaging systems 170 of the earpieces 152.

[0016] In various implementations, the head-mounted device 150 lacks the front component 154. Thus, in various implementations, the head-mounted device is embodied as a headphone device including a frame 151 with two earpieces 152 each configured to surround a respective outer ear of a user and a headband coupling the earpieces 152 and configured to rest on the top of the head of the user. In various implementations, each earpiece 152 includes an inward-facing speaker 160 and an outward-facing imaging system 170.

[0017] In various implementations, the headphone device lacks a headband. Thus, in various implementations, the head-mounted device 150 (or the earpieces 152 thereof) is embodied as one or more earbuds or earphones. For example, an earbud includes a frame configured for insertion into an outer ear. In particular, in various implementations, the frame is configured for insertion into the outer ear of a human, a person, and/or a user of the earbud. The earbud includes, coupled to the frame, a speaker 160 configured to output sound, and an imaging system 170 configured to capture one or more images of a physical environment in which the earbud is present. In various implementations, the imaging system 170 includes one or more cameras (or image sensors). The earbud further includes, coupled to the frame, one or more processors. The speaker 160 is configured to output sound based on audio data received from the one or more processors and the imaging system 170 is configured to provide image data to the one or more processors. In various implementations, the audio data provided to the speaker 160 is based on the image data obtained from the imaging system 170.

[0018] As noted above, in various implementations an earbud includes a frame configured for insertion into an outer ear. In particular, in various implementations, the frame is sized and/or shaped for insertion into the outer ear. The frame includes a surface that rests in the intertragic notch, preventing the earbud from falling downward vertically. Further, the frame includes a surface that abuts the tragus and the anti-tragus, holding the earbud in place horizontally. As inserted, the speaker 160 of the earbud is pointed toward the ear canal and the imaging system 170 of the earbud is pointed outward and exposed to the physical environment.

[0019] Whereas the head-mounted device 150 is an example device that may perform one or more of the methods described herein, it should be appreciated that other wearable devices having one or more speakers and one or more cameras can also be used to perform the methods. The wearable audio devices may be embodied in other wired or wireless form factors, such as head-mounted devices, in-ear devices, circumaural devices, supra-aural devices, open-back devices, closed-back devices, bone conduction devices, or other audio devices.

[0020] Figure 2 is a block diagram of an operating environment 20 in accordance with some implementations. The operating environment 20 includes an earpiece 200. In various implementations, the earpiece 200 corresponds to the earpiece 152 of Figure 1. The earpiece 200 includes a frame 201. In various implementations, the frame 201 is configured for insertion into an outer ear. The earpiece 200 includes, coupled to the frame 201 and, in various implementations, within the frame 201, one or more processors 210. The earpiece 200 includes, coupled to the frame 201 and, in various implementations, within the frame 201, memory 220 (e.g., non-transitory memory) coupled to the one or more processors 210.

[0021] The earpiece 200 includes a speaker 230 coupled to the frame 201 and configured to output sound based on audio data received from the one or more processors 210. The earpiece 200 includes an imaging system 240 coupled to the frame 201 and configured to capture images of a physical environment in which the earpiece 200 is present and provide image data representative of the images to the one or more processors 210. In various implementations, the imaging system 240 includes one or more cameras 241A, 241B. In various implementations, different cameras 241A, 241B have different fields-of-view. For example, in various implementations, the imaging system 240 includes a forward-facing camera and a rearward-facing camera. In various implementations, at least one of the cameras 241A includes a fisheye lens 242, e.g., to increase a size of the field-of-view of the camera 241A. In various implementations, the imaging system 240 includes a depth sensor 243. Thus, in various implementations, the image data includes, for each of a plurality of pixels representing a location in the physical environment, a color (or grayscale) value of the location representative of the amount and/or wavelength of light detected at the location and a depth value representative of a distance from the earpiece 200 to the location.
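
By way of a non-limiting illustration, the following Python sketch shows one way the per-pixel color and depth values described in paragraph [0021] might be represented in software. The class and field names are hypothetical and do not appear in the disclosure; a real representation would be dictated by the imaging system's drivers and formats.

```python
# Illustrative sketch only: one possible container for the color-plus-depth
# image data described above. All names here are hypothetical.
from dataclasses import dataclass

import numpy as np


@dataclass
class RGBDFrame:
    """A single capture from an imaging system such as imaging system 240."""
    color: np.ndarray   # shape (H, W, 3), uint8 color value per location
    depth: np.ndarray   # shape (H, W), float32 distance in meters to each location
    timestamp: float    # capture time in seconds

    def depth_at(self, row: int, col: int) -> float:
        """Distance from the earpiece to the location imaged at (row, col)."""
        return float(self.depth[row, col])


# Example: a 480x640 frame with uniform color and a constant 2 m depth.
frame = RGBDFrame(
    color=np.zeros((480, 640, 3), dtype=np.uint8),
    depth=np.full((480, 640), 2.0, dtype=np.float32),
    timestamp=0.0,
)
print(frame.depth_at(240, 320))  # -> 2.0
```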

[0022] In various implementations, the earpiece 200 includes a microphone 250 coupled to the frame 201 and configured to generate ambient sound data indicative of sound in the physical environment. In various implementations, the earpiece 200 includes an inertial measurement unit (IMU) 260 coupled to the frame 201 and configured to determine movement and/or the orientation of the earpiece 200. In various implementations, the IMU 260 includes one or more accelerometers and/or one or more gyroscopes. In various implementations, the earpiece 200 includes a communications interface 270 coupled to the frame 201 and configured to transmit and receive data from other devices. In various implementations, the communications interface 270 is a wireless communications interface.

[0023] The earpiece 200 includes, within the frame 201, one or more communication buses 204 for interconnecting the various components described above and/or additional components of the earpiece 200 which may be included.

[0024] In various implementations, the operating environment 20 includes a second earpiece 280 which may include any or all of the components of the earpiece 200. In various implementations, the frame 201 of the earpiece 200 is configured for insertion in one outer ear of a user and the frame of the second earpiece 280 is configured for insertion in another outer ear of the user, e.g., by being a mirror version of the frame 201.

[0025] In various implementations, the operating environment 20 includes a controller device 290. In various implementations, the controller device 290 is a smartphone, tablet, laptop, desktop, set-top box, smart television, digital media player, or smart watch. The controller device 290 includes one or more processors 291 coupled to memory 292, a display 293, and a communications interface 294 via one or more communication buses 214. In various implementations, the controller device 290 includes additional components such as any or all of the components described above with respect to the earpiece 200.

[0026] In various implementations, the display 293 is configured to display images based on display data provided by the one or more processors 291. In contrast, in various implementations, the earpiece 200 (and, similarly, the second earpiece 280) does not include a display or, at least, does not include a display within a field-of-view of the user when inserted into the outer ear of the user.

[0027] In various implementations, the one or more processors 210 of the earpiece 200 generate the audio data provided to the speaker 230 based on the image data received from the imaging system 240. In various implementations, the one or more processors 210 of the earpiece 200 transmit the image data via the communications interface 270 to the controller device 290, the one or more processors of the controller device 290 generate the audio data based on the image data, and the earpiece 200 receives the audio data via the communications interface 270. In either set of implementations, the audio data is based on the image data.
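
As a rough, non-authoritative sketch of the two alternatives in paragraph [0027], the following Python fragment dispatches generation either to a local routine standing in for the earpiece's processors or, when a link to a controller device is available, to the peripheral over a communications link. The function and parameter names are illustrative placeholders, not part of the disclosure.

```python
# Illustrative sketch only; the callables stand in for on-earpiece processing
# and for the round trip over a communications interface to a controller device.
from typing import Callable, List, Optional


def generate_audio(
    images: List[bytes],
    local_generator: Callable[[List[bytes]], bytes],
    controller_link: Optional[Callable[[List[bytes]], bytes]] = None,
) -> bytes:
    """Return audio data derived from the captured images.

    If a controller link is available, transmit the one or more images and
    receive the audio data from the peripheral; otherwise generate the audio
    data locally.
    """
    if controller_link is not None:
        return controller_link(images)
    return local_generator(images)


# Trivial stand-in generators for illustration.
frames = [b"frame-0", b"frame-1"]
print(generate_audio(frames, local_generator=lambda imgs: b"local-audio"))
print(generate_audio(frames, lambda imgs: b"local-audio", lambda imgs: b"remote-audio"))
```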

[0028] Figure 3 illustrates various field-of-views in accordance with some implementations. A user field-of-view 301 of a user 30 typically extends approximately 200 degrees with varying degrees of visual perception within that range. For example, excluding far peripheral vision, the user field-of-view 301 is only approximately 120 degrees, and the user field-of-view 301 including only foveal vision (or central vision) is only approximately 5 degrees.

[0029] In contrast, a system (e.g., the head-mounted device 150 of Figure 1) may have a device field-of-view that includes views outside the user field-of-view 301 of the user 30. For example, a system may include a forward-and-outward-facing camera including a fisheye lens with a field-of-view of 180 degrees proximate to each ear of the user 30 and may have a device forward field-of-view 302 of approximately 300 degrees. Further, a system may include a rearward-and-outward-facing camera including a fisheye lens with a field-of-view of 180 degrees proximate to each ear of the user 30 and may also have a device rearward field-of-view 303 of approximately 300 degrees. In various implementations, a system including multiple cameras proximate to each ear of the user can have a device field-of-view of a full 360 degrees (e.g., including the device forward field-of-view 302 and the device rearward field-of-view 303). It is to be appreciated that, in various implementations, the cameras (or combination of cameras) may have smaller or larger fields-of-view than the examples above.

[0030] The systems described above can perform a wide variety of functions. For example, in various implementations, while playing audio (e.g., music or an audiobook) via the speaker, in response to detecting a particular hand gesture (even a hand gesture performed outside a user field-of-view) in images captured by the imaging system, the system may alter playback of the audio (e.g., by pausing or changing the volume of the audio). For example, in various implementations, in response to detecting a hand gesture performed by a user proximate to the user’s ear of closing an open hand into a clenched fist, the system pauses the playback of audio via the speaker.
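
A minimal sketch of the gesture-driven playback control in paragraph [0030] follows. The gesture labels and the Player class are invented for illustration, and the vision model that would actually recognize the hand gesture in the captured images is outside the scope of the sketch.

```python
# Illustrative sketch only: map assumed gesture labels to playback changes.
class Player:
    def __init__(self) -> None:
        self.paused = False
        self.volume = 0.8

    def handle_gesture(self, gesture: str) -> None:
        """Alter playback of the audio stream based on a detected gesture."""
        if gesture == "close_fist":
            # e.g., closing an open hand into a clenched fist pauses playback
            self.paused = True
        elif gesture == "open_hand":
            self.paused = False
        elif gesture == "swipe_up":
            self.volume = min(1.0, self.volume + 0.1)
        elif gesture == "swipe_down":
            self.volume = max(0.0, self.volume - 0.1)


player = Player()
player.handle_gesture("close_fist")
print(player.paused)  # -> True
```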

[0031] As another example, in various implementations, while playing audio via the speaker, in response to detecting a person attempting to engage the user in conversation or otherwise talk to the user (even if the person is outside the user field-of-view) in images captured by the imaging system, the system may alter playback of the audio. For example, in various implementations, in response to detecting a person behind the user attempting to talk to the user, the system reduces the volume of the audio being played via the speaker and ceases performing an active noise cancellation algorithm.

[0032] As another example, in various implementations, in response to detecting an object or event of interest in the physical environment in images captured by the imaging system, the system generates an audio notification. For example, in various implementations, in response to detecting a person in the user’s periphery or outside the user field-of-view attempting to get the user’s attention (e.g., by waving the person’s arms), the device plays, via the speaker, an alert notification (e.g., a sound approximating a person saying “Hey!”). In various implementations, the system plays, via two or more speakers, the alert notification spatially such that the user perceives the alert notification as coming from the direction of the detected object.

[0033] As another example, in various implementations, in response to detecting an object or event of interest in the physical environment in images captured by the imaging system, the system stores, in the memory, an indication that the particular object was detected (which may be determined using images from the imaging system) in association with a location at which the object was detected (which may also be determined using images from the imaging system) and a time at which the object was detected. In response to a user query (e.g., a vocal query detected via the microphone), the system provides an audio response. For example, in response to detecting a water bottle in an office of the user, the system stores an indication that the water bottle was detected in the office and, in response to a user query at a later time of “Where is my water bottle?”, the device may generate audio approximating a person saying “In your office.”
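
The following Python sketch illustrates, under assumed names, the kind of detection log paragraph [0033] describes: each detected object is stored with a location and a time, and a later vocal query is answered from the stored indications. The speech and vision front ends are stubbed out, and none of the names come from the disclosure.

```python
# Illustrative sketch only: a minimal detection log keyed by object type.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class Detection:
    object_type: str   # e.g., "water bottle", determined from the images
    location: str      # e.g., "office", also determined from the images
    timestamp: float   # time at which the object was detected


class DetectionMemory:
    def __init__(self) -> None:
        self._latest: Dict[str, Detection] = {}

    def record(self, detection: Detection) -> None:
        # Keep the most recent sighting of each object type.
        self._latest[detection.object_type] = detection

    def answer_where(self, object_type: str) -> Optional[str]:
        detection = self._latest.get(object_type)
        if detection is None:
            return None
        return f"In your {detection.location}."


memory = DetectionMemory()
memory.record(Detection("water bottle", "office", timestamp=1234.5))
print(memory.answer_where("water bottle"))  # -> "In your office."
```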

[0034] As another example, in various implementations, in response to detecting an object in the physical environment approaching the user in images captured by the imaging system, the system generates an audio notification. For example, in various implementations, in response to detecting a car approaching the user at a speed exceeding a threshold, the system plays, via the speaker, an alert notification (e.g., a sound approximating the beep of a car horn). In various implementations, the system plays, via two or more speakers, the alert notification spatially such that the user perceives the alert notification as coming from the direction of the detected object.
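
As one hedged interpretation of the approach check in paragraph [0034], the closing speed of a tracked object can be estimated from its distance in two successive depth readings and compared against a threshold. The threshold value and function names below are assumptions for illustration, not figures from the disclosure.

```python
# Illustrative sketch only: estimate closing speed from two depth readings.
APPROACH_SPEED_THRESHOLD_MPS = 3.0  # hypothetical threshold, meters per second


def approach_speed(dist_prev_m: float, dist_now_m: float, dt_s: float) -> float:
    """Closing speed toward the user; positive means the object is approaching."""
    return (dist_prev_m - dist_now_m) / dt_s


def should_alert(dist_prev_m: float, dist_now_m: float, dt_s: float) -> bool:
    return approach_speed(dist_prev_m, dist_now_m, dt_s) > APPROACH_SPEED_THRESHOLD_MPS


# A car 12 m away closes to 10 m in 0.5 s: 4 m/s, above the assumed threshold.
print(should_alert(12.0, 10.0, 0.5))  # -> True
```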

[0035] Figure 4 is a flowchart representation of a method 400 of playing audio data in accordance with some implementations. In various implementations, the method 400 is performed by a device including one or more image sensors, one or more speakers, one or more processors, and non-transitory memory (e.g., the head-mounted device 150 of Figure 1 or the earpiece 200 of Figure 2). In various implementations, the method 400 is performed by a device including a frame configured for insertion into an outer ear, a speaker coupled to the frame, and an image sensor coupled to the frame. In various implementations, the method 400 is performed by a device without a display or by a device including a frame that is not physically coupled to a display. In various implementations, the method 400 is performed by a device with a display. In various implementations, the method 400 is performed using an audio device (e.g., the head-mounted device 150 of Figure 1 or the earpiece 200 of Figure 2) in conjunction with a peripheral device (e.g., the controller device 290 of Figure 2). In various implementations, the method 400 is performed by processing logic, including hardware, firmware, software, or a combination thereof. In various implementations, the method 400 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).

[0036] The method 400 begins, in block 410, with the device capturing, using the image sensor, one or more images of a physical environment. In various implementations, the image sensor has a device field-of-view different than a user field-of-view, at least at a respective one or more times at which the one or more images are captured and the frame is inserted into the outer ear. In various implementations, the image sensor includes a fisheye lens. Thus, in various implementations, the device field-of-view is between approximately 120 and 180 degrees, in particular, between approximately 170 and 180 degrees.

[0037] The method 400 continues, in block 420, with the device generating audio data based on the one or more images of the physical environment. In various implementations, generating the audio data based on the one or more images of the physical environment includes transmitting, to a peripheral device, the one or more images of the environment and receiving, from the peripheral device, the audio data.

[0038] The method 400 continues, in block 430, with the device playing, via the speaker, the audio data.

[0039] Generating the audio data based on the one or more images of the physical environment (in block 420) encompasses a wide range of processing to enable various user experiences. For example, in various implementations, generating the audio data based on the one or more images of the physical environment includes detecting an object or event of interest in the physical environment and generating the audio data based on the detection. In various implementations, generating the audio data based on the detection includes creating an audio signal indicative of the detection. Thus, in various implementations, playing the audio data includes playing a new sound that would not have otherwise been played had the object or event of interest not been detected. In various implementations, playing the audio data includes playing sound when, had the object or event of interest not been detected, no sound would be played. For example, in response to detecting, e.g., using computer-vision techniques such as a model trained to detect and classify various objects, a snake as an object having an object type of “SNAKE”, the device generates an audio notification emulating the sound of a person saying an object type of the object or emulating the sound of the object, e.g., a rattlesnake rattle.
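
The sketch below illustrates creating an audio signal indicative of a detection in the loosest sense: a short sine tone whose pitch depends on the detected object type stands in for the synthesized speech or recorded sound a real implementation might use. The sample rate, pitch table, and function name are arbitrary assumptions.

```python
# Illustrative sketch only: a stand-in "audio signal" keyed to an object type.
import numpy as np

SAMPLE_RATE_HZ = 48_000
ALERT_PITCH_HZ = {"SNAKE": 880.0, "PERSON": 440.0}  # hypothetical mapping


def create_alert_signal(object_type: str, duration_s: float = 0.5) -> np.ndarray:
    """Return a mono audio buffer announcing the detected object type."""
    freq = ALERT_PITCH_HZ.get(object_type, 660.0)
    t = np.arange(int(SAMPLE_RATE_HZ * duration_s)) / SAMPLE_RATE_HZ
    return 0.2 * np.sin(2 * np.pi * freq * t)


alert = create_alert_signal("SNAKE")
print(alert.shape)  # -> (24000,)
```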

[0040] In various implementations, generating the audio data based on the detection includes altering an audio stream. For example, in response to detecting a particular hand gesture, the device pauses playback of the audio stream or changes the volume of the audio stream. As another example, in response to detecting a person attempting to communicate with the user, the device ceases performing active noise cancellation upon the audio stream.

[0041] In various implementations, the device further includes a microphone configured to generate ambient sound data and generating the audio data is further based on the ambient sound data. In various implementations, the ambient sound data includes a vocal input. For example, in response to detecting, in the one or more images of the physical environment, a user performing a hand gesture indicating an object in the physical environment having a particular object type (e.g., pointing at a lamp) and detecting, in the ambient sound data, the user issuing a vocal command to translate an object type of the object (e.g., “How do you say this in Spanish?”), the device generates audio data emulating the sound of a person saying a translation of the object type of the object (e.g., “la lámpara”). As another example, in response to detecting, in the one or more images of the physical environment, a user brushing the user’s teeth and detecting, in the ambient sound data, the user issuing a vocal query at a later time regarding the detection (e.g., “Did I brush my teeth this morning?”), the device generates audio data emulating the sound of a person indicating the detection (e.g., “Yes, you brushed your teeth at 6:33 today.”).
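
A toy sketch of the multimodal example in paragraph [0041] follows: the image-derived object type (from the pointing gesture) is combined with the parsed vocal command to produce the response text. The dictionary and function are hypothetical stand-ins for real vision, speech-recognition, and translation components.

```python
# Illustrative sketch only: combine a pointed-at object type with a vocal
# translation request. The tiny dictionary is an assumption for illustration.
TRANSLATIONS_ES = {"lamp": "la lámpara", "chair": "la silla"}  # hypothetical


def respond_to_translation_query(pointed_object_type: str) -> str:
    """Response text for a query like 'How do you say this in Spanish?'."""
    translation = TRANSLATIONS_ES.get(pointed_object_type)
    if translation is None:
        return "I don't know that one yet."
    return translation


print(respond_to_translation_query("lamp"))  # -> "la lámpara"
```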

[0042] As another example, in response to detecting a person attempting to communicate with the user based at least in part on the ambient sound data, the device pauses playback of an audio stream or reduces the volume of the audio stream.

[0043] In various implementations, generating the audio data is independent of the ambient sound data. For example, in response to detecting a moving object in the one or more images of the physical environment independent of the ambient sound data, the device generates an audio notification of the detection. In various implementations, the audio notification emulates the sound of a person indicating the detection of motion, e.g., “MOTION”. In various implementations, the audio notification emulates the sound of the object moving in the physical environment, e.g., the rustling of leaves or breaking of branches, which may be based on an object type of the moving object.

[0044] In various implementations, the device further includes an inertial measurement unit (IMU) configured to generate pose data and generating the audio data is further based on the pose data. For example, in response to detecting that a user has fallen based on the one or more images of the environment and the pose data, the device generates an audio query (“Are you okay?”). In various implementations, the audio data is played spatially from a location based on the one or more images of the environment, e.g., stereo panning or binaural rendering. For example, in response to detecting an object in the one or more images of the environment, the device plays an audio notification spatially so as to be perceived as being produced from the location of the detected object. In various implementations, the pose data is used to spatialize the audio data.
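
For the spatial playback mentioned in paragraph [0044], one deliberately simplified option is constant-power stereo panning: the azimuth of the detected object relative to the user's head, which is where the pose data would come in, is mapped to left and right gains. Binaural rendering with head-related transfer functions would replace this in a fuller implementation; the code below is only a sketch under that assumption.

```python
# Illustrative sketch only: constant-power stereo panning from an azimuth.
import math
from typing import Tuple


def constant_power_pan(azimuth_deg: float) -> Tuple[float, float]:
    """Return (left_gain, right_gain) for an azimuth in [-90, 90] degrees.

    -90 degrees is fully left, 0 is center, +90 is fully right.
    """
    azimuth_deg = max(-90.0, min(90.0, azimuth_deg))
    # Map azimuth to a pan angle in [0, pi/2]; cos/sin keep total power constant.
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)


left, right = constant_power_pan(45.0)   # object ahead and to the right
print(round(left, 3), round(right, 3))   # -> 0.383 0.924
```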

[0045] In various implementations, in order to play the audio spatially, the method 400 is performed in conjunction with a second device comprising a second frame configured for insertion into a second outer ear and a second speaker coupled to the second frame (e.g., the earpiece 280 of Figure 2).

[0046] As noted above, in various implementations, the image sensor has a device field-of-view different than a user field-of-view. In various implementations, the audio data is based on portions of the one or more images of the physical environment outside the user field-of-view. For example, in response to detecting a moving object (e.g., a vehicle) that is moving towards the device, otherwise referred to as an incoming object, in portions of the images of the physical environment outside the user field-of-view, the device generates an audio notification of the detection. In various implementations, the audio notification emulates the sound of a person indicating the detection of an incoming object, e.g., “INCOMING” or “LOOK OUT”. In various implementations, the audio notification emulates the sound of the object moving in the physical environment, e.g., a car horn or a bicycle bell, which may be based on an object type of the incoming object.

[0047] While various aspects of implementations within the scope of the appended claims are described above, it should be apparent that the various features of implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.

[0048] It will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the “first node” are renamed consistently and all occurrences of the “second node” are renamed consistently. The first node and the second node are both nodes, but they are not the same node.

[0049] The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0050] As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.