Title:
DEVICE AND SYSTEM FOR PROCESSING THE OUTPUT OF AN INSTRUMENT
Document Type and Number:
WIPO Patent Application WO/2024/009113
Kind Code:
A1
Abstract:
With reference to Figure 1 we provide a system for processing the output of a musical instrument, the system including: a sensor device providing a vibration sensor and an attachment mechanism for attaching the sensor device to the musical instrument, the vibration sensor outputting vibration data; a microphone configured to sense a sound produced when the instrument is played, the microphone outputting audio data; and a computing module configured to determine data indicative of the music being played based on the vibration data and the audio data.

Inventors:
POPRYAGA MYKHAILO (GB)
TERLETSKYY PAVLO (GB)
Application Number:
PCT/GB2023/051806
Publication Date:
January 11, 2024
Filing Date:
July 07, 2023
Assignee:
GEISSEN LTD (GB)
International Classes:
G10H1/00
Foreign References:
US20180068646A12018-03-08
US20190272810A12019-09-05
EP3326169B12019-06-26
Attorney, Agent or Firm:
FORRESTERS IP LLP (GB)
Claims:
CLAIMS

1. A system for processing the output of a musical instrument, the system including: a sensor device providing a vibration sensor and an attachment mechanism for attaching the sensor device to the musical instrument, the vibration sensor outputting vibration data; a microphone configured to sense a sound produced when the instrument is played, the microphone outputting audio data; and a computing module configured to determine data indicative of the music being played based on the vibration data and the audio data.

2. The system of claim 1, wherein the data indicative of the music being played includes at least one of: a pitch of a note and a length of a note, a tempo of the music, a consistency of the tempo of the music, a consistency of the pitch of the note, a volume, a stylistic property of the music.

3. The system of any preceding claim, wherein the system includes multiple microphones.

4. The system of any preceding claim, wherein the sensor device includes the or a microphone.

5. The system of any preceding claim, wherein the computing module is configured to use digital signal processing to distinguish data relating to music from noise in the audio data.

6. The system of claim 5, wherein the computing module is configured to generate a feature vector of the data relating to music, including one or more of fundamental frequency, local autocorrelation peaks, number of zero crossings, variance and mean value.

7. The system of claim 6, wherein the computing module is configured to perform autocorrelation to construct the feature vector.

8. The system of claim 6 or claim 7, wherein the computing module is configured to determine data indicative of the music being played by processing the feature vector using a neural network.

9. The system of any one of claims 5 to 8, wherein the computing module is configured to process the audio data to distinguish a stream of data relating to music and a stream relating to percussive sounds.

10. The system of claim 9 in which the computing module is configured to implement cross correlation and a finite impulse response filter to split the audio data into the stream of data relating to music and the stream relating to percussive sounds.

11. The system of claim 9 or claim 10, in which autocorrelation is performed with a set of matched filters to determine parameters of tap events associated with the stream of percussive sounds, the parameters including one or more of: time, energy, duration, tone.

12. The system of claim 11, wherein the computing module is configured to determine data indicative of the music being played by processing the tap event parameters using a neural network.

13. The system of any preceding claim, wherein the computing module is provided by the sensor device.

14. The system of any preceding claim, further including a user device configured to receive the data indicative of the music being played from the sensor device, and to determine an output based on the received data.

15. The system of claim 14, wherein the output of the user device is determined based on a comparison of the received data with a predetermined data set associated with the music being played by the instrument, the output indicating a discrepancy between the output of the instrument and the predetermined data set.

16. The system of claim 14, wherein the output of the user device is determined based on the implementation of the received data as input to a computer game operating on the user device.

17. The system of any preceding claim, for processing the output of multiple musical instruments, the system including: for each instrument, an associated sensor device providing a respective vibration sensor, and an attachment mechanism for attaching the sensor device to the respective musical instrument; one or more microphones each configured to sense the sound produced when one or more of the instruments are played; and one or more computing modules configured to determine data indicative of the music being played by each of the instruments based on the sensor data and audio data.

18. A sensor device for processing the output of a musical instrument, the sensor device including: a body providing an attachment mechanism for attachment to the musical instrument, a vibration sensor for detecting vibration of a portion of the instrument and outputting vibration data; the sensor device being operable to provide data to a computing module configured to determine data indicative of the music being played based on the vibration data and audio data representing a sound sensed when the instrument is played.

19. A sensor device according to claim 18, wherein the sensor device provides a microphone configured to sense the sound produced when the instrument is played.

20. A sensor device according to claim 18 or claim 19, wherein the sensor device provides the computing module.

Description:
DEVICE AND SYSTEM FOR PROCESSING THE OUTPUT OF AN INSTRUMENT

FIELD

The present invention relates to a system, device and related methods for processing the sound of an instrument. Particularly but not exclusively, the invention relates to a device that is attachable to an instrument to detect and process sounds produced by the instrument, for use in interpreting the sound into data which may include musical notation and/or a digital representation of what is being played.

The output from the device may be used in association with tutoring applications, to determine whether the music played by the user correctly corresponds to a piece of music being performed. In addition, or alternatively, the output may be used in combination with a video game, as input to control an aspect of the game.

BACKGROUND

Musical learning consists of different specialities and includes music theory, sight-reading, ear training, vocal music, instrumental music, song-writing, stage performance, psychology and so on. This complex discipline requires more effective methods and systems for teaching the playing of musical instruments. A challenge in teaching young children to play instruments is keeping the child engaged while practising. Achieving a high standard of playing requires many hours of practice, but the attention span of many children is short, and so the child may become disengaged.

In many areas of education, it is well-established that interactive educational games provide a way to keep children engaged while learning.

Games have been proposed in which a game controller is adapted to have the appearance of an instrument, with input keys replacing the functionality required to play the instrument properly (such as a guitar, with keys spaced apart along the fretboard to represent forming notes using the frets). However, the use of a specialised controller for playing a game does not equate to the student playing the actual instrument, since the techniques involved in playing the instrument will not necessarily translate to or from the game controller.

The need to address shortcomings in existing sound processing techniques has led to the emergence of new architectures (such as ‘Edge’). These architectures rely on the transfer of computing power as close as possible to the data source, rather than relying on cloud processing.

It is the aim of the present invention to address one or more of the deficiencies associated with current technologies.

BRIEF DESCRIPTION OF THE INVENTION

According to an aspect of the invention, we provide a system for processing the output of a musical instrument, the system including: a sensor device providing a vibration sensor and an attachment mechanism for attaching the sensor device to the musical instrument, the vibration sensor outputting vibration data; a microphone configured to sense a sound produced when the instrument is played, the microphone outputting audio data; and a computing module configured to determine data indicative of the music being played based on the vibration data and the audio data.

The data indicative of the music being played may include at least one of: a pitch of a note and a length of a note, a tempo of the music, a consistency of the tempo of the music, a consistency of the pitch of the note, a volume, a stylistic property of the music.

The system may include multiple microphones.

The sensor device may include the or a microphone. The computing module may be configured to use digital signal processing to distinguish data relating to music from noise in the audio data.

The computing module may be configured to generate a feature vector of the data relating to music, including one or more of fundamental frequency, local autocorrelation peaks, number of zero crossings, variance and mean value.

The computing module may be configured to perform autocorrelation to construct the feature vector.

The computing module may be configured to determine data indicative of the music being played by processing the feature vector using a neural network.

The computing module may be configured to process the audio data to distinguish a stream of data relating to music and a stream relating to percussive sounds.

The computing module may be configured to implement cross correlation and a finite impulse response filter to split the audio data into the stream of data relating to music and the stream relating to percussive sounds.

Autocorrelation may be performed with a set of matched filters to determine parameters of tap events associated with the stream of percussive sounds, the parameters including one or more of: time, energy, duration, tone.

The computing module may be configured to determine data indicative of the music being played by processing the tap event parameters using a neural network.

The computing module may be provided by the sensor device.

The system may further include a user device configured to receive the data indicative of the music being played from the sensor device, and to determine an output based on the received data. The output of the user device may be determined based on a comparison of the received data with a predetermined data set associated with the music being played by the instrument, the output indicating a discrepancy between the output of the instrument and the predetermined data set.

The output of the user device may be determined based on the implementation of the received data as input to a computer game operating on the user device.

The system may be suitable for processing the output of multiple musical instruments, the system including: for each instrument, an associated sensor device providing a respective vibration sensor, and an attachment mechanism for attaching the sensor device to the respective musical instrument; one or more microphones each configured to sense the sound produced when one or more of the instruments are played; and one or more computing modules configured to determine data indicative of the music being played by each of the instruments based on the sensor data and audio data.

According to a second aspect of the invention, we provide a sensor device for processing the output of a musical instrument, the sensor device including: a body providing an attachment mechanism for attachment to the musical instrument, a vibration sensor for detecting vibration of a portion of the instrument and outputting vibration data; the sensor device being operable to provide data to a computing module configured to determine data indicative of the music being played based on the vibration data and audio data representing a sound sensed when the instrument is played.

The sensor device may provide a microphone configured to sense the sound produced when the instrument is played. The sensor device may provide the computing module.

BRIEF DESCRIPTION OF THE FIGURES

In order that the present disclosure may be more readily understood, preferable embodiments thereof will now be described, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 is a diagram illustrating components of a music analysis system comprising a sound sensor (i.e., microphone and vibration sensor) and a computing module for determining aspects of the music being played, receiving an input from a musical instrument (the input comprising an audio element and a physical vibration element), and subsequently the device outputting data to a user device;

Figures 2 and 3 illustrate the sensor device connected to an instrument, and a remotely operated user device;

Figure 4 is a diagram illustrating the components of a sensor device according to embodiments of the technology;

Figure 5 is a diagram illustrating the steps carried out by components of the music analysis system;

Figures 6 to 9 illustrate an example sensor device according to embodiments of the described technology;

Figure 10 is a diagram illustrating components of another sensor device according to embodiments of the technology, this version being used with a remote microphone; and

Figure 11 is a diagram illustrating a music analysis system comprising a sensor device and multiple microphones, a user device receiving information from the sensor device and microphones (either directly or via the sensor device), and an output device in connection with the user device.

DETAILED DESCRIPTION OF THE DISCLOSURE

In broad terms and with reference to the Figures, a music analysis system 10 is provided, which includes one or more microphones 14 adapted to sense a sound played by an instrument 1, a vibration sensor 12 for detecting vibration of a portion of the instrument 1 to which it is connected and outputting vibration data, and a computing module 17 for analysing the output of the vibration sensor 12 and microphone(s) 14. Typically, a sensor device 2 provides the vibration sensor 12 and a microphone 14. By the microphone 'sensing' the sound, we mean that the microphone picks up sounds produced by the instrument while it is being played, converting the sound waves into electrical energy in the usual way.

In some embodiments of the technology, the sensor device 2 is connected to the instrument 1, the sensor device 2 providing both the vibration sensor 12 and also a microphone 14. It is important that the microphone 14 is vibrationally damped (i.e., so that any vibration of the microphone transmitted via a surface to which it is connected is minimised).

In other embodiments, the sensor device 2 provides the vibration sensor 12, but the microphone 14 may be located remote from the sensor device 2 (i.e., not physically be part of the sensor device 2). In embodiments, multiple microphones 14 are provided, the audio outputs from which may be combined for processing, and one or more of the multiple microphones 14 may be provided by the sensor device 2. The sensor device 2 preferably also includes a communications module 18 for transmitting signals to a user device 3.

The system 10 as described provides a new interactive way of teaching the playing of musical instruments 1. The system 10 provides a module for recognising and analysing played music. It is used together with a musical instrument 1 to provide analysis of the way in which the instrument 1 is played, and/or as part of an interactive game.

It may also be used in a context in which multiple sensor devices 2 are connected to multiple instruments 1, alongside one or more microphones 14.

By analysing the vibration of the or each instrument 1 and the sounds received by the microphone 14, we can determine which components of the sounds received are likely to have been made using the or each instrument 1. In this way, we can filter extraneous sounds, and focus on only the sounds generated by the instrument 1. This can be used to filter noise from the audio signals, for example, such as environmental noise.

This makes the device 2 suitable for use with groups of musicians, wherein each instrument 1 has an associated device 2 for receiving sounds made by the instrument 1, and interpreting those sounds. It may be used to determine which of multiple instruments 1 is responsible for playing individual aspects of the audio being received - in the context of a duo playing, or a string quartet, orchestra, or group of recorder-players, for example.

In embodiments of the technology, the data regarding the sounds received and vibrations sensed is processed by the sensor device 2, to provide data indicative of the music being played by the instrument. This data may then be transmitted to the user device 3 via Bluetooth, for example (or any other suitable communications protocol).

In some embodiments, the user device 3 may also transmit data to the sensor device 2.

The data indicative of the music being played (referred to as "music data") may be used to control an aspect of a game, or may be used to generate musical notation resembling the music being played, and/or may be used as an input to an application for tutoring a musician. The game and/or application may generate feedback to the user, based on the analysis of the notes being played, which may be displayed to the user via the user device 3 (i.e., via a display screen on the device, and/or in audio form). Music data / 'data indicative of the music being played' may take the form of a digital record of the notes being played (including pitch of the note, length of the note, tempo of the music), a consistency of the tempo of the music, a consistency of the pitch, data indicating volume, data indicating stylistic properties of the music such as legato or staccato playing, pizzicato, glissando, or any other musical properties as may be used in documenting a musical performance as would be understood by the skilled person.
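
By way of a minimal illustrative sketch only (the record structure and field names below are assumptions for illustration, not taken from the disclosure), such music data might be represented as:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a "music data" record; the names are
# illustrative assumptions, not part of the disclosure.
@dataclass
class NoteEvent:
    pitch_hz: float          # pitch of the note
    duration_s: float        # length of the note
    volume_db: float         # volume (dynamic level)
    style: str = ""          # e.g. "legato", "staccato", "pizzicato"

@dataclass
class MusicData:
    notes: List[NoteEvent] = field(default_factory=list)
    tempo_bpm: float = 0.0           # tempo of the music
    tempo_consistency: float = 0.0   # e.g. spread of inter-beat intervals
    pitch_consistency: float = 0.0   # e.g. spread of pitch within notes
```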

In more detail, the described technology provides a music analysis system 10 which includes a sensor device 2 that is vibration sensitive, which effectively reads the vibrations off a musical instrument 1 to give an accurate and instant reading of the notes and dynamics being played. The system 10 can work with any instrument 1 and may be used with multiple instruments 1 at the same time.

The music analysis system 10 can be used to turn musical instruments 1 into game controllers, by detecting the notes being played and properties of the notes. These properties may include one or more of those mentioned above as music data, such as the volume dynamic, stylistic qualities such as whether the notes are played legato (smooth) or staccato, and whether the rhythm is strictly regular or off the beat, for example. Games can be played using the output of the sensor device 2 as an input to the game, where the pitch of the note and/or its dynamic volume, and/or any other detected properties, are received as inputs to control one or more aspects of the game.

In addition, or alternatively, the output from the sensor device 2 may be received by a tutorial application, for the purpose of tutoring the musician learning to play or practising their instrument. In this way feedback may be given to the user via text, images displayed (which may include one or more animations), or audio feedback.

For example, the music data analysed by the music analysis system 10 may be compared to predetermined data denoting a known piece of music being played by the instrument 1, to compare the sensed music data to a predetermined set of music data. In this way, the music analysis system 10 may provide feedback to the user on one or more discrepancies in the music being played by the instrument 1, determined by differences between the sensed music data and the predetermined data (a minimal sketch of such a comparison follows the summary below).

In summary, the system 10 and method typically comprise the following general steps: a user plays music on a musical instrument 1; a microphone 14 receives the sound, and a vibration sensor 12 measures vibration of the instrument 1; a computing module 17 analyses the microphone 14 output and vibration sensor 12 data, recognises components of the sound as music being played, interprets the music (e.g., as notes or chords), and encodes the recognised music into music data; and the music data is sent in digital form to the user device 3.
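
A minimal sketch of such a comparison, assuming the hypothetical NoteEvent record outlined earlier and tolerance values that the disclosure does not specify (a real tutoring application would also align timing, e.g. with dynamic time warping):

```python
# Hypothetical comparison of sensed notes against a reference piece;
# tolerances are illustrative assumptions.
def find_discrepancies(played, reference, pitch_tol_hz=3.0, dur_tol_s=0.05):
    """Return a list of per-note discrepancy messages."""
    issues = []
    for i, (p, r) in enumerate(zip(played, reference)):
        if abs(p.pitch_hz - r.pitch_hz) > pitch_tol_hz:
            issues.append(f"note {i}: pitch {p.pitch_hz:.1f} Hz, "
                          f"expected {r.pitch_hz:.1f} Hz")
        if abs(p.duration_s - r.duration_s) > dur_tol_s:
            issues.append(f"note {i}: duration {p.duration_s:.2f} s, "
                          f"expected {r.duration_s:.2f} s")
    if len(played) != len(reference):
        issues.append(f"played {len(played)} notes, expected {len(reference)}")
    return issues
```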

The user device 3 may be a computing device such as a PC or a laptop, or may be a smart phone or tablet device, for example.

In embodiments of the technology, the music data is used as a control input to control an application such as a game or a music training application running on the user device 3.

In some embodiments of the technology, the computing module 17 also detects sounds in the received audio that are not music, and removes those extraneous sounds from the music signal. In some embodiments, the computing module 17 filters out sounds that are not produced by the instrument 1 that is the subject of the analysis, based on the vibration sensor 12 data. For example, if the musical instrument 1 is not being played (and the vibration being detected is below a given threshold), then no part of the audio is produced by the instrument 1.
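
A minimal sketch of this vibration-based thresholding, assuming frame-by-frame gating with an illustrative frame size and threshold that the disclosure does not specify:

```python
import numpy as np

# Audio frames are discarded whenever vibration energy is below a
# threshold, on the basis that the target instrument cannot then be
# the source of the sound. Frame size and threshold are assumptions.
def gate_audio_by_vibration(audio, vibration, frame=512, threshold=1e-4):
    gated = np.zeros_like(audio)
    n = min(len(audio), len(vibration)) // frame
    for i in range(n):
        sl = slice(i * frame, (i + 1) * frame)
        if np.mean(vibration[sl] ** 2) >= threshold:  # instrument vibrating
            gated[sl] = audio[sl]                     # keep this frame
    return gated
```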

With reference to Figure 4 of the drawings, in embodiments of the technology, the sensor device 2 provides a data collection module 11, a computing module 17 for calculating and analysing the sounds being sensed, a communication module 18, and a power supply 19. As shown in Figure 4, the data collection module 11 provides a vibration sensor 12 and a vibration sensor preamplifier 13. Typically the data collection module 11 also includes a microphone 14, a microphone preamplifier 15, and an audio codec chip 16.

The computing module 17 is a module chipset providing a microcontroller 44, an audio digital signal processor (DSP) 46, and a chip for neural network computation 48.

The communication module 18 is preferably a microcontroller with Bluetooth module, and/or may include other wireless communication modules as are known in the art.

Power supply 19 is preferably a battery, which may be a rechargeable battery.

The battery may be charged in situ as part of the sensor device 2, or may be removable for charging. In addition to, or instead of, a battery, the sensor device 2 may provide a wired power connection. This is less preferable than using a battery, due to the effect of the cable connection on the vibration of the sensor device 2.

With reference to Figure 5 of the drawings, the steps carried out by the computing module 17 of the music analysis system 10 are illustrated.

Subsequently, the computing module 17 outputs music data from the sensor device via a communication module 18. A Bluetooth stack module may be implemented, for example.

In other embodiments, one or more components of the computing module 17 may be provided at the user device 3, so that some or all of the processing occurs on that device 3 rather than at the sensor device 2. In these embodiments, data regarding the audio received by the microphone(s) and the vibration data sensed by the sensor device is transmitted to the user device 3 (again via Bluetooth or any other suitable communications protocol), so that the processing of the audio and vibration sensor data is carried out mainly on the user device 3. In other words, the computing module 17 and its components as described above may be implemented on a device other than the sensor device 2.

In some embodiments, and as shown in Figure 10 for example, a microphone 14 may be provided remote from the sensor device 2, and the output of the microphone (and its preamplifier 15 and optionally audio codec chip 16) may be transmitted via a transmitter 50 and received at the sensor device 2 by a receiver 52. The microphone preamplifier 15 and audio codec chip 16 in such embodiments may be provided either on the sensor device 2 or with the remote microphone. In these embodiments, where one or more microphones 14 are provided externally to the sensor device 2, this may be in addition to or instead of providing a microphone 14 on the sensor device 2 itself. Further, in embodiments in which the processing is performed at the user device 3, the remote microphone(s) may communicate the data directly to the user device 3, as illustrated in Figure 11, for example.

In broad terms, an aspect of the invention provides a specialised music controller for organising a system of learning music by playing games, which includes (see Figure 2): a musical instrument (in this case, a recorder), and a music analysis system 10 which provides music data, acting as a specialised game controller used for recognising the notes being played on that instrument.

An application for a user device 3 (e.g., smartphone, PC, video game console or other) receives the music data (e.g., recognised notes, control commands and signals) from the music analysis device. In this way a game, for example, is controlled by playing the melody on the musical instrument. The user device 3 may be connected to a further device 5, such as a display device. In this way, the user device 3 may carry out processing steps on the music data received from the sensor device 2 (and optionally from external microphones 4), and a resulting output, such as an image or video output in the context of a game, is output to the display device 5.

The described system 10 provides the following: sound detection with minimal latency and direct transfer of the commands to the end-user device; determination of music data such as the sound tone and changes or deviations in it, volume and duration, together with detection of errors in the technique of execution; automatic adaptation to the instrument and artist; a small autonomous sensor device 2 that does not interfere with the performer, is mounted directly on the instrument, has battery power and communicates with the user device (preferably using Bluetooth); and high resistance to interference and external noise, since audio is captured directly from the musical instrument using the built-in vibration sensor and microphone.

The sensor device 2 consists of hardware and software (firmware) components.

The sensor device 2 is a small device providing a body that attaches directly to an instrument 1 using an adaptive mount 36. No change to the musical instrument 1 itself is needed, allowing the device to be connected to and disconnected from an instrument 1 without altering the instrument 1.

As illustrated in Figure 4, according to embodiments of the described technology, the sensor device 2 includes the following parts: the data collection module 11, which includes the vibration sensor 12, the vibration sensor preamplifier 13 (an amplifier with very high input impedance, matching circuits, and a compressor), the microphone 14 with its preamplifier 15 and matching circuits, and the audio codec chip 16 carrying out amplitude, phase and amplitude-frequency correction of the signal and analogue-to-digital conversion; the computing module 17, the chipset of which contains a microcontroller 44, an audio DSP 46, and a neural network accelerator 48, and which is needed for implementing the IoT Edge computing architecture, supporting the hardware mathematical operations required for DSP and AI based on TensorFlow Lite; the communication module 18 for implementing Bluetooth connections, based on a microcontroller with a wireless interface supporting various communication protocols (Bluetooth is preferred); and the power supply 19 of the device, based on a battery. The battery may be a rechargeable battery that is either integrally formed with the device and recharged via a connection of a power charging cable, or else is removable for charging separately, or else may comprise one or more replaceable power cells.

The software / firmware for the music analysis device of embodiments of the described technology is preferably multithreaded and consists of the following components, which interact as shown in Figure 5.

Looking now at Figure 5, which describes the data flow within the sensor device 2 of Figure 4, we describe the steps as follows.

Analog frontend processing step 21

This processing step receives two data streams: one from the microphone 14 (which is a vibration-isolated microphone) and one from the vibration sensor 12, which is in good contact with the body of the musical instrument, via their two respective amplifiers 13, 15. The conversion of the analog signal(s) is performed by a chip for converting acoustic signals: the audio codec chip 16. In addition to the ADC, it preferably contains variable-gain input amplifiers and a mixer.

Signal splitting and cleaning at step 22

This step is implemented using a sound processing chip: the audio digital signal processor (DSP) 46. In embodiments, a HiFi 3 DSP is used, but it should be appreciated that other DSPs may be used in addition or instead. The DSP 46 receives raw digital data from the analog front end 21 and performs signal denoising.

The signal is split into two streams: one stream containing a pure music signal, and a second stream containing taps and percussive sounds (i.e., the beating or striking of a musical instrument, those sounds being made either intentionally or unintentionally as the instrument is played). The DSP 46 uses cross correlation and finite impulse response (FIR) filters (and may use other filters, as will be appreciated by the skilled person). A crude sketch of such a split follows.
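
In the sketch below, an FIR band-pass retains the tonal (music) band and the residual is treated as the taps/percussive stream; the cut-off frequencies, filter order and residual-based approach are assumptions, as the disclosure does not give these details:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

# Illustrative split of the input into a "music" stream and a
# "taps/percussive" stream using a zero-phase FIR band-pass.
def split_music_and_taps(x, fs, lo=80.0, hi=4000.0, ntaps=255):
    band = firwin(ntaps, [lo, hi], pass_zero=False, fs=fs)  # FIR band-pass
    music = filtfilt(band, [1.0], x)   # stream 1: "pure" music signal
    taps = x - music                   # stream 2: taps and percussive sounds
    return music, taps
```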

Feature extraction at step 23

This step is implemented using the sound processing chip (i.e., the DSP 46 as used for signal splitting and cleaning, above) and the microcontroller 44. The stream containing the pure music signal is received from step 22 above, and is normalised. A feature vector is generated. In embodiments of the technology, this is performed using a 512-point data window with 50 percent overlap, and the feature vector consists of fundamental frequency, local autocorrelation peaks, number of zero crossings, variance and mean value. It should be understood that a different data window and overlap could be used to generate the feature vector.

In embodiments, the main DSP operation for constructing the feature vector is autocorrelation. It should be appreciated that the following algorithms can also be used, as alternatives, to analyse polyphony: Gabor transform spectral analysis, gammatone filters, and the constant-Q transform.
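
A minimal sketch of this feature extraction, using the 512-sample windows with 50 percent overlap described above and an autocorrelation-based fundamental-frequency estimate (the f0 search range is an assumption):

```python
import numpy as np

# Per-frame feature vector: fundamental frequency, normalised local
# autocorrelation peak, zero crossings, variance, mean value.
def frame_features(x, fs, win=512, f0_min=80.0, f0_max=2000.0):
    hop = win // 2                                        # 50 percent overlap
    feats = []
    for start in range(0, len(x) - win + 1, hop):
        w = x[start:start + win] * np.hanning(win)
        ac = np.correlate(w, w, mode="full")[win - 1:]    # autocorrelation
        lag_lo = int(fs / f0_max)
        lag_hi = min(int(fs / f0_min), win - 1)
        lag = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))  # strongest peak
        f0 = fs / lag                                     # fundamental freq.
        zc = int(np.sum(np.abs(np.diff(np.sign(w))) > 0)) # zero crossings
        feats.append([f0, ac[lag] / (ac[0] + 1e-12), zc,
                      np.var(w), np.mean(w)])
    return np.asarray(feats)
```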

Tap event detection at step 24

Tap event detection is implemented using the sound processing chip (DSP 46) and the microcontroller 44. In this step, the stream containing all taps and percussive noises is received.

A set of matched filters is used, with autocorrelation, to determine parameters of the tap event (which may include time, energy, duration and basic tone).
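
A minimal sketch of such matched-filter tap detection (the template, detection threshold and duration estimate are illustrative assumptions; basic tone could additionally be estimated from each segment's spectrum):

```python
import numpy as np
from scipy.signal import correlate

# The percussive stream is correlated against a stored tap template;
# peaks above a threshold are reported as tap events.
def detect_taps(taps_stream, template, fs, threshold=0.5):
    score = correlate(taps_stream, template, mode="valid")
    score = score / (np.max(np.abs(score)) + 1e-12)   # normalise to [-1, 1]
    events = []
    i = 0
    while i < len(score):
        if score[i] > threshold:
            seg = taps_stream[i:i + len(template)]
            events.append({
                "time_s": i / fs,                      # onset time
                "energy": float(np.sum(seg ** 2)),     # event energy
                "duration_s": len(template) / fs,      # crude duration
            })
            i += len(template)                         # skip past this event
        else:
            i += 1
    return events
```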

Event detection at step 25

This step is implemented using the microcontroller 44 and the chip for neural network computation 48 (i.e., a neural network accelerator).

The music feature vector and the tap event parameters are both received and processed to define music data: i.e., the parameters defining the music and the player's actions. To achieve this, a convolutional neural network and Bayesian predictive methods are used to determine the nature of the music being played, using the tap event parameters to assist in establishing which notes are being played by the target instrument 1 (i.e., the one to which the sensor device 2 is attached).
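
As an illustration only, a small convolutional classifier over the per-frame feature vectors might look as follows; the layer sizes and the number of classes are assumptions, since the disclosure specifies only that a convolutional neural network is used and that the hardware supports TensorFlow Lite:

```python
import tensorflow as tf

NUM_CLASSES = 64   # hypothetical number of note/event classes

def build_note_classifier(frames=16, features=5):
    # 1D convolutions over a short sequence of feature vectors.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(frames, features)),
        tf.keras.layers.Conv1D(16, 3, activation="relu"),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

# For on-device (Edge) inference, the model could be converted with:
# tf.lite.TFLiteConverter.from_keras_model(build_note_classifier()).convert()
```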

Data transmission and device control at step 26

This step is implemented using the communication module 18, which implements a microcontroller with a radio interface. In embodiments, and as described above, the main communication protocol is Bluetooth, though others may be implemented.

The communication module 18 may transmit data to the user device 3 in different user-selectable modes, including: a mixed music and percussion signal (i.e., data about the music and the percussive sounds); the code of the note currently playing; MIDI; and, in an advanced mode, all detectable playing errors and music parameters.
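
As a sketch of the MIDI mode, a detected note could be encoded as a standard MIDI note-on message (status byte 0x90 | channel, key number, velocity); the mapping chosen here from frequency to key number is standard, but using a fixed velocity is an assumption:

```python
import math

# Encode a detected pitch as a MIDI note-on message.
def midi_note_on(pitch_hz, velocity=96, channel=0):
    key = int(round(69 + 12 * math.log2(pitch_hz / 440.0)))  # Hz -> MIDI key
    return bytes([0x90 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

# Example: A4 at 440 Hz -> b'\x90\x45\x60' (note-on, key 69, velocity 96)
```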

In addition, the system 10 may perform updates to program settings (such as the mode of transmission) and upload new firmware from the user device 3, for example.

In an alternative embodiment of the system described, the steps 24, 25, 26 may be replaced with a single deep-learning module, capable of determining complex patterns in the data streams.

Looking at Figures 6 to 9 of the drawings, the sensor device 2 provides a body 30 housing the processor and other components. A communication module 32 is provided for sending data to and/or receiving data from a remote user device 3. In some embodiments, the communication module 32 may not be wireless: a wired connection may be used, for example. More commonly, however, the sensor device 2 provides Bluetooth connectivity or any other suitable wireless communication protocol.

The sensor device 2 further provides the hardware components of the data collection module 11, illustrated at 34, having a vibration sensor 12 for sensing vibration of the body of the instrument 1, and optionally a microphone 14 for receiving sound (in the typical embodiments as outlined above). A connection arrangement 36 is provided, adapted to connect the device to a portion of an instrument 1. Alternative connection arrangements may be provided for connecting a device to multiple types of instrument 1, or to instruments 1 of different sizes or styles. The connection arrangements may be detachable and/or interchangeable.

The device 2 further provides an on/off switch 38, and one or more indicators such as LEDs for indicating a status of the device 2 (such as its remaining battery power or its connectivity to a user device, for example).

A port 42 may be provided for allowing a physical connection to be made for charging using a charging power cable (such as a USB-C socket or any other suitable connector or socket type). In embodiments, the wired connection may allow transmission of data to or from the device in addition to or instead of charging capabilities.

When used in this specification and claims, the terms "comprises" and "comprising" and variations thereof mean that the specified features, steps or integers are included. The terms are not to be interpreted to exclude the presence of other features, steps or components.

The invention may also broadly consist in the parts, elements, steps, examples and/or features referred to or indicated in the specification individually or collectively in any and all combinations of two or more said parts, elements, steps, examples and/or features. In particular, one or more features in any of the embodiments described herein may be combined with one or more features from any other embodiment(s) described herein.

Protection may be sought for any features disclosed in any one or more published documents referenced herein in combination with the present disclosure. Although certain example embodiments of the invention have been described, the scope of the appended claims is not intended to be limited solely to these embodiments. The claims are to be construed literally, purposively, and/or to encompass equivalents.