Title:
IMPROVEMENTS IN OR RELATING TO CONTENT DETECTION
Document Type and Number:
WIPO Patent Application WO/2020/188293
Kind Code:
A1
Abstract:
A method for a mobile device comprising an image capture component and a display component. The method comprises the steps of receiving at least one input frame from the image capture component, analysing the at least one input frame to detect the presence of flagged content, and filtering the at least one input frame if flagged content is detected.

Inventors:
MERCER HANNAH (GB)
Application Number:
PCT/GB2020/050747
Publication Date:
September 24, 2020
Filing Date:
March 20, 2020
Assignee:
CENSORPIC LTD (GB)
International Classes:
G06V10/44
Foreign References:
US20120039539A12012-02-16
US20160034717A12016-02-04
Attorney, Agent or Firm:
LAWRIE IP LIMITED (GB)
Claims:
CLAIMS

1. A method for a mobile device comprising an image capture component and a display component, the method comprising the steps of:

receiving at least one input frame from the image capture component;

analysing the at least one input frame to detect the presence of flagged content; and

filtering the at least one input frame if flagged content is detected.

2. A method according to claim 1, further comprising:

transmitting the at least one input frame to the display component.

3. A method according to claim 1 or claim 2, further comprising storing the at least one input frame in a memory element.

4. A method according to claim 3, wherein:

the step of analysing comprises analysing the at least one input frame stored in the memory element to detect the presence of flagged content; and

the step of filtering comprises applying a first filter to the at least one input frame stored in the memory element if flagged content is detected.

5. A method according to any preceding claim, wherein the step of analysing comprises:

identifying at least a portion of an image feature in the at least one input frame; and

comparing the identified image feature with a set of flagged features.

6. A method according to claim 5, wherein the step of identifying comprises:

detecting a potential image feature in the at least one input frame;

deriving a first probability of the potential image feature being identical to at least one of a first plurality of image features by using a first identification mechanism; and

selecting the one of the first plurality of image features having the highest probability value as the identified image feature.

7. A method according to claim 6, wherein a plurality of first probabilities of the potential image feature being identical to at least one of a first plurality of image features is derived.

8. A method according to claim 6 or claim 7, wherein at least one of the step of detecting or the step of deriving comprises performing at least one additional operation on at least part of the at least one input frame, the at least one additional operation being operable to increase the likelihood of either or both of the step of detecting or the step of deriving being successfully completed.

9. A method according to claim 8, further comprising:

determining that the at least one additional operation has not successfully increased the likelihood of either or both of the step of detecting or the step of deriving being successfully completed;

ignoring the at least one input frame; and

selecting at least one of a subsequent or a preceding input frame to be used in the method.

10. A method according to any of claims 1 to 4, wherein the step of analysing comprises:

identifying an image classification of the at least one input frame; and

comparing the identified classification with a set of flagged image classifications.

11. A method according to claim 10, wherein the step of identifying an image classification comprises:

deriving at least one second probability of at least a portion of at least one input frame having at least one of a plurality of possible image classifications; and

selecting the at least one of the plurality of possible image classifications having the highest probability value as the identified image classification.

12. A method according to claim 11, wherein a plurality of second probabilities of at least a portion of the at least one input frame having at least one of a plurality of possible image classifications is derived.

13. A method according to claim 11 or claim 12, wherein at least one of the step of detecting or the step of deriving comprises performing at least one additional operation on at least part of the at least one input frame, the at least one additional operation being operable to increase the likelihood of either or both of the step of detecting or the step of deriving being successfully completed.

14. A method according to claim 13, further comprising:

determining that the at least one additional operation has not successfully increased the likelihood of either or both of the step of detecting or the step of deriving being successfully completed;

ignoring the at least one input frame; and

selecting at least one of a subsequent or a preceding input frame to be used in the method.

15. A method according to any of claims 11 to 14, further comprising:

comparing the at least one second probability with a threshold value and ignoring the at least one second probability if the second probability is below the threshold value.

16. The method according to any preceding claim, wherein the analysis step is performed on one or more portions of the input frame, each of said one or more portions being associated with at least one of a plurality of characteristics of the input frame.

17. The method according to claim 16, wherein the characteristics comprise one or more of: colour space; or file-specific divisions.

18. The method according to any preceding claim, further comprising applying a first filter to at least a portion of the input frame.

19. The method according to claim 18, wherein applying the first filter comprises modifying at least a first characteristic of at least the portion of the at least one input frame.

20. The method according to claim 19, wherein the first characteristic comprises a uniform filter.

21. The method according to claim 19 or claim 20, wherein the first characteristic comprises one or more of: blurring, pixelation, a colour overlay, a mask, saturation, contrast, gamma or colour balance.

22. The method according to any of claims 19 to 21, wherein applying the first filter comprises irreversibly applying the first filter to the at least one input frame.

23. The method according to any preceding claim, wherein the image capture component is a camera component.

24. The method according to any of claims 1 to 22, wherein the image capture component is a software application operable to be run on the mobile device, the software application being operable to receive the at least one input frame from a corresponding software application that is operable to run on a remote mobile device.

25. A mobile device comprising means for performing the method of any of claims 1 to 24.

26. A computing device comprising means for performing the method of any of claims 1 to 24.

27. A computer system comprising a plurality of computing devices, wherein the plurality of computing devices each comprise means for performing at least part of the method of any of claims 1 to 24.

Description:
IMPROVEMENTS IN OR RELATING TO CONTENT DETECTION

Field of the invention

This invention relates to detection of content in media and particularly, but not exclusively, to detection of illegal or unlawful content in media.

Background to the invention

The use of online communications and media sharing is increasing, in particular between individuals. It is becoming commonplace for many individuals to share messages, images and other media with other individuals, e.g., friends, family or even strangers. This includes the sharing of images and other media with private or explicit content, e.g., images or media with pornographic content.

‘Sexting’ (i.e., sharing of explicit images via a messaging service) is an ever-increasing problem for children and teenagers. A study by the UK Government Children’s Commissioner has revealed that 60% of teenagers have been asked for naked photos of themselves and that 40% have taken nude ‘selfies’ (i.e., images of themselves). A further study, conducted by the UK National Society for the Prevention of Cruelty to Children, has found that, of children or teenagers that have sent nude selfies, 58% sent such images to a boyfriend or girlfriend.

However, 1 in 3 respondents in the study stated that they had sent nude selfies to a person that they did not know.

There is, therefore, a need to control or prevent the dissemination of images with such content. However, once an image has been shared (or has otherwise been transmitted from the original device), it is nearly impossible to control or prevent the further dissemination or sharing of the image. There is, therefore, a need to control or prevent such images or media from being obtained in the first place. Furthermore, there is a need for controlling or preventing such images or media from being shared or disseminated.

The inventors have appreciated the shortcomings of existing technologies in this field.

Summary of the invention

In accordance with a first aspect of the invention, there is provided a method for a mobile device comprising an image capture component and a display component, the method comprising the steps of:

receiving at least one input frame from the image capture component;

analysing the at least one input frame to detect the presence of flagged content; and

filtering the at least one input frame if flagged content is detected.

The at least one input frame may be transmitted to the display component. The at least one input frame may be stored in a memory element.

The step of analysing may comprise analysing the at least one input frame stored in the memory element to detect the presence of flagged content; and the step of filtering may comprise applying a first filter to the at least one input frame stored in the memory element if flagged content is detected.

The step of analysing may comprise: identifying at least a portion of an image feature in the at least one input frame; and comparing the identified image feature with a set of flagged features.

The step of identifying may comprise:

detecting a potential image feature in the at least one input frame;

deriving a first probability of the potential image feature being identical to at least one of a first plurality of image features by using a first identification mechanism; and

selecting the one of the first plurality of image features having the highest probability value as the identified image feature.

A plurality of first probabilities of the potential image feature being identical to at least one of a first plurality of image features may be derived.

At least one of the step of detecting or the step of deriving may comprise performing at least one additional operation on at least part of the at least one input frame, the at least one additional operation being operable to increase the likelihood of either or both of the step of detecting or the step of deriving being successfully completed. The method may further comprise:

determining that the at least one additional operation has not successfully increased the likelihood of either or both of the step of detecting or the step of deriving being successfully completed;

ignoring the at least one input frame; and

selecting at least one of a subsequent or a preceding input frame to be used in the method.

The step of analysing may comprise:

identifying an image classification of the at least one input frame; and comparing the identified classification with a set of flagged image classifications.

The step of identifying an image classification may comprise:

deriving at least one second probability of at least a portion of at least one input frame having at least one of a plurality of possible image classifications; selecting the at least one of the plurality of possible image classifications having the highest probability value as the identified image classification.

A plurality of second probabilities of at least a portion of the at least one input frame having at least one of a plurality of possible image classifications may be derived.

At least one of the step of detecting or the step of deriving may comprise performing at least one additional operation on at least part of the at least one input frame, the at least one additional operation being operable to increase the likelihood of either or both of the step of detecting or the step of deriving being successfully completed. The method may further comprise:

determining that the at least one additional operation has not successfully increased the likelihood of either or both of the step of detecting or the step of deriving being successfully completed;

ignoring the at least one input frame; and selecting at least one of a subsequent or a preceding input frame to be used in the method.

The method may further comprise comparing the at least one second probability with a threshold value and ignoring the at least one second probability if the second probability is below the threshold value.

The analysis step may be performed on one or more portions of the input frame, each of said one or more portions being associated with at least one of a plurality of characteristics of the input frame. The characteristics may comprise one or more of: colour space; or file-specific divisions.

The method may comprise applying a first filter to at least a portion of the input frame. Applying the first filter may comprise modifying at least a first characteristic of at least the portion of the at least one input frame. The first characteristic may comprise a uniform filter. The first characteristic may comprise one or more of: blurring, pixelation, a colour overlay, a mask, saturation, contrast, gamma or colour balance. Applying the first filter may comprise irreversibly applying the first filter to the at least one input frame.

The image capture component may be a camera component. The image capture component may be a software application operable to be run on the mobile device, the software application being operable to receive the at least one input frame from a corresponding software application that is operable to run on a remote mobile device.

In accordance with a second aspect of the invention, there is provided a mobile device comprising means for performing any of the methods as set out above.

In accordance with a third aspect of the invention, there is provided a computing device comprising means for performing any of the methods as set out above.

In accordance with a fourth aspect of the invention, there is provided a computer system comprising a plurality of computing devices, wherein the plurality of computing devices each comprise means for performing at least part of any of the methods as set out above.

Brief description of the drawings

An embodiment of the invention will now be described, by way of example, with reference to the drawings, in which:

Figure 1 schematically illustrates an exemplary device;

Figure 2 shows an exemplary method in accordance with a first embodiment of the invention;

Figure 3 schematically illustrates the method of Figure 2;

Figure 4 shows an exemplary method in accordance with a second embodiment of the invention;

Figure 5 schematically illustrates the method of Figure 4;

Figure 6 illustrates a step of the exemplary method shown in Figure 4;

Figure 7 schematically shows the step of Figure 6;

Figure 8 shows an exemplary method in accordance with a third embodiment of the invention;

Figure 9 schematically illustrates the method of Figure 8;

Figure 10 illustrates a step of the exemplary method shown in Figure 8;

Figure 11 schematically shows the step of Figure 10;

Figure 12 illustrates an exemplary method in accordance with a fourth embodiment of the invention;

Figure 13 shows schematically the method of Figure 12;

Figure 14 shows an exemplary method in accordance with a fifth embodiment of the invention;

Figure 15 shows an exemplary method in accordance with a sixth embodiment of the invention;

Figure 16 illustrates an exemplary method in accordance with a seventh embodiment of the invention; and

Figure 17 schematically illustrates exemplary methods in accordance with an eighth embodiment of the invention.

Description of the preferred embodiments

Before describing the exemplary embodiments of the invention, it may be illustrative to describe an exemplary environment in which the exemplary embodiments may be implemented. It will, of course, be appreciated that the following environment is exemplary only, and not intended to be limiting. Other environments, comprising alternative or additional components, may easily be envisaged.

Figure 1 schematically illustrates a mobile device 100 (e.g., a mobile phone, tablet device, personal computer, camera or other electronic device). The mobile device comprises an image capture component 102, a central processing unit 104, a memory element 106 and a display component 108. It will be appreciated that the mobile device may comprise additional components which, for purposes of conciseness and ease of explanation only, are not shown in Figure 1. Such components include (without limitation): communication components (e.g., wireless transceivers); illumination components, control/interaction components or components providing additional functionality (e.g., GPS or NFC components).

The image capture component 102 may comprise any suitable elements or features. In some examples, the image capture component is a camera component comprising at least one image capture element and an imaging element (e.g., one or more lenses, prisms, gratings or other optical components). In other examples, the image capture component is a component that is connected to a camera component, including (but not limited to): a camera control application; a remote connection component that is connected to a remote camera component; a software application associated with handling of input provided by a camera component; or a software application associated with handling of input provided by a second software application connected with a camera component.

In an example, the image capture component is a software application that interfaces with a camera component. In other examples, the image capture component is operable to receive at least one input frame from an image providing component, which may be located remotely from the mobile device. In some examples, the image capture component is a software application that interfaces with a remotely located corresponding software application (e.g., by way of the communications interface of the mobile device). In an example, the image capture component is a media sharing software application. In another example, the image capture component is an instant messaging application. In yet another example, the image capture component is a social media application.

In some examples, the display component 108 is a display screen. In some examples, the display component may be a viewfinder (e.g., as may be found on a digital camera). In some examples, the mobile device comprises a plurality of display components. During use, the display component will typically be operable as a viewfinder to the image capture component, displaying objects currently being imaged by the image capture component.

Mobile devices (e.g., mobile phones, tablets, digital cameras or laptop computers) are in common usage and are one of the most common devices for capturing images and/or video clips. Given the ubiquity of such devices, it is inevitable that some users capture images with private or adult content, whether intentionally or not. For example, some users may capture images of other persons in a state of partial or complete nudity. In some cases, such images may have the consent of the other persons, but in other instances the user does not seek consent before capturing the images.

Due to their nature, images containing nudity or other explicit or potentially compromising material can be used for a number of illegal or morally questionable purposes. Examples of this include revenge pornography (also referred to more generally as ‘image-based sexual abuse’), online abuse or shaming, sexting, upskirting, or downblousing. Given the high mobility and typically small size of mobile devices, use of such devices for illegitimate, unlawful or illegal purposes is common. There is, therefore, a need to limit or prevent capture and distribution of images with illegal or questionable content.

Furthermore, once images with illegal or questionable content are acquired, it is difficult for a person (e.g., a victim of revenge pornography or upskirting) to limit or prevent distribution of such images. The inventor has realised that an effective method for prevention of capture and distribution of questionable imagery is by preventing such images from being captured or acquired in the first place.

However, the inventor has further realised that there is a need for preventing or limiting further distribution of such images once they are acquired.

A first exemplary method in accordance with the present invention will now be described with reference to Figure 2 and Figure 3. The method may, for example, be implemented in a mobile device comprising an image capture component and a display component (such as described with reference to Figure 1).

In a first step 210, at least one input frame 310a is received from an image capture component 302. The input frame may be received in any suitable fashion and from any suitable image capture component. In some examples, the image capture component is internal to the mobile device (not shown) or forms an integral part of the mobile device. In an example, the image capture component is a camera component such as may be found on a mobile telephone. Such camera components are typically integrated within the casing of the mobile device.

In some examples, the image capture component is external to the mobile device, but connected therewith by a suitable connection. This approach may be used in situations where a user has a mobile device that does not include a camera, or in which a user may wish to use a specific camera. In an example, the image capture component is an external camera connected to the mobile device by a suitable connection. Suitable connections may include (without limitation): wired connections (e.g., USB, HDMI or other proprietary or open standards); or wireless connections (e.g., Bluetooth, ZigBee, WiFi, infrared or RF). The image capture component may, in some examples, form part of another device or vehicle, such as (without limitation): surveillance cameras or units; remote-controlled vehicles (e.g., drones); other camera- or sensor-containing devices or apparatuses.

In some examples, the image capture component is a software application that is installed on or otherwise operable to be executed by the mobile device. In an example, the image capture component is a software application that interfaces with a camera component. In other examples, the image capture component is operable to receive at least one input frame from an image providing component, which may be located remotely from the mobile device. In some examples, the image capture component is a software application that interfaces with a remotely located corresponding software application (e.g., by way of the communications interface of the mobile device). In an example, the image capture component is a media sharing software application. In another example, the image capture component is an instant messaging application. In yet another example, the image capture component is a social media application.

It will be appreciated that, whilst only a single such image capture component is described in the present example, the mobile device could, in principle, equally well comprise or be connected with a plurality of image capture components. For example, some mobile devices comprise camera components on opposing sides thereof (e.g., a “front” camera and a “rear” camera). It will be appreciated that the present example, as well as those described in the following, is applicable irrespective of which image capture component, or the number thereof, is used to capture the input frame.

The first step may be performed by any suitable component of the mobile device. In some examples, the input frame may be temporarily stored in a memory component 306 of the mobile device or may be stored in a temporary memory storage of the mobile device. In other examples, the input frame is transmitted from the image capture component directly to a processing or analysis component (e.g. an image analysis component 312 or the central processing unit 304).

In a second step 220, the at least one input frame is analysed to detect the presence of flagged content. The analysis step may be carried out in a suitable fashion. In some examples, the analysis step is performed by the central processing unit 304 of the device. In other examples, the analysis step is performed by an image analysis component 312. The image analysis component may be implemented in any suitable fashion.

In some examples an image analysis component 312 or algorithm is used to detect the presence of image features 314 or objects within the at least one input frame. The analysis step may be carried out in any suitable manner. The specific analysis performed on the input frame may, in some examples, depend on the properties, characteristics or parameters of the input frame. For example, some input frames may be encoded or formatted with a specific format or file type.

Exemplary encodings or file formats include (but are by no means limited to): proprietary or non-proprietary raw image formats (e.g., IIQ, 3FR, DCR, K25, KDC, CRW, CR2, CR3, ERF, MEF, NEF, ORF, PEF, RW2, ARW, SRF or SR2); TIFF; or DNG.

The analysis may be carried out on any relevant portion of the input frame. In some examples, the analysis is performed on substantially the entirety of the input frame. In some input frame formats, individual frames may be broken down into a plurality of portions or sub-sections, wherein each such portion or sub-section is associated with or represents one or more specific characteristics, parameters or properties of the input frame. In some examples, the characteristics, parameters or properties include (without limitation): colour space (e.g., CMYK, RGB, YUV, HSB, HSL, YIQ or TSL); or file-specific divisions (e.g., so-called ‘layers’). In other examples, the input frame format may be of a specific type in which the content is progressively rendered, loaded or updated. In such examples, the analysis may be performed on the input frame in any relevant or suitable stage of rendering, loading or updating.

It will, of course, be appreciated that a number of specific analysis methodologies may be employed to perform feature or object detection within the at least one input frame. In some examples, as described above, the specific methodology may depend on the properties or characteristics of one or more of: the at least one input frame; the image capture component; the central processing unit; the display component; or the image analysis component. It will also be appreciated that a number of known analysis methodologies exist, including (without limitation): TensorFlow; TensorFlow Lite Classify; or TensorFlow Lite Detect.
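
Purely by way of illustration, a minimal sketch of how one of these methodologies might be invoked on a single input frame is given below. The model file name ("detect.tflite"), the input pre-processing and the SSD-style output layout are assumptions made for the purposes of the sketch and do not form part of the disclosure:

```python
# A minimal, illustrative sketch (not the patented implementation): running a
# TensorFlow Lite detection model over one input frame.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detect.tflite")  # hypothetical model
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def analyse_frame(frame_rgb: np.ndarray):
    """Run detection on one H x W x 3 frame and return the raw model outputs."""
    _, h, w, _ = input_details[0]["shape"]
    # Resize the frame to the tensor shape the model expects.
    resized = tf.image.resize(frame_rgb[np.newaxis, ...], (int(h), int(w)))
    batch = resized.numpy().astype(input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], batch)
    interpreter.invoke()
    # SSD-style detectors typically emit boxes, class indices, scores, count.
    return [interpreter.get_tensor(d["index"]) for d in output_details]
```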

The second step 220 may be performed by any suitable component or element of the mobile device or by an element suitably connected to the mobile device. In some examples, the second step is performed by a central processing unit 304 of the mobile device. In other examples, the second step is performed by an image analysis component 312. In yet other examples, the second step is performed by a processing component (not shown) located remotely from the mobile device. In such an example, the input frame received from the image capture component is transmitted to a remote processing unit (e.g., located in a remote server or “cloud” device), subsequent to which it is retransmitted to the mobile device.

In some examples, the analysis step may comprise a plurality of sub-steps, as will be described in more detail in following examples.

In a third step 230, the at least one input frame 310b is filtered if flagged content is detected. By filtering an input frame containing flagged content, display or reproduction of that content may be prevented or discouraged without impacting the operation of the image capture component or display component.

The at least one input frame may be filtered in any suitable fashion. In some examples, the filtering step comprises applying a first filter to at least a portion of the at least one input frame. In an example, applying a first filter comprises modifying at least a first characteristic of at least the portion of the at least one input frame. In some examples, the first characteristic is modified in a substantially reversible fashion. In other examples, the first characteristic is modified in a substantially irreversible fashion. The first characteristic may comprise any suitable characteristic or parameter associated with the input frame. In some examples, the first characteristic comprises a uniform filter. In other examples, the first characteristic comprises an image effect, including (but not limited to) blurring, pixelation, or other similar effect. In some examples, the first characteristic comprises a colour overlay or mask. In some examples, the first characteristic comprises a setting associated with the input frame, e.g., (without limitation) saturation, contrast, image “gamma” or colour balance.

In a specific example, the filtering step comprises irreversibly applying a uniform black colour to the entirety of the input frame. By applying the first filter to the whole or a part of the input frame, any flagged content in the input frame can be occluded or the viewing thereof otherwise prevented. In addition to preventing a user from viewing such content, this also prevents permanent images of the flagged content from being obtained.
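
A hedged sketch of such filtering operations follows; the blur radius and pixelation factor are arbitrary illustrative parameters, not values taken from the disclosure:

```python
# Illustrative sketch only: applying a first filter to a frame held as an
# H x W x 3 uint8 NumPy array.
import numpy as np
from PIL import Image, ImageFilter

def apply_first_filter(frame: np.ndarray, mode: str = "black") -> np.ndarray:
    if mode == "black":
        # Uniform black colour over the entire frame (irreversible: the
        # original pixel values are discarded).
        return np.zeros_like(frame)
    img = Image.fromarray(frame)
    if mode == "blur":
        img = img.filter(ImageFilter.GaussianBlur(radius=12))
    elif mode == "pixelate":
        small = img.resize((max(1, img.width // 16), max(1, img.height // 16)))
        img = small.resize(img.size, Image.NEAREST)
    return np.asarray(img)
```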

Furthermore, the application of a visible filter to an input frame serves as an indicator to a user that the content in the input frame is of a flaggable nature.

In use, the above-described method will apply a filter to any input frames in which flagged content is present. If an input frame does not contain any flagged content, the filter is not applied and the input frame is not otherwise altered (whether reversibly or irreversibly).

This prevents the imaging (and storage) of any flagged content whilst preserving full functionality of the image capture device and the mobile device more generally. Contrary to known methods in known devices, all of the elements and components of the mobile device remain fully functional during use. In known devices, if flagged content is detected, further imaging or storage of such content is typically prevented by either interrupting the function of the image capture component or by interrupting the function of the display component. Whilst this accomplishes the goal of preventing imaging or storage of flagged content, the interruption of functionality typically requires restarting or reinstating the functionality of the interrupted components. This requires device resources, which in mobile devices may be limited, and imposes a delay on the user being able to operate the device, e.g., due to firmware or applications being restarted or restored.

It will be appreciated that, although described in the above example in sequence, the method steps may, in some examples, be performed in an alternative sequence or in parallel. This will be described in more detail in the following.

Furthermore, the exemplary method may in some examples comprise additional and/or optional steps. A number of these will now be described in more detail. Whilst these additional steps are described in sequence in the present example, it will be appreciated that, in some examples, the steps may be performed in a different order than the one described. It will further be appreciated that, in some examples, not all of the described additional and/or optional steps are performed.

In some examples, the method comprises a step 240 of transmitting the at least one input frame 310b to the display component 308. In an example, the transmission of the at least one input frame enables a user to discern the contents of the input frame and to decide on any further action. In effect, the display component functions as a viewfinder for the image capture component.

In some examples, and as described above, the method comprises a step 250 of storing the at least one input frame in a memory element 306. The storing step may be an automatic step, or it may require one or more prompts or inputs before it is carried out. For example, the user may be required to press a button or otherwise interact with the mobile device in order for the storing step to be carried out. In some examples, the at least one input frame is stored in a non-volatile or permanent memory element of the device. In other examples, the at least one input frame is stored in a volatile or non-permanent memory element of the device. In yet other examples, the at least one input frame is stored in a memory element connected to the device but not integral therewith, including (but not limited to): a flash memory element; or a remotely located memory element.

During normal operation of the method described above (as well as the methods described in the following), input frames containing flagged content are filtered. For example, if the image capture component 302 acquires an input frame containing flagged content (e.g., by a user pointing a camera component towards flagged content, such as a person in a state of nudity), the display component of the device may be blacked out to prevent acquisition of images. However, it is inevitable that there is a delay between flagged content being contained in an input frame and the filtering step occurring, due to the need for the central processing unit (or indeed any other processing or computing element) to carry out the analysis step described above. Even though this delay may be extremely short (e.g., in the order of milliseconds), it is, in theory, possible that an image could be acquired and/or stored in a memory element (as described above) during this delay. In order to overcome this, in some examples, the step of analysing 220 comprises analysing the at least one input frame stored in the memory element to detect the presence of flagged content, and the step of filtering 230 comprises applying a first filter to the at least one input frame stored in the memory element if flagged content is detected. Any suitable memory element may be used to store the at least one input frame, including (but not limited to): permanent memory elements; or non-permanent or volatile memory elements.
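
The following sketch illustrates one possible ordering of these steps, in which the frame is buffered, analysed in the buffer and, where necessary, filtered in place before anything is released to the display or permanent storage. contains_flagged_content and apply_filter are hypothetical stand-ins for the analysis and filtering steps described above:

```python
# Illustrative sketch only: analyse-then-filter on the buffered copy of the
# frame, so that no unfiltered frame can be captured during the analysis delay.
def contains_flagged_content(frame) -> bool:
    ...  # stand-in for the analysis step (step 220) described above

def apply_filter(frame):
    ...  # stand-in for the filtering step (step 230) described above

def process_frame(frame, buffer: list):
    """Buffer first, analyse in the buffer, filter in place, then display."""
    buffer.append(frame)                  # step 250: hold the frame in memory
    if contains_flagged_content(frame):   # step 220: analyse the stored frame
        buffer[-1] = apply_filter(frame)  # step 230: filter it in the buffer
    return buffer[-1]                     # step 240: only then release to display
```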

A second exemplary method in accordance with the present invention will now be described with reference to Figure 4 and Figure 5. For ease of comparison with Figure 2 and Figure 3, elements of Figure 4 and Figure 5 similar to corresponding elements of Figure 2 and Figure 3 are labelled with reference signs similar to those used in these Figures, but with prefixes “4” and “5” instead of “2” and “3”.

In a first step 410, at least one input frame 510a is received from the image capture component 502. The input frame may be received in any suitable fashion and from any suitable image capture component. In an example, the at least one input frame is received in a manner substantially similar to that described in the above example.

In a second step 420, the at least one input frame is then analysed to detect the presence of flagged content, as substantially described above. In the present example, the analysis step is carried out by a central processing unit 504 of the mobile device. The analysis step, as described with respect to Figures 2 and 3, may be carried out in any suitable fashion and may comprise any suitable number of discrete steps. Purely by way of example, the present second step 420 comprises a first analysis step 421 and a second analysis step 422.

In the first analysis step 421, at least a portion of an image feature 514 is identified in the at least one input frame. The image feature, or a portion thereof, may be identified in a suitable manner by way of a suitable algorithm or other mechanism. It will be appreciated that a number of suitable methodologies exist for this purpose that may be used in the present exemplary method.

In the second analysis step 422, the identified image feature 520 is compared with a set of flagged content 516. The comparison step may be carried out in any suitable manner. The set of flagged content may be obtained from any suitable source. In some examples, the set of flagged content is stored in the memory of the mobile device. In other examples, the set of flagged content is stored remotely from the mobile device. In other examples, the set of flagged content is stored on a central server, but is transmitted to the mobile device prior to the comparison step, e.g., in response to a transmission request transmitted by the mobile device to the server. This allows the mobile device to constantly utilise the most up-to-date set of flagged content.
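
By way of illustration only, the comparison and the optional server refresh might be sketched as follows; the endpoint URL, the JSON field name and the fallback labels are hypothetical placeholders:

```python
# Illustrative sketch only: comparing an identified image feature against a
# set of flagged features that may be refreshed from a central server.
import json
import urllib.request

FLAGGED_SET_URL = "https://example.invalid/flagged-set"  # hypothetical endpoint
LOCAL_FALLBACK = frozenset({"nudity", "exposed_skin"})   # placeholder labels

def fetch_flagged_set() -> frozenset:
    """Request the most up-to-date flagged set, falling back to local data."""
    try:
        with urllib.request.urlopen(FLAGGED_SET_URL, timeout=2) as resp:
            return frozenset(json.load(resp)["labels"])
    except (OSError, ValueError):
        return LOCAL_FALLBACK  # offline or malformed response: use local set

def is_flagged(identified_feature: str) -> bool:
    return identified_feature in fetch_flagged_set()
```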

In a further step 430, the at least one input frame 510b is filtered if flagged content is detected. The filtering step may be carried out in any suitable fashion. In some embodiments, the filtering step is carried out substantially as described in one or more of the examples discussed above. In an example, the method comprises a step of transmitting the at least one input frame 510b to the display component 508.

As described above, a plurality of suitable identifying methodologies may be employed in order to identify one or more image features, or portions thereof, within the at least one input frame. One such exemplary identifying step 621 will now be discussed with reference to Figures 6 and 7. For ease of comparison with Figure 4 and Figure 5, elements of Figure 6 and Figure 7 similar to corresponding elements of Figure 4 and Figure 5 are labelled with reference signs similar to those used in these Figures, but with prefixes “6” and “7” instead of “4” and “5”.

In a first sub-step 621a, a potential image feature 714 is detected in the at least one input frame 710. The detection may be carried out in a suitable manner. Examples of suitable detection mechanisms include, but are by no means limited to: TensorFlow Lite Classify; or TensorFlow Lite Detect.

In the second sub-step 621b, a first probability of the at least one image feature 714 being identical to at least one of a first plurality 718 of possible image features is derived by using a first identification mechanism. Whilst only a single first probability is mentioned in the above, it is, in principle, possible to derive a plurality of first probabilities, each probability in the plurality of first probabilities corresponding to a particular possible image feature. The derivation step may return any suitable or advantageous number of probabilities and/or respective possible image features. In an example, the derivation step returns two possible image features including the respective probabilities. In another example, the derivation step returns three possible image features as well as their respective probabilities. In another example, the derivation step returns four possible image features as well as their respective probabilities. In yet another example, the derivation step returns five possible image features as well as their respective probabilities.

The specific number of first probabilities derived in the derivation step may be dependent on a number of factors, including (without limitation): available processing power, available memory, transmission speed, one or more possible image classifications or input frame characteristics. Deriving a plurality of probabilities may be an advantage in some circumstances, for example in circumstances wherein a particular portion of an input frame could be categorised in several classifications. This can, for example, be due to the quality of the input frame or due to limitations in the identification methodology or in the associated data (e.g., training data). In some instances, it may be a combination of several of the above factors.

Purely by way of example, an input frame might depict a third person in a state of partial or full nudity. Simultaneously, the input frame may be imaged under less than ideal circumstances (e.g., at low light levels or at camera settings not suited to the surroundings). Further, the input frame may only show portions of the third person, which may render the positive identification of specific features more difficult. By deriving a plurality of probabilities, the likelihood of correctly determining whether an input frame contains flaggable content increases. Whilst it may, for example, be difficult to determine whether a given feature is an arm, a leg or a different part of a third person’s body, which would result in low probabilities being returned for specific bodily features, the probability of the feature being ‘exposed skin’ may be significantly higher. In such a case, it may be reasonable to assume that the input frame depicts nudity, which may constitute flaggable content, even if the specific body part may not be positively identifiable.
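
A hedged sketch of this aggregation reasoning follows; the label names and the 0.6 threshold are illustrative assumptions only:

```python
# Illustrative sketch only: individual body-part labels may each score low,
# while a broader label such as "exposed_skin" scores high enough to treat
# the frame as containing flaggable content.
def is_flaggable(candidates: dict[str, float], threshold: float = 0.6) -> bool:
    # e.g. candidates = {"arm": 0.22, "leg": 0.19, "exposed_skin": 0.74}
    specific = {"arm", "leg", "torso"}  # hypothetical specific labels
    if candidates.get("exposed_skin", 0.0) >= threshold:
        return True  # the broad label alone is decisive
    # Several weak specific scores can jointly indicate exposed skin even
    # when no single body part is positively identifiable.
    return sum(candidates.get(label, 0.0) for label in specific) >= threshold
```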

The first probability may be derived in any suitable fashion using a suitable probability deriving mechanism. The probability deriving mechanism may comprise any suitable number or type of identification algorithms or processes.

The first plurality of possible image features may be comprised in an image feature database. In some examples, the image feature database comprises image features associated with flagged content. In some examples, the database comprises image features associated with flagged content and non-flagged content.

The image feature database may be compiled in any suitable or convenient manner. In some examples, the database comprises image features that have been identified and catalogued by one or more users. In some examples, the database comprises image features that have been identified and catalogued by one or more automated algorithms (e.g., by way of one or more machine learning algorithms). In other examples, the database comprises image features identified and catalogued in both of the above ways. It will be appreciated that there are many ways in which a database for use with the present method may be compiled.

It will be appreciated that, in some examples, one or both of the first 621a or second 621b sub-steps may comprise at least one additional operation that is carried out on at least part of the at least one input frame, the at least one additional operation being operable to increase the likelihood of either or both of the step of detecting or the step of deriving being successfully completed. For example, in some instances it may be necessary or advantageous to carry out such additional operations in order to enable or facilitate either or both of the first or second sub-steps. In one such non-limiting example, at least part of the input frame is modified to facilitate the carrying out of the first or second sub-steps. In a specific example, the input frame is brightened (e.g., by modification of one or more characteristics of the input frame). Brightening the input frame may in some circumstances improve detectability of the various features in the input frame, for example if the input frame is acquired under low-light or low-visibility circumstances. In another specific example, the input frame is darkened. This may, for example, be relevant or advantageous if the input light levels on acquisition of the input frame are so high as to make it difficult to detect features or elements within the input frame. It will be appreciated that other situations or circumstances wherein modification of at least a portion of the input frame may be advantageous or necessary may be easily envisaged within the scope of the present disclosure.

In some examples, wherein the input frame is modified as described above, it may be determined that, upon completion of any necessary additional operations on the input frame, the additional operations have not successfully increased the likelihood of either or both of the first or second sub-steps being successfully completed. In order to mitigate such circumstances, in some examples, the identifying step comprises an additional sub-step of ignoring the at least one input frame and selecting at least one of a preceding or subsequent input frame to be used in the method. For example, in line with an example discussed above, if it is determined that a particular input frame cannot be sufficiently brightened to allow any features or elements to be identified to a sufficient degree, the input frame is ignored and a subsequent input frame is analysed instead. If only a single input frame is available (e.g., if an input frame stored in a memory component is being analysed), a notification may be issued and/or displayed to a user.
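
One possible, purely illustrative shape for this retry-and-skip behaviour is sketched below; detect_features is a hypothetical stand-in for the detection sub-step, and the brightening gain is an arbitrary assumption:

```python
# Illustrative sketch only: brighten a frame as an additional operation and,
# if that does not help, ignore the frame and move on to a subsequent one.
import numpy as np

def detect_features(frame: np.ndarray) -> list:
    ...  # stand-in for the detection sub-step (e.g., TensorFlow Lite Detect)

def brighten(frame: np.ndarray, gain: float = 1.5) -> np.ndarray:
    """Additional operation: brighten the frame to aid low-light detection."""
    return np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def analyse_with_retry(frames):
    """Try each frame; if brightening does not help, skip to the next frame."""
    for frame in frames:
        features = detect_features(frame) or detect_features(brighten(frame))
        if features:
            return frame, features
        # The additional operation did not help: ignore this frame and
        # continue with a subsequent one, as described above.
    return None, []  # single-frame case: the caller may notify the user
```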

In the third sub-step 621c, the at least one of the first plurality of image features having the highest probability value is selected as an identified image feature 720.

Subsequently to the third sub-step being carried out, the identified image feature may be compared with the set of flagged content substantially as described above. In the event of a match, a suitable filter may be applied to the input frame.

In the above examples, image features have been identified by way of a suitable identification mechanism or algorithm. However, as will be appreciated, there are a number of alternative mechanisms or algorithms that could, in principle, be used. A third exemplary method in accordance with the present invention will now be described with reference to Figure 8 and Figure 9. For ease of comparison with Figure 4 and Figure 5, elements of Figure 8 and Figure 9 similar to corresponding elements of Figure 4 and Figure 5 are labelled with reference signs similar to those used in these Figures, but with prefixes “8” and “9” instead of “4” and “5”. Furthermore, for purposes of conciseness and clarity, only features and elements that differ substantially from corresponding features and elements described above will be discussed in detail in the following.

In a first step 810, at least one input frame 910a is received from an image capture component 902 in a manner substantially similar to that described above.

In a second step 820, the at least one input frame 910a is analysed to detect the presence of flagged content 916. In the present example, the analysis step is carried out by a central processing unit 904 of the mobile device. In the present example, the step of analysing comprises a first analysis step 821 and a second analysis step 822. It will be appreciated that these steps are substantially similar to those described with respect to Figures 4 and 5 above.

In the first analysis step 821, an image classification 922 of the at least one input frame 910a is identified. The image classification may be identified in any suitable fashion. In the present example, the step of identifying an image classification comprises categorising at least a portion 924 of the input frame into at least one of a plurality of possible image classifications. The possible image classifications may, in some examples, comprise pre-determined classifications or categories. In other examples, the possible image classifications comprise dynamically generated classifications or categories. In further examples, the possible image classifications comprise both pre-determined and dynamically generated classifications or categories. An exemplary classification methodology will be described in more detail below.

In the second analysis step 822, the identified image classification 922 is compared with a set of flagged content 916. In the present example, the flagged content comprises at least one flagged image classification. The comparison may be carried out in any suitable manner. In some examples, the set of flagged image classifications is a pre-determined set of flagged image classifications. In other examples, a set of flagged image classifications is dynamically generated prior to the comparing step. In some examples, the set of flagged image classifications is continually updated at specified intervals (e.g., by a central authority or coordinating entity).

The set of flagged image classifications may be stored in the mobile device (not shown), e.g., in a memory element (not shown). Alternatively, the set of flagged image classifications may be stored remotely from the mobile device, e.g., on a central or distributed server or storage device. In other examples, the set of flagged image classifications is stored on a central server, but is transmitted to the mobile device prior to the comparison step, e.g., in response to a transmission request transmitted by the mobile device to the server. This allows the mobile device to constantly utilise the most up-to-date set of flagged image classifications.

In a further step 830, the at least one input frame 910b is filtered if flagged content is detected. The filtering step may be carried out in any suitable fashion. In some embodiments, the filtering step is carried out substantially as described in one or more of the examples discussed above.

In an example, the method comprises a step of transmitting the at least one input frame 910b to the display component 908.

As described above, the skilled person may envisage a number of specific classification methodologies within the scope of the present disclosure. One such exemplary identifying step 1021 will now be described with reference to Figure 10 and Figure 11. For ease of comparison with preceding Figures, elements of Figure 10 and Figure 11 similar to corresponding elements of the preceding Figures are labelled with reference signs similar to those used in these respective Figures, but with prefixes “10” and “11”.

In a first sub-step 1021a, at least one second probability 1126 of at least a portion 1124 of at least one input frame 1110 having at least one of a plurality of possible image classifications is derived. Any suitable portion of the input frame may be selected and/or used for purposes of the derivation. In some examples, a suitable portion of the input frame may be pre-selected prior to the derivation sub-step. In an example, the analysis step comprises an optional portion detection sub-step 1021c.

In some examples, the size and/or position of the suitable portion is determined and selected on a frame by frame basis. In other examples, the suitable portion has a fixed size and/or position. In some examples, substantially the entirety of the input frame is used in the derivation step. In some examples, the entirety of the input frame is subjected to a modification, reduction or decomposition step prior to the derivation being carried out. By ensuring that the derivation step is performed only on a relevant portion of an input frame or on a modified, reduced or decomposed input frame, the use of system or processing resources in the mobile device can be optimised. This may reduce the time required to process individual input frames.
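
Purely as an illustration of such a reduction step, the sketch below takes a central crop of the frame and downscales it before any derivation is performed; the crop fraction and target size are arbitrary assumptions:

```python
# Illustrative sketch only: derive only from a reduced copy of the frame so
# that limited device resources are not spent on full-resolution analysis.
from PIL import Image

def reduce_frame(frame: Image.Image, crop_frac: float = 0.8,
                 target: tuple[int, int] = (224, 224)) -> Image.Image:
    w, h = frame.size
    dx, dy = int(w * (1 - crop_frac) / 2), int(h * (1 - crop_frac) / 2)
    portion = frame.crop((dx, dy, w - dx, h - dy))  # central portion
    return portion.resize(target)                   # reduction step
```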

Any suitable number of second probabilities 1126 of the image frame having at least one of the plurality of possible image classifications may be derived. In some examples, a single possible image classification is derived. This keeps the required processing or system resources to a minimum. In other examples, a plurality of possible image classifications are derived. Whilst this requires additional processing or system resources, it may be advantageous or necessary for a number of reasons (some of which will be discussed in more detail below).

The first plurality of possible image classifications may be comprised in an image classification database (not shown). In some examples, the image classification database comprises image classifications associated with flagged content. In some examples, the database comprises image classifications associated with flagged content and non-flagged content. The image classification database may be compiled in any one of a number of suitable or convenient manners. In some examples, the database comprises image classifications that have been identified and catalogued by one or more users. In some examples, the database comprises image classifications that have been identified and catalogued by one or more automated algorithms (e.g., by way of one or more machine learning algorithms). In other examples, the database comprises image classifications identified and catalogued in both of the above ways. It will be appreciated that there are many ways in which a database for use with the present methods may be compiled.

The derivation step may be performed in any suitable fashion and using any suitable derivation methodology, including (but not limited to): TensorFlow Lite Classify; or TensorFlow Lite Detect.

In a second sub-step 1021b, at least one of the plurality of possible image classifications having the highest probability value is selected as the identified image classification 1122. It is to be noted that this selecting sub-step is described purely for exemplary purposes, and that other selection criteria may equally well be implemented or applied.

In an example, once the probabilities for each of the plurality of possible image classifications have been derived, a sub-set of the plurality of possible image classifications having the highest probabilities are evaluated. In an example, the possible image classifications having the highest 3, 4, 5 or 6 probabilities are evaluated. If some or all of these probabilities relate to flagged content, a general flagged content classification is selected as the identified image classification.
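
A minimal sketch of this evaluation, under the assumption of a dictionary of per-classification probabilities and a hypothetical set of flagged labels, might look as follows:

```python
# Illustrative sketch only: inspect the k classifications with the highest
# derived probabilities; if any relate to flagged content, select a general
# flagged classification. k and the flagged label set are assumptions.
def select_classification(probabilities: dict[str, float],
                          flagged_labels: set[str], k: int = 5) -> str:
    top_k = sorted(probabilities, key=probabilities.get, reverse=True)[:k]
    if any(label in flagged_labels for label in top_k):
        return "flagged_content"  # general flagged content classification
    return top_k[0]               # otherwise, the single most likely class
```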

In a manner similar to that described above, it will be appreciated that, in some examples, one or both of the first 1021a or second 1021b sub-steps may comprise at least one additional operation that is carried out on at least part of the at least one input frame, the at least one additional operation being operable to increase the likelihood of either or both of the step of detecting or the step of deriving being successfully completed. In some examples, wherein the input frame is modified as described above, it may be determined that, upon completion of any necessary additional operations on the input frame, the additional operations have not successfully increased the likelihood of either or both of the first or second sub-steps being successfully completed. In order to mitigate such circumstances, in some examples, the identifying step comprises an additional sub-step of ignoring the at least one input frame and selecting at least one of a preceding or subsequent input frame to be used in the method.

In the above examples, the analysis step has substantially comprised a single analysis operation. In some instances, a single analysis operation may be sufficient to determine to an acceptable degree the contents of a particular input frame. However, a single analysis step may not in all situations provide results based on which it can be determined to a sufficient degree of certainty what features are shown in an input frame.

For example, in some situations, it may be necessary to establish to a sufficiently high degree that a specific feature is classifiable in a certain classification. This may, for example, be the case if the device is used to take pictures on a beach or at a sporting event, where people may wear only limited amounts of clothing.

In other situations, it may be difficult to distinguish between various body parts due to the lighting or ambient conditions. However, correctly distinguishing between uncontroversial images and images depicting restricted content is important so as not to unduly negatively influence the user experience as well as ensure that no restricted content is depicted.

In such situations, it may be advantageous or necessary to perform one or more further additional operations to verify or to compare with the result of the analysis step. Such verification or comparison may be carried out in a number of ways. In some examples, the verification or comparison is carried out as a sub-step of the analysis step. In an example, the verification is implemented as a verification sub-step intended to evaluate a secondary or additional parameter. In other examples, the verification or comparison may be carried out as a separate analysis step. The results of any further additional operations may be compared with the result of the initial analysis step. If the results of the first analysis step and the further operations are a partial or complete match, it may be concluded to a degree of certainty that the analysis steps have correctly identified a particular feature.

A number of examples of methods that comprise such further additional operations will now be described. It will, of course, be appreciated that these are purely exemplary and not intended in any way to be limiting. It will also be appreciated that the skilled person would be able to envisage alternative implementations of these exemplary embodiments within the scope of the present disclosure.

A fourth exemplary method in accordance with the present invention will now be described with reference to Figure 12 and Figure 13. For ease of comparison with Figure 8 and Figure 9, elements of Figure 12 and Figure 13 similar to corresponding elements of Figure 8 and Figure 9 are labelled with reference signs similar to those used in these Figures, but with prefixes “12” and “13” instead of “8” and “9”. Furthermore, for purposes of conciseness and clarity, only features and elements that differ substantially from corresponding features and elements described above will be discussed in detail in the following.

In a first step 1210, at least one input frame 1310a is received from an image capture component 1302 in a manner substantially similar to that described above.

In a second step 1220, the at least one input frame is analysed to detect the presence of flagged content 1316. In the present example, the analysis step is carried out by a central processing unit 1304 of the mobile device. In the present example, the second step comprises a plurality of analysis steps, at least some of which are substantially similar to those described with respect to Figure 8 and Figure 9 above.

In a first analysis step 1221, at least one image classification 1322 is identified in a manner substantially similar to that described with reference to Figure 10 and Figure 11 above. In other terms, in the present example, the identification step comprises a plurality of sub-steps that are substantially similar to the sub-steps described with reference to Figures 10 and 11. However, it will, of course, be appreciated that the present example could, in principle, equally well be implemented using alternative or additional sub-steps.

In an additional analysis step 1223, the second probability associated with the identified image classification 1322 is compared with a threshold value 1330. If the second probability for any particular image classification is below the threshold, this may indicate that the classification is not within an acceptable tolerance. Hence, that particular identified image classification may be ignored. This additional requirement of a threshold may be necessary in order to ensure to a reasonable or sufficient level of certainty that a particular classification is representative of the actual image classification.

This may, for example, be the case in situations wherein the input frame shows potentially illegal, mature and/or restricted content. If content is entirely legal, safe and uncontroversial, it is undesirable to filter it, since this will negatively impact the user’s experience or prevent the user from using the full functionality of the device. There are instances where one or more features in the input frame could be identified either as illegal, mature or otherwise restricted content or as legal and uncontroversial content. In such situations, it may be necessary to compare the derived probabilities with a threshold. If, purely by way of example, one of the probabilities exceeds the threshold but the other does not, it can be assumed that the probability falling below the threshold is not, in fact, high enough for it to be likely that the input frame contains the corresponding feature.
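A minimal sketch of the threshold test of step 1223 is given below; the threshold value itself is an assumption chosen for the example.

```python
from typing import Optional

THRESHOLD = 0.6  # hypothetical value for the threshold 1330


def accept_classification(label: str, probability: float) -> Optional[str]:
    """Ignore an identified classification whose probability is below the
    threshold, i.e. not within an acceptable tolerance."""
    if probability < THRESHOLD:
        return None  # classification ignored
    return label


print(accept_classification("restricted", 0.45))  # -> None (ignored)
print(accept_classification("restricted", 0.80))  # -> "restricted"
```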

Subsequent to the additional analysis step, in a second analysis step 1222, the identified image classification 1322 is compared with a set of flagged content 1316.

In a third step 1230, the at least one input frame 1310b is filtered if flagged content is detected. The filtering step may be carried out in any suitable fashion. In some embodiments, the filtering step is carried out substantially as described in one or more of the examples discussed above.

In an example, the method comprises a step of transmitting the at least one input frame 1310b to the display component 1308.

It will be appreciated that, while discussed in connection with the use of image classifications, a method utilising the above-mentioned additional analysis step could, in principle, equally well be implemented in the methods using image features in order to detect flagged content (e.g., the exemplary method described with reference to Figures 4 and 5 above).

A fifth exemplary method in accordance with the present invention will now be described with reference to Figure 14. For ease of comparison with preceding Figures, elements of Figure 14 similar to corresponding elements of preceding Figures are labelled with reference signs similar to those used in the respective Figures, but with prefix “14”. Furthermore, for purposes of conciseness and clarity, only features and elements that differ substantially from corresponding features and elements described above will be discussed in detail in the following.

In a first step 1410, at least one input frame is received from an image capture component in a manner substantially similar to that described above.

In a primary analysis step 1420, the at least one input frame is analysed. The at least one input frame may be analysed in any suitable fashion. In some examples, the at least one input frame is analysed substantially as described in the above examples. In effect, the primary analysis step is substantially similar or identical to any one of the analysis steps described in the foregoing.

Subsequent to the primary analysis step, in a secondary analysis step 1425, the at least one input frame is analysed. It should be noted that the terms “primary” and “secondary” are used purely for ease of describing the present method, and that the primary analysis step and the secondary analysis step are otherwise functionally equivalent. In the secondary analysis step, the at least one input frame may be analysed in any suitable fashion. In some examples, the secondary analysis step is performed in substantially the same fashion as the primary analysis step. In other examples, the secondary analysis step is performed in a substantially different fashion from the primary analysis step. By utilising two different analysis methodologies, it becomes increasingly likely that any flagged content is correctly identified.
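The sequential arrangement may be sketched as follows. The two analyser callables are hypothetical stand-ins for any two of the analysis methodologies described above, and combining their results with a logical OR is one plausible reading of this passage rather than a requirement of the method.

```python
def analyse_sequentially(frame, primary, secondary):
    """Run two (possibly different) analysis methodologies one after the
    other; the frame is treated as flagged if either methodology flags it."""
    # `or` short-circuits: the secondary step only runs when the primary
    # step has not already flagged the frame.
    return primary(frame) or secondary(frame)
```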

In a fourth step 1430, the at least one input frame is filtered if flagged content is detected in a manner substantially similar to that described with respect to the Figures above.

As described above, performing the primary analysis step and the secondary analysis step in sequence increases the likelihood that any and all flaggable content in a given input frame is correctly and positively identified, thereby improving the efficacy and efficiency of the exemplary method. Additionally, this increase in efficiency is accomplished without requiring more processing or system resources than the above-described methods.

A sixth exemplary method in accordance with the present invention will now be described with reference to Figure 15. For ease of comparison with Figure 14, elements of Figure 15 similar to corresponding elements of Figure 14 are labelled with reference signs similar to those used in Figure 14, but with prefix “15” instead of “14”. Furthermore, for purposes of conciseness and clarity, only features and elements that differ substantially from corresponding features and elements described above will be discussed in detail in the following.

In a first step 1510, at least one input frame is received from an image capture component in a manner substantially similar to that described above.

In a primary analysis step 1520, the at least one input frame is analysed. The at least one input frame may be analysed in any suitable fashion. In some examples, the at least one input frame is analysed substantially as described in the above examples. In effect, the primary analysis step is substantially similar or identical to any one of the analysis steps described in the foregoing. In a secondary analysis step 1525, the at least one input frame is analysed. The at least one input frame may be analysed in any suitable fashion. In some examples, the secondary analysis step is performed in substantially the same fashion as the primary analysis step. In other examples, the secondary analysis step is performed in a substantially different fashion from the primary analysis step. By utilising two different analysis methodologies, it becomes increasingly likely that any flagged content is correctly identified.

Whilst described in sequence, the primary analysis step and the secondary analysis step are performed substantially simultaneously. This reduces the overall time required to perform the analysis steps, and may accordingly reduce the time taken between flaggable content being acquired and it being filtered.
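One way of performing the two steps substantially simultaneously is sketched below using a thread pool; as before, the analyser callables are hypothetical stand-ins for the analysis methodologies described above.

```python
from concurrent.futures import ThreadPoolExecutor


def analyse_concurrently(frame, primary, secondary):
    """Run the primary and secondary analysis steps substantially
    simultaneously and flag the frame if either step flags it."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(primary, frame)
        second = pool.submit(secondary, frame)
        return first.result() or second.result()
```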

In a fourth step 1530, the at least one input frame is filtered if flagged content is detected in a manner substantially similar to that described with respect to the Figures above.

In the above example, the primary analysis step and the secondary analysis step are both used to analyse the at least one input frame in order to detect the presence of flagged content. This results in the above-described advantages, i.e., an increased likelihood of accurately and positively detecting flaggable content.

However, in other examples, the primary analysis step is used to detect flagged content and the secondary analysis step is used to detect non-flagged content.

By detecting both flagged and non-flagged content, the respective probabilities may be compared to reduce the risk of so-called “false positives”. An exemplary implementation of such a method will now be described with reference to Figure 16.

As described with reference to the various exemplary methods above, at least one input frame 1610 is provided in the present example. The at least one input frame is used in a primary analysis step 1620 and a secondary analysis step 1625. The primary and secondary analysis steps may be performed in any suitable fashion.

In some examples, both the primary and secondary analysis steps are performed using the same analysis methodology (for example, as described with reference to Figures 4 and 5 or with reference to Figures 8 and 9 above). In other examples, the primary analysis step is performed using a first analysis methodology and the secondary analysis step is performed using another analysis methodology.

Purely by way of example, a situation will now be described in which both the primary analysis step and the secondary analysis step are carried out using the methodology described with reference to Figures 11 and 12 above. In this situation, if flagged content is detected having a first exemplary probability and non-flagged content is detected having a second exemplary probability, the exemplary probabilities can be directly compared to determine the likelihood of flagged content being present in the input frame. If the first exemplary probability is larger than the second exemplary probability, it can be concluded that there is a high probability of the input frame containing flagged content. By contrast, if the first exemplary probability is smaller than the second exemplary probability, it can be concluded that, whilst the input frame may contain flagged content, there is a higher likelihood that the actual content of the input frame is non-flagged. In this situation, it may be that the flagged content has been misidentified (e.g., a “false positive”).
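This direct comparison may be rendered as follows; the probability values are illustrative only.

```python
def likely_flagged(p_flagged, p_non_flagged):
    """Treat the frame as flagged only when the flagged-content probability
    exceeds the non-flagged probability; otherwise assume a likely
    false positive."""
    return p_flagged > p_non_flagged


print(likely_flagged(0.7, 0.2))  # True  -> high probability of flagged content
print(likely_flagged(0.3, 0.6))  # False -> likely a false positive
```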

As described above, any of the analysis steps described above could be used to carry out the present example. In the present example, however, the primary analysis step comprises a primary additional analysis step 1621 and the secondary analysis step 1625 comprises a secondary additional analysis step 1626, in which each of the respective probabilities is compared with a respective exemplary threshold value.

In order to detect flagged content in an input frame, the first exemplary probability described above (which relates to potential flagged content) must be above a first exemplary threshold, and the second exemplary probability (which relates to potential non-flagged content) must be below a second exemplary threshold. In such a situation, the probability of the input frame containing flagged content is high, whereas the probability of the input frame containing non-flagged content is low. Accordingly, in a filtering step 1630 the input frame would be filtered as substantially described above.

However, if the first exemplary probability was above the first exemplary threshold and the second exemplary probability was also above the second exemplary threshold, the input frame would be determined to not contain flagged content. In such a situation, while the probability of flagged content is sufficiently high, it is also highly likely that the input frame contains non-flagged content (due to the second exemplary probability being above the second exemplary threshold).

If both of the first exemplary probability and the second exemplary probability are below their respective thresholds, or if the first exemplary probability is below its respective threshold and the second exemplary probability is not below its respective threshold, the input frame will not be filtered.
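The complete two-threshold decision described in the preceding paragraphs may be sketched as follows; both threshold values are assumptions made for the example.

```python
FIRST_THRESHOLD = 0.6   # hypothetical first exemplary threshold (flagged)
SECOND_THRESHOLD = 0.4  # hypothetical second exemplary threshold (non-flagged)


def should_filter(p_flagged, p_non_flagged):
    """Filter only when flagged content is sufficiently likely AND
    non-flagged content is sufficiently unlikely."""
    return p_flagged > FIRST_THRESHOLD and p_non_flagged < SECOND_THRESHOLD


print(should_filter(0.8, 0.2))  # True:  filter the input frame
print(should_filter(0.8, 0.7))  # False: non-flagged content is also likely
print(should_filter(0.3, 0.2))  # False: flagged content not likely enough
print(should_filter(0.3, 0.7))  # False: frame appears non-flagged
```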

In the exemplary methods presented above, the individual method steps have been described in a sequential manner. It is to be appreciated that this is for purposes of conciseness and ease of explanation only. However, in some examples, some or all of the steps may be performed in a different order to that described above.

Furthermore, each of the method steps may be carried out in any suitable manner by any suitable component of either the mobile device or another device connected therewith. This is illustrated in and will be described in more detail with reference to Figure 17a and Figure 17b.

In Figure 17a, an exemplary implementation is illustrated that is substantially similar to that described in the above examples. In this method, an image capture component 1702 of a mobile device (not shown) captures an input frame 1710 and provides it to a central processing unit 1704. In the present example, the central processing unit is a processing component located in the mobile device, although it will be appreciated that the processing component could, in principle, equally well be located remotely from the mobile device. The central processing unit comprises one or more analysis components operable to perform one or more analysis steps substantially as described in the above examples. Purely for illustrative purposes, the central processing unit comprises a primary analysis component 1732 operable to perform a primary analysis step and a secondary analysis component 1734 operable to perform a secondary analysis step. On conclusion of the various exemplary analysis steps, the input frame is filtered if it contains flagged content, and is subsequently transmitted to the display component 1708.

In Figure 17b, a second exemplary implementation is illustrated. In this implementation, the image capture component 1702 captures an input frame 1710 as described above.

The input frame is substantially simultaneously transmitted to the central processing unit 1704 and the display component 1708. In some examples, the display component displays the input frame to the user.

Purely as an example, the central processing unit comprises a primary analysis component 1732 operable to perform a primary analysis step according to one or more of the method steps described above and a secondary analysis component 1734 operable to perform a secondary analysis step according to one or more of the method steps described above.

If flagged content is discovered in an input frame, a notification is sent to a filtering component 1736. The filtering component is operable to implement a filtering step, e.g., as described in any of the methods above, on receipt of a notification from the central processing unit.
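The arrangement of Figure 17b may be sketched as follows; the component callables are hypothetical stand-ins for the display component, the two analysis components 1732 and 1734, and the filtering component 1736.

```python
def process_frame(frame, display, primary, secondary, notify_filtering):
    """Display the frame immediately; analyse it in the central processing
    unit; notify the filtering component only if flagged content is found."""
    display(frame)  # frame is transmitted to the display component at once
    if primary(frame) or secondary(frame):  # analysis components 1732, 1734
        notify_filtering(frame)             # filtering component 1736


# Hypothetical usage with stub components:
process_frame(
    b"raw-frame-bytes",
    display=lambda f: print("displaying frame"),
    primary=lambda f: False,
    secondary=lambda f: True,
    notify_filtering=lambda f: print("filtering frame"),
)
```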

For purposes of simplicity and conciseness, in the above examples it has generally been assumed that the image capture component of the mobile device is an optical imaging component located in or connected directly to the mobile device, such as (but not limited to): an onboard camera or image sensor; or a camera device connected to the mobile device by a suitable wired or wireless connection (e.g., USB, WiFi, Bluetooth, ZigBee, Firewire, DVI or any other non-proprietary or proprietary connection). However, it will be appreciated that the image capture component could, in principle, be a remotely located imaging device or unit that may, or may not, be controlled by the user of the mobile device. Examples of such remote imaging devices include (without limitation): drones; remote-controlled vehicles; security cameras; other mobile devices; or other imaging or video devices. Additionally, it is to be noted that the image capture component does not have to be implemented as a hardware component, but that the image capture component may, in principle, be implemented entirely by way of software. In some examples, the image capture component is a social media or other connectivity or communication application installed on or otherwise implemented in the mobile device. In other examples, the image capture component is a control application intended to control one or more hardware imaging sensors (e.g., cameras) that may be located in or connected to the mobile device.

Furthermore, in the above examples, the analysis and filtering steps have been performed on live input frames (i.e., input frames that are shown substantially simultaneously with their capture). The display component is typically used as a viewfinder for the image capture component, either for capturing video or prior to capturing static images. When capturing live images, the system functions substantially as described above.

However, it is under certain circumstances possible that an image that has already been captured could be received by the image capture component (e.g., if a social media application receives an image from a third party or other user). It will be appreciated that the above-described exemplary methods, although described predominantly in terms of “live images”, could equally well be implemented for images or media that have been previously captured and subsequently received by the image capture component.

Various embodiments are described herein with reference to block diagrams or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s).

These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks.

A tangible, non-transitory, computer-readable medium may include an electronic, magnetic, optical, electromagnetic, or semiconductor data storage system, apparatus or device. More specific examples of the computer-readable medium would include the following: a portable computer diskette, a random access memory (RAM) circuit, a read-only memory (ROM) circuit, an erasable programmable read-only memory (EPROM or Flash memory) circuit, a portable compact disc read-only memory (CD-ROM) or a portable digital versatile/video disc read-only memory (DVD/Blu-ray).

The computer program instructions may also be loaded onto a computer and/or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer and/or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.

Accordingly, the invention may be embodied in hardware and/or in software (including firmware, resident software, micro-code etc.) that runs on a processor, which may collectively be referred to as “circuitry”, “a module” or variants thereof.

It should also be noted that in some alternate implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Finally, other blocks may be added/inserted between the blocks that are illustrated.

The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein and without limitation to the scope of the claims. The applicant indicates that aspects of the invention may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.

While specific embodiments of the invention have been described above, it will be appreciated that an embodiment of the invention may be practiced otherwise than as described. For example, an embodiment of the invention may take the form of a computer program containing one or more sequences of machine-readable instructions describing a method as disclosed above, or a data storage medium (e.g. semiconductor memory, magnetic or optical disk) having such a computer program stored therein. The descriptions above are intended to be illustrative, not limiting. Thus, it will be apparent to one skilled in the art that modifications may be made to the invention as described without departing from the scope of the claims set out below.