

Title:
SYSTEMS AND METHODS FOR MONITORING ARTHROPOD VECTORS AND BUILDING PROJECTIVE MODELS
Document Type and Number:
WIPO Patent Application WO/2023/239794
Kind Code:
A1
Abstract:
Systems and methods for monitoring of arthropod vectors and building of projective or predictive models are provided. An image sensor can be used to capture an image of a pad within an arthropod trap. A software module can process the image, automatically detect a region of interest (ROI) and count the arthropods stuck thereon to generate image count data. The image count data can be used to build one or more models to help anticipate seasonal risk.

Inventors:
RAY ANANDASANKAR (US)
KOWALEWSKI JOEL (US)
Application Number:
PCT/US2023/024713
Publication Date:
December 14, 2023
Filing Date:
June 07, 2023
Assignee:
SENSORYGEN INC (US)
UNIV CALIFORNIA (US)
International Classes:
A01M1/14; A01M1/10; G06M11/00; G06T1/00; G06T7/13; H04N7/18
Foreign References:
JP2020099231A2020-07-02
CN113273555A2021-08-20
CN111462143A2020-07-28
US20210378225A12021-12-09
US20210000097A12021-01-07
Attorney, Agent or Firm:
FRANK, Louis, C. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A system for monitoring arthropod vectors and building at least one projective model, the system comprising: an arthropod trap; an image sensor capable of capturing an image of an inside of the arthropod trap; a processor; and a machine-readable medium in operable communication with the processor and having instructions stored thereon that, when executed by the processor, perform the following steps: i) receive an image of the inside of the arthropod trap; ii) detect an ROI in the image; iii) determine image count data comprising a count of arthropods in the ROI in the image; and iv) generate the at least one projective model using the image count data, wherein the at least one projective model helps anticipate daily, weekly, and/or seasonal arthropod vector risk.

2. The system according to claim 1, wherein the determining of the image count data comprises: detecting outlines of arthropods in the ROI in the image of the inside of the arthropod trap; and filling in and counting the outlines of the arthropods in the ROI in the image of the inside of the arthropod trap to generate first count data comprising a quantity of arthropods in the ROI in the image of the inside of the arthropod trap.

3. The system according to claim 2, wherein the image count data comprises the first count data.

4. The system according to any of claims 1-3, wherein the arthropod trap comprises an adhesive pad.

5. The system according to claim 4, wherein the image of the inside of the arthropod trap is an image comprising the adhesive pad, and wherein the ROI in the image includes the adhesive pad.

6. The system according to any of claims 4-5, wherein the adhesive pad comprises a lure for attracting arthropods.

7. The system according to any of claims 1-6, wherein the determining of the image count data comprises: using a supervised machine learning model to predict coordinates of arthropods in the image of the inside of the arthropod trap to generate second count data comprising a count of arthropods in the image of the inside of the arthropod trap, wherein the image count data comprises the second count data, wherein the supervised machine learning model is pre-trained on prior images of the inside of the arthropod trap and prior coordinates of arthropods in the prior images.

8. The system according to any of claims 1-7, further comprising a mobile device, wherein the mobile device comprises the image sensor and/or an image capturing device.

9. The system according to claim 8, wherein the mobile device is a smartphone or a smart tablet.

10. The system according to any of claims 8-9, further comprising a mobile application installed on the mobile device and configured such that a user of the mobile device can capture the image of the inside of the arthropod trap, upload the image of the inside of the arthropod trap to a remote server, and receive and display on the mobile device data of the at least one predictive model.

11. The system according to claim 10, wherein the remote server is a cloud-based server.

12. The system according to any of claims 10-11, wherein the remote server receives count data of arthropods in traps from a plurality of different sources in a plurality of geographic regions and generates the at least one projective model based on the count data from the plurality of different sources.

13. The system according to claim 12, wherein the remote server generates a ranking of users based on user participation in providing images and/or other data to the remote server.

14. The system according to any of claims 1-13, wherein the instructions when executed further perform the following step: identify species of the arthropods in the ROI in the image of the inside of the arthropod trap using a convolutional neural network (CNN) that is pretrained on specific species of arthropods and comprises labels for the specific species of arthropods.

15. The system according to any of claims 1-14, wherein the instructions when executed further perform the following step: provide a recommendation, to a user of the system, of at least one product that is lethal and/or repellant to arthropods, based on image count data and a geographic location of the user, using a recommendation algorithm.

16. The system according to claim 15, wherein the at least one product comprises a spray or an adhesive pad.

17. The system according to any of claims 1-16, wherein the at least one projective model comprises an autoregressive moving average (ARMA), an autoregressive integrated moving average (ARIMA), a supervised machine learning model, and/or an unsupervised machine learning model.

18. The system according to any of claims 1-17, wherein the at least one projective model comprises a Decision Tree (DT), a recurrent neural network (RNN), a Random Forest (RF), and/or Gradient Boosting Machines (GBMs).

19. The system according to any of claims 1-18, wherein the instructions when executed further perform the following step: display a ranking of users based on participation or engagement with an application that also displays the at least one projective model (e.g., the mobile application and/or the secure application).

20. The system according to any of claims 1-19, further comprising a secure application for use via a web browser, wherein the secure application is configured such that a user of the secure application can upload the image of the inside of the arthropod trap to a remote server, and receive and display, on a computing device running the secure application, data of the at least one predictive model.

21. The system according to any of claims 4-20, further comprising a secure application for use via a web browser, wherein the secure application is configured such that a user of the secure application can upload an image of the adhesive pad to a remote server, and receive and display, on a computing device running the secure application, data of the at least one predictive model.

22. The system according to claim 21, wherein the image of the adhesive pad is taken inside the arthropod trap.

23. The system according to claim 21, wherein the image of the adhesive pad is taken outside the arthropod trap.

24. The system according to any of claims 1-23, wherein the image sensor is a camera disposed within the arthropod trap.

25. The system according to any of claims 1-24, wherein the arthropods are insects.

26. The system according to any of claims 1-25, wherein the arthropods are mosquitoes.

27. The system according to any of claims 1-25, wherein the arthropods are agricultural pests.

28. The system according to any of claims 1-27, wherein the generating of the at least one projective model comprises using an algorithm specialized for use in a geographic region where an arthropod-borne illness is endemic.

29. The system according to any of claims 1-28, wherein the image sensor is directed towards a collection tray of the arthropod trap.

30. The system according to any of claims 1-29, wherein the arthropod trap comprises a surface with a color that contrasts with arthropods collected in the arthropod trap.

31. The system according to any of claims 1-30, wherein the image sensor is capable of capturing an image of arthropods stuck on an adhesive surface disposed inside the arthropod trap and/or of arthropods captured in the arthropod trap and emptied onto a surface disposed in the arthropod trap.

32. The system according to any of claims 1-31, wherein the image of the inside of the arthropod trap received in step i) is an image of arthropods stuck on an adhesive surface disposed inside the arthropod trap and/or of arthropods captured in the arthropod trap and emptied onto a surface disposed in the arthropod trap.

33. A method for monitoring arthropod vectors and building at least one projective model, the method comprising using the system according to any of claims 1-32.

Description:
DESCRIPTION

SYSTEMS AND METHODS FOR MONITORING ARTHROPOD VECTORS AND BUILDING PROJECTIVE MODELS

CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application Serial No. 63/365,984, filed June 7, 2022, the disclosure of which is hereby incorporated by reference in its entirety, including all figures, tables, and drawings.

BACKGROUND

Mosquitoes and other arthropods are pests and can carry diseases, harm crops, and generally be a nuisance. As such, traps are often used to catch and/or exterminate arthropods. In addition, insect repellant can be worn and/or sprayed in the air to try to keep insects away.

BRIEF SUMMARY

Embodiments of the subject invention provide novel and advantageous systems and methods for monitoring of arthropod (e.g., insects such as mosquitoes) vectors and building of projective or predictive models (e.g., to prevent or inhibit outbreaks of vector-borne illnesses). A sensor (e.g., an image sensor such as a smart device (e.g., smartphone or tablet) camera or standalone camera (e.g., wireless camera)) can be used to capture an image of the inside of an arthropod trap (e.g., a tray or adhesive pad within the arthropod trap). A software module can process the image, automatically detect a region of interest (ROI) (e.g., the adhesive pad or tray within the trap), and count the arthropods stuck thereon to generate image count data. The image count data can be used to build one or more models to help anticipate seasonal risk. Image count data can be pooled from multiple different users whose smart devices are connected with one another and/or with a remote server (e.g., a cloud-based server). The software module can also include a recommendation algorithm to suggest to users products that are relevant within a geographical region, based on the image count data.

In an embodiment, a system for monitoring arthropod vectors and building at least one projective model can comprise: an arthropod trap; an image sensor capable of capturing (and/or configured to capture) an image of the inside of the arthropod trap (and/or arthropods stuck on an adhesive surface disposed in the arthropod trap, and/or arthropods captured in the trap and emptied onto a surface disposed in the arthropod trap); a processor; and a (non-transitory) machine-readable medium in operable communication with the processor and having instructions stored thereon that, when executed by the processor, perform the following steps: i) receive an image of the inside of the arthropod trap (and/or arthropods stuck/captured on the adhesive surface disposed in the arthropod trap, and/or arthropods emptied onto the surface disposed in the arthropod trap); ii) detect an ROI in the image; iii) determine image count data comprising a count of arthropods in the ROI in the image; and iv) generate the at least one projective model using the image count data, wherein the at least one projective model helps anticipate daily, weekly, and/or seasonal arthropod vector risk. The determining of the image count data can comprise: detecting outlines of arthropods in the ROI in the image of the inside of the arthropod trap; and filling in and counting the outlines of the arthropods in the ROI in the image of the inside of the arthropod trap to generate first count data comprising a quantity of arthropods in the ROI in the image of the inside of the arthropod trap. The image count data can comprise the first count data. The arthropod trap can comprise an adhesive pad. The image of the inside of the arthropod trap can be an image comprising the adhesive pad, and the ROI in the image can include the adhesive pad. The adhesive pad can comprise a lure for attracting arthropods.
The determining of the image count data can comprise: using a supervised machine learning model to predict coordinates of arthropods in the image of the inside of the arthropod trap to generate second count data comprising a count of arthropods in the image of the inside of the arthropod trap, wherein the image count data comprises the second count data, wherein the supervised machine learning model is pre-trained on prior images of the inside of the arthropod trap and prior coordinates of arthropods in the prior images. The system can further comprise a mobile device, and the mobile device can comprise the image sensor (and/or an image capturing device). The mobile device can be, for example, a smartphone or a smart tablet. The system can further comprise a mobile application installed on the mobile device and configured such that a user of the mobile device can capture the image of the inside of the arthropod trap, upload the image of the inside of the arthropod trap to a remote server, and receive and display on the mobile device data of the at least one predictive model. The remote server can be a cloud-based server. The remote server can receive count data of arthropods in traps from a plurality of different sources in a plurality of geographic regions and generate the at least one projective model based on the count data from the plurality of different sources. The remote server can generate a ranking of users based on user participation in providing images and/or other data to the remote server. The instructions when executed can further perform the following step: identify species of the arthropods in the ROI in the image of the inside of the arthropod trap using a convolutional neural network (CNN) that is pretrained on specific species of arthropods and comprises labels for the specific species of arthropods. 
The instructions when executed can further perform the following step: provide a recommendation, to a user of the system, of at least one product that is lethal and/or repellant to arthropods, based on image count data and a geographic location of the user, using a recommendation algorithm. The at least one product can comprise, for example, a spray and/or an adhesive pad. The at least one projective model can comprise an autoregressive moving average (ARMA), an autoregressive integrated moving average (ARIMA), a supervised machine learning model, and/or an unsupervised machine learning model. The at least one projective model can comprise a Decision Tree (DT), a recurrent neural network (RNN), a Random Forest (RF), and/or Gradient Boosting Machines (GBMs). The instructions when executed can further perform the following step: display a ranking of users based on participation or engagement with an application that also displays the at least one projective model (e.g., the mobile application and/or the secure application). The system can further comprise a secure application for use via a web browser, wherein the secure application is configured such that a user of the secure application can upload the image of the inside of the arthropod trap (and/or an image of the adhesive pad taken either inside or outside the arthropod trap) to a remote server, and receive and display, on a computing device running the secure application, data of the at least one predictive model. The image sensor can be a camera disposed within the arthropod trap. The arthropods can be insects (e.g., mosquitoes) and/or agricultural pests. The generating of the at least one projective model can comprise using an algorithm specialized for use in a geographic region where an arthropod-borne illness is endemic. The image sensor can be directed towards a collection tray of the arthropod trap. 
The arthropod trap can comprise a surface with a color that contrasts with arthropods collected in the arthropod trap.

In another embodiment, a method for monitoring arthropod vectors and building at least one projective model can comprise using a system as disclosed herein.

BRIEF DESCRIPTION OF DRAWINGS

Figure 1 shows a schematic view of two methods for detecting arthropods on a pad of a trap, according to an embodiment of the subject invention. In the first, traditional methods can be applied to detect outlines in a region of interest (ROI). The ROI can be detected beforehand, for example using an automated object recognition approach. The outlines can be filled in and counted. In the second, a supervised machine learning model, which can be trained on images and the coordinates of arthropods therein, can predict the coordinates of prospective arthropods in new/captured images (i.e., not the image used to train the model, but rather newly-captured images for analysis).

Figure 2 shows a schematic view of a pipeline of analysis of arthropods on a pad of a trap, according to an embodiment of the subject invention.

Figure 3A shows an image of arthropods on a collection tray of a trap. An application can identify the tray having arthropods trapped thereon, according to an embodiment of the subject invention.

Figure 3B shows a closeup view of arthropods on a collection tray of a trap, being automatically detected and counted in a static image (e.g., by the application), according to an embodiment of the subject invention. In this case, nine mosquitoes have been counted in the frame.

Figures 4A-4C show images of a screenshot of a mobile application that can be used with embodiments of the subject invention. Figure 4A shows a user ready to take a photograph of a collection tray of a trap using a smart device (e.g., a smartphone). Figure 4B shows the application displaying a count of the arthropods on the tray of the trap. Figure 4C shows the application displaying a history of counts of arthropods. The history can show individual user counts, counts of a larger community of users, or both. The application offers a method for arthropod (e.g., insect such as mosquito) vector monitoring and building of projective models to prevent or inhibit outbreaks of vector borne illnesses.

DETAILED DESCRIPTION

Embodiments of the subject invention provide novel and advantageous systems and methods for monitoring of arthropod (e.g., insects such as mosquitoes) vectors and building of projective or predictive models (e.g., to prevent or inhibit outbreaks of vector-borne illnesses). A sensor (e.g., an image sensor such as a smart device (e.g., smartphone or tablet) camera or standalone camera (e.g., wireless camera)) can be used to capture an image of a tray or pad within an arthropod trap. A software module can process the image, automatically detect a region of interest (ROI) (e.g., the adhesive pad or tray within the trap), and count the arthropods stuck thereon to generate image count data. The image count data can be used to build one or more models to help anticipate seasonal risk. Image count data can be pooled from multiple different users whose smart devices are connected with one another and/or with a remote server (e.g., a cloud-based server). The software module can also include a recommendation algorithm to suggest to users products that are relevant within a geographical region, based on the image count data.

An arthropod trap can include a pad or tray (e.g., a disposable sticky pad or tray), which can include a lure to attract and then catch arthropods, for use in a hanging or tabletop trap. In some embodiments, the pad can be treated with one or more human-safe, ecologically tolerant repellent or insecticidal compounds. Each pad can be developed for use with a mobile application that enables detailed monitoring and analysis of trapped arthropods. Images displaying trapped arthropods on the pad, taken with a camera from a smart device (e.g., a smartphone or tablet) or a camera mounted inside or near a trap, can be processed locally within a mobile application on a smart device or on a remote server (e.g., cloud-based server) and returned to the smart device. The images can be taken by end users or can be automatically taken (e.g., using a camera (e.g., a wireless camera) mounted inside a trap). The traps can be used in personal, home, industrial, and/or agricultural settings. The mobile application, in addition to an interface for aligning and generating images of the sticky pad with trapped arthropods, can include historical count data for an individual user as well as a global userbase. Data collected can update forecasting models that anticipate global and local seasonal risk for vector-borne illnesses.

In order to facilitate crowdsourcing and citizen science, the mobile application can include a ranking (e.g., an updated ranking such as a continuously updated ranking) of users according to their contribution to the project of keeping communities safe and free of arthropod vectors. For example, a ranking of users by the number of images taken and uploaded to the remote server using the mobile application can be included and accessible within the mobile application and/or within social media (e.g., Twitter®, Instagram®, Facebook®/Meta®). In uses where the data are collected autonomously (e.g., via static cameras that can be set to capture images at regular intervals and/or when arthropods are detected; this may be more likely in industrial and agricultural uses), the image processing aspect can be performed with or without the mobile application. For example, the image data can be uploaded to a remote server (e.g., cloud-based server) using a secure application accessible from a web browser, and results and reports for the images can be viewed in the secure application from the remote server (e.g., cloud-based server) that is accessible from a web browser of choice. This can be the same remote server that is used for the mobile application for other users (i.e., the image data uploaded via the secure application can be combined with the image data uploaded via the mobile application on smart devices to improve the modeling).
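The user ranking described above amounts to sorting users by their contribution count. A minimal sketch of such a leaderboard is shown below; the user names, upload counts, and tie-breaking rule are hypothetical, and a deployed system might weight other forms of participation as well:

```python
def rank_users(upload_counts):
    """Rank users by number of images uploaded, highest first.

    upload_counts: dict mapping user id -> number of uploaded images.
    Returns a list of (rank, user, count) tuples; ties are broken
    alphabetically by user id for reproducibility.
    """
    ordered = sorted(upload_counts.items(), key=lambda kv: (-kv[1], kv[0]))
    return [(i + 1, user, count) for i, (user, count) in enumerate(ordered)]

# Hypothetical example data: alice and carol tie on 42 uploads
leaderboard = rank_users({"alice": 42, "bob": 17, "carol": 42})
```

In practice, the server would recompute such a ranking as images arrive and expose it to the mobile application and/or social media integrations.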

A trap can include an adhesive pad or tray that can be adjusted to accommodate hanging traps and tabletop traps of various designs and models. In some embodiments, the trap can be treated with an insecticidal compound such as a pyrethroid or an attractive compound to enhance the functionality of an existing trap.

Once an image of a trap is captured, it can be processed locally (via the mobile application on a smart device or via a secure application run on a web browser) and/or can be uploaded to a remote server (e.g., a cloud-based server) where processing can be done. The approach for counting arthropods in the trap can involve two complementary methods that are used to enhance accuracy. Figure 1 shows a schematic view of the overall process including the two complementary methods. In the first of the complementary methods, outlines in an ROI can be detected (e.g., using traditional methods). The ROI can be detected beforehand using an automated object recognition approach. The outlines can then be filled in and counted. In the second of the complementary methods, a supervised machine learning model (e.g., a convolutional neural network (CNN)), which can be trained on images and the coordinates of arthropods (e.g., insects such as mosquitoes) in the training images, can predict the coordinates of prospective arthropods in the captured images (i.e., the newly-captured images that are not used to train the model).
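The first counting method (detect outlines, fill them in, count) reduces to binarizing the ROI and counting connected foreground blobs. A minimal pure-NumPy sketch on a synthetic binary mask is shown below; a production implementation would more likely use a library routine such as OpenCV's contour detection, and the threshold/mask here is illustrative only:

```python
import numpy as np

def count_blobs(mask):
    """Count 4-connected foreground blobs in a binary mask via flood fill.

    mask: 2-D boolean array where True marks candidate arthropod pixels
    after thresholding the ROI against the pad background.
    """
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not visited[r, c]:
                count += 1
                stack = [(r, c)]  # iterative flood fill of one blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and not visited[y, x]:
                        visited[y, x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

# Synthetic ROI: two separate dark spots on a light pad
roi = np.zeros((8, 8), dtype=bool)
roi[1:3, 1:3] = True   # first "arthropod"
roi[5:7, 4:7] = True   # second "arthropod"
```

The second, learned method would replace this blob count with coordinate predictions from the trained model; the two counts can then be reconciled to improve accuracy.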

In many embodiments, a mobile application is used by end users for capturing images, uploading images and/or data (e.g., image data and/or count data), and viewing count information. Figure 4A shows a view of the camera interface, which can include a window to properly frame the sticky/adhesive pad and capture images of trapped arthropods. The processed images can be returned inside this tab, showing raw images of the arthropods that have been counted and, if desired, probabilities of species in the image (see also Figures 2, 3A, 3B, and 4B). Figure 4C shows a view of a display of the history of counts of arthropods. The history can show individual user counts, counts of a larger community of users, or both. Counts of arthropods can be plotted with respect to time, broken into personal counts for the user and global/community counts with maps of relevance to the user's location (see also Figure 2). The interface can include functionality to share results (e.g., via social media or email, such as via Twitter, Instagram, Facebook, Gmail, Outlook, etc.), as seen in the lower portion of the display in Figure 4C. The mobile application can also show community rankings and/or product recommendations along with a portal for purchasing new products such as adhesive pads, repellents, insecticides, and/or add-ons to traps to improve their efficacy based on the real-time data stream from the global userbase (see also Figure 2). The secure application, which can be used instead of (or in some cases in addition to) the mobile application, can include these displays and interfaces as well, though the camera interface shown in Figure 4A may not be necessary in many cases.

In some embodiments, images can be processed remotely on a remote server (e.g., a cloud-based server). In some embodiments, however, the images can be processed in the application (mobile application or secure application) using machine learning frameworks (e.g., machine learning frameworks specialized for iOS® and/or Android®). The image processing can include automatically detecting an ROI (e.g., the adhesive pad) and counting arthropods trapped thereon.

The systems and methods of embodiments of the subject invention can include predictive modeling from trap data. Image data collected from traps can provide a running count of arthropods across seasons and by region (based on the location of the end users providing the image data). Count data may be coupled with incidence data, and time series models can be built to anticipate seasonal risk (e.g., in regions with endemic vector-borne illnesses). The built models can be autoregressive moving average (ARMA) and/or autoregressive integrated moving average (ARIMA) models, and/or supervised (or unsupervised) machine learning models, such as Decision Trees (DTs) and/or extensions of DTs, such as Random Forest (RF) and Gradient Boosting Machines (GBMs), which can be built using DTs as the base learning algorithm. Additionally, recurrent neural networks (RNNs) can be used in the modeling. Collectively, the models offer a consensus forecast for arthropod vector monitoring.
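The autoregressive idea behind the ARMA/ARIMA models above can be illustrated with the simplest case: fitting an AR(1) model to a count series by least squares and rolling it forward. The sketch below uses a synthetic, noiseless weekly count series so the fitted coefficients are recoverable; a production system would more likely use a dedicated library (e.g., statsmodels) for full ARMA/ARIMA fitting:

```python
import numpy as np

def fit_ar1(series):
    """Fit y[t] = c + phi * y[t-1] by ordinary least squares."""
    y_prev, y_curr = series[:-1], series[1:]
    X = np.column_stack([np.ones_like(y_prev), y_prev])
    (c, phi), *_ = np.linalg.lstsq(X, y_curr, rcond=None)
    return c, phi

def forecast(series, c, phi, steps):
    """Roll the fitted AR(1) model forward `steps` periods."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Synthetic weekly trap counts generated by y[t] = 5 + 0.8*y[t-1]
counts = [10.0]
for _ in range(30):
    counts.append(5 + 0.8 * counts[-1])
c, phi = fit_ar1(np.array(counts))
```

With the coefficients recovered, `forecast` projects counts for upcoming weeks; in the consensus scheme described above, such projections would be combined with those from the tree-based and RNN models.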

The systems and methods of embodiments of the subject invention can include species identification. A CNN can be trained on diverse images of arthropods with labels for the specific species to learn a mapping between morphological features of arthropods and a species label, such that images lacking expert annotation may be processed (e.g., locally or on the remote server) and returned with probabilities of different species contained in the image. In some embodiments, this CNN can be built into the mobile application (or secure application) and can run on the local operating system (e.g., iOS, Android, Windows) rather than remotely. This can offer a performance advantage in terms of computing and processing time.
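The "probabilities of different species" output described above corresponds to the softmax head of the classification network. A minimal sketch of that final step is shown below; the species labels and logit values are hypothetical, and the convolutional layers that would map an image to `logits` are assumed to be trained elsewhere:

```python
import numpy as np

# Hypothetical species labels the CNN was trained on
SPECIES = ["Aedes aegypti", "Culex pipiens", "Anopheles gambiae"]

def species_probabilities(logits):
    """Convert the CNN's raw output scores into per-species probabilities.

    This is the final softmax step of the species-identification network;
    `logits` is one raw score per species, produced by the (assumed
    pre-trained) convolutional feature extractor.
    """
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                    # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return dict(zip(SPECIES, p))

# Hypothetical raw scores for one trap image
probs = species_probabilities([2.0, 0.5, 0.1])
```

The returned dictionary is what the application would display to the user alongside the counted image.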

The systems and methods of embodiments of the subject invention can include a recommendation engine. The vector/arthropod monitoring can be tied to a recommendation algorithm that suggests products that are most relevant within a region (based on the arthropod count data). The user interaction with these recommendations may in turn be used as a training set for Artificial Intelligence (AI) or Machine Learning (ML) models, adjusting product rankings according to user likes or ratings. Example recommendations include sprays with high insecticidal or repellent activity against a specific species that is prevalent in the region, according to the monitoring data. Another example is lured adhesive pads to be inserted into traps that are highly efficacious against relevant species, given the monitoring data.
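One way to realize such a recommendation algorithm is to score each product by its efficacy against the species that dominate the regional counts, blended with user ratings. The sketch below is a hypothetical scoring scheme, not the claimed algorithm; the product names, efficacy values, and 0.3 rating weight are all illustrative assumptions:

```python
def recommend(products, regional_counts, weight_ratings=0.3):
    """Rank products for a region by a hypothetical relevance score.

    products: list of dicts with 'name', 'efficacy' (species -> 0..1),
              and 'rating' (mean user rating, 0..5).
    regional_counts: species -> trap count in the user's region.
    The score weights efficacy by local species prevalence and blends
    in normalized user ratings; the weighting is illustrative only.
    """
    total = sum(regional_counts.values()) or 1
    prevalence = {sp: n / total for sp, n in regional_counts.items()}

    def score(p):
        match = sum(prevalence.get(sp, 0.0) * eff
                    for sp, eff in p["efficacy"].items())
        return (1 - weight_ratings) * match + weight_ratings * (p["rating"] / 5)

    return sorted(products, key=score, reverse=True)

# Hypothetical example: Aedes aegypti dominates the region's trap counts
products = [
    {"name": "SprayA", "efficacy": {"Aedes aegypti": 0.9}, "rating": 4.0},
    {"name": "PadB", "efficacy": {"Culex pipiens": 0.9}, "rating": 5.0},
]
ranked = recommend(products, {"Aedes aegypti": 90, "Culex pipiens": 10})
```

User likes and ratings collected through the application could then adjust the rating term (or the weighting itself) over time, as described above.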

Embodiments of the subject invention include glue or adhesive pads adaptable for use in existing hanging and/or tabletop traps for killing or catching arthropods (e.g., insects such as mosquitoes), as well as applications (e.g., web-based applications, mobile applications, and/or secure applications) to analyze arthropods trapped on the adhesive pad inserted in the trap. The trap can include an attractive compound (e.g., lure) and/or an insecticidal agent, such as a pyrethroid. The application can include a tab to take pictures of the trap, aligning a camera to capture images of the adhesive pad from a given trap, returning daily and monthly arthropod counts as well as arthropod detection to the user of the application. The analysis of the image(s) can be done remotely in a cloud-computing environment (e.g., on a remote server in operable communication (e.g., over the internet) with the device(s) used by the end user(s)), relaying the results back to the end user device. Alternatively, the image analysis can be done on the device used by the end user(s). The application can include one or more machine learning models incorporated therein to perform the analysis (instead of the analysis being done in a cloud-computing environment). The application can include a tab for ranking, awards, and/or social media based on user participation. The application can also include global and/or local (e.g., county) density maps of arthropod counts, and the density maps can vary according to the location (e.g., global positioning satellite (GPS) location) of the mobile device of the end user (see also Figure 2 and Figure 4C).
The application can include a tab for relevant product recommendations, based on a continuously updated algorithm (e.g., an artificial intelligence (AI) algorithm) using arthropod counts and/or weather conditions to rank repellent, trap, and/or insecticidal products (and/or other associated components) to broadly optimize arthropod vector control or, more specifically, personal protection. This collection of recommendations can reflect or be based on the risks for a given location (e.g., GPS location) and/or mitigation strategies as well as user likes and ratings of these products. The application can include advanced, specialized algorithms for use in global markets where vector-borne illnesses such as malaria are endemic. These can be based on historical factors and fluctuations in the incidence of reported cases.

Embodiments of the subject invention provide a focused technical solution to the focused technical problem of addressing arthropod infestations and how best to deal with them. Embodiments of the subject invention also improve the computer system (e.g., mobile devices running a mobile application and/or static computer devices running a secure application) on which the software module is running because the image analysis can be run in an efficient manner, thereby conserving computing resources, and also because the software module can provide predictive models for arthropod infestation and also provide recommendations for products to address the same (i.e., turning the mobile device and/or static computer device into a device capable of predicting arthropod infestation patterns and addressing the same).

The methods and processes described herein can be embodied as code and/or data. The software code and data described herein can be stored on one or more machine-readable media (e.g., computer-readable media), which may include any device or medium that can store code and/or data for use by a computer system. When a computer system and/or processor reads and executes the code and/or data stored on a computer-readable medium, the computer system and/or processor performs the methods and processes embodied as data structures and code stored within the computer-readable storage medium.

It should be appreciated by those skilled in the art that computer-readable media include removable and non-removable structures/devices that can be used for storage of information, such as computer-readable instructions, data structures, program modules, and other data used by a computing system/environment. A computer-readable medium includes, but is not limited to, volatile memory such as random access memories (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only-memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM), and magnetic and optical storage devices (hard drives, magnetic tape, CDs, DVDs); network devices; or other media now known or later developed that are capable of storing computer-readable information/data. Computer-readable media should not be construed or interpreted to include any propagating signals. A computer-readable medium of embodiments of the subject invention can be, for example, a compact disc (CD), digital video disc (DVD), flash memory device, volatile memory, or a hard disk drive (HDD), such as an external HDD or the HDD of a computing device, though embodiments are not limited thereto. A computing device can be, for example, a laptop computer, desktop computer, server, cell phone, or tablet, though embodiments are not limited thereto.

When ranges are used herein, such as for dose ranges, combinations and subcombinations of ranges (e.g., subranges within the disclosed range), specific embodiments therein are intended to be explicitly included. When the term "about" is used herein, in conjunction with a numerical value, it is understood that the value can be in a range of 95% of the value to 105% of the value; that is, the value can be +/- 5% of the stated value. For example, "about 1 kg" means from 0.95 kg to 1.05 kg.

A greater understanding of the embodiments of the subject invention and of their many advantages may be had from the following examples, given by way of illustration. The following examples are illustrative of some of the methods, applications, embodiments, and variants of the present invention. They are, of course, not to be considered as limiting the invention. Numerous changes and modifications can be made with respect to embodiments of the invention.

EXAMPLE 1

An image of a trap was taken within a mobile application as described herein, using an iPhone®. The captured image is shown in Figure 4A. The mobile application identified the adhesive pad as an ROI, as seen in Figure 3A. The mobile application then analyzed the region of interest and identified nine mosquitoes, as seen in Figure 3B. Figure 4B shows the identification of the ROI and the count of the mosquitoes within the mobile application.

It should be understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application.

All patents, patent applications, provisional applications, and publications referred to or cited herein are incorporated by reference in their entirety, including all figures and tables, to the extent they are not inconsistent with the explicit teachings of this specification.