

Title:
METHOD AND SYSTEM FOR CLASSIFYING ANIMAL CARCASS
Document Type and Number:
WIPO Patent Application WO/2018/186796
Kind Code:
A1
Abstract:
Disclosed is a method for classifying an animal carcass. The method comprises capturing at least one image of the animal carcass, using at least one imaging device; sending the captured at least one image to a server arrangement; and processing the at least one image to determine a class of the animal carcass.

Inventors:
KONRADSSON, Ove (Hälsingsgården, Falköping, 521 96, SE)
EKE-GÖRANSSON, Per (Älvestad 301, Vadstena, 592 92, SE)
LILJA, Matts (Jaktfalksgatan 67, Helsingborg, 254 49, SE)
BJÖRK, Helena (Mariehällsvägen 9c, Helsingborg, 254 50, SE)
Application Number:
SE2018/050358
Publication Date:
October 11, 2018
Filing Date:
April 05, 2018
Assignee:
SMART AGRITECH OF SWEDEN AB (Hälsingsgården, Falköping, 521 96, SE)
International Classes:
A22B5/00
Foreign References:
US5194036A (1993-03-16)
EP0321981A1 (1989-06-28)
GB2247524A (1992-03-04)
Other References:
GOYACHE F ET AL: "The usefulness of artificial intelligence techniques to assess subjective quality of products in the food industry", TRENDS IN FOOD SCIENCE AND TECHNOLOGY, ELSEVIER SCIENCE PUBLISHERS, GB, vol. 12, no. 10, 1 October 2001 (2001-10-01), pages 370 - 381, XP004369702, ISSN: 0924-2244, DOI: 10.1016/S0924-2244(02)00010-9
OLIVER A ET AL: "Predicting meat yields and commercial meat cuts from carcasses of young bulls of Spanish breeds by the SEUROP method and an image analysis system", MEAT SCIENCE, ELSEVIER SCIENCE, GB, vol. 84, no. 4, 1 April 2010 (2010-04-01), pages 628 - 633, XP026907423, ISSN: 0309-1740, [retrieved on 20091028], DOI: 10.1016/J.MEATSCI.2009.10.022
PAUL ALLEN ET AL: "OBJECTIVE BEEF CARCASS CLASSIFICATION A REPORT OF A TRIAL OF THREE VIA CLASSIFICATION SYSTEMS", 1 May 2000 (2000-05-01), XP055271605, Retrieved from the Internet [retrieved on 20160510]
Attorney, Agent or Firm:
GIPON KONSULT AB (Brynjevägen 12, Löddeköpinge, 246 34, SE)
Claims:
CLAIMS

1. A method for classifying an animal carcass, the method comprising:

- capturing at least one image of the animal carcass, using at least one imaging device;

- sending the captured at least one image to a server arrangement; and

- processing the at least one image to determine a class of the animal carcass, comprising comparing the captured at least one image with a pre-defined template of a classification system to determine grades corresponding to different attributes of the animal carcass, wherein the class of the animal carcass is determined by combining the grades corresponding to the different attributes of the animal carcass.

2. A method according to claim 1, further comprising tagging the animal carcass to assist classification thereof by attaching a tag to the animal carcass, the tag comprising information associated with the animal.

3. A method according to claim 2, further comprising detecting the tag, using the at least one imaging device.

4. A method according to any of the preceding claims, wherein the animal carcass is cut longitudinally and/or transversally.

5. A method according to any of the preceding claims, further comprising processing a captured at least a pair of images to form at least one three-dimensional image, using a computing device.

6. A method according to claim 5, further comprising sending the at least one three-dimensional image to the server arrangement.

7. A method according to claim 1, wherein the classification system is one of: the European Union EUROP grid classification system, the United States Department of Agriculture grading system, or the South African Meat Industry Company meat classification system.

8. A method according to any of the claims 2-7, further comprising associating the information in the tag with the determined class of the animal carcass.

9. A method according to any of the preceding claims, further comprising receiving the determined class of the animal carcass from the server arrangement.

10. A method according to any of the preceding claims, further comprising capturing at least one image of an animal prior to slaughtering of the animal, for identifying the information associated with the animal.

11. A method according to claim 10, wherein identifying the information associated with the animal comprises processing the captured at least one image of the animal.

12. A system for classifying an animal carcass, the system comprising:

- at least one imaging device operable to capture at least one image of the animal carcass; and

- a server arrangement communicably coupled to the at least one imaging device via a network, the server arrangement operable to receive and process the captured at least one image to determine a class of the animal carcass,

wherein the server arrangement is operable to compare the captured at least one image with a pre-defined template of a classification system to determine grades corresponding to different attributes of the animal carcass, wherein the class of the animal carcass is determined by combining the grades corresponding to the different attributes of the animal carcass.

13. A system according to claim 12, wherein the system further comprises a tag attached to the animal carcass, the tag comprising information associated with the animal.

14. A system according to claim 13, wherein the at least one imaging device is operable to detect the tag.

15. A system according to any of the claims 12-14, further comprising a computing device communicably coupled to the at least one imaging device and the server arrangement.

16. A system according to claim 15, wherein the computing device is operable to:

- receive the captured at least one image from the at least one imaging device; and

- send the received at least one image to the server arrangement.

17. A system according to any of the claims 12-16, wherein the at least one imaging device comprises: a three-dimensional camera or at least one two-dimensional camera.

18. A system according to claim 17, wherein the at least one two-dimensional camera is operable to capture at least a pair of images of the animal carcass, and the computing device is operable to process the captured at least the pair of images to form at least one three-dimensional image.

19. A system according to claim 18, wherein the computing device is further operable to send the at least one three-dimensional image to the server arrangement.

20. A system according to claim 12, wherein the classification system is one of: the European Union EUROP grid classification system, the United States Department of Agriculture grading system, or the South African Meat Industry Company meat classification system.

21. A system according to any of the claims 13-20, wherein the server arrangement comprises a database for storing the pre-defined template of the classification system and the information in the tag associated with the animal.

22. A system according to any of the claims 13-21, wherein the computing device is operable to associate the information in the tag with the determined class of the animal carcass.

23. A system according to any of the claims 12-22, wherein the computing device is operable to receive the determined class of the animal carcass from the server arrangement.

24. A system according to any of the claims 12-23, wherein the at least one imaging device is operable to capture at least one image of an animal prior to slaughtering of the animal, for identifying information associated with the animal.

25. A system according to claim 24, wherein the server arrangement is operable to process the captured at least one image of the animal for identifying the information associated with the animal.

26. A system according to any of the claims 12-25, wherein the at least one imaging device is handheld by an operator and/or mounted on a slaughter line.

Description:
METHOD AND SYSTEM FOR CLASSIFYING ANIMAL CARCASS

TECHNICAL FIELD

The present disclosure relates generally to livestock farming; and more specifically, to a method and system for classifying an animal carcass.

BACKGROUND

In recent times, there have been significant advancements in livestock farming practices. Consequently, animal farmers raise large numbers of animals to provide commodities such as meat, fibre, and so forth, for human use; the animals are raised and eventually slaughtered to yield animal carcasses. However, several factors, such as the age of an animal, the environment in which it was raised, its diet, its breed, and so forth, influence the quality of its carcass. Therefore, animal carcasses need to be correctly analysed to ensure the quality and correct pricing of the animal products associated therewith.

However, there are drawbacks associated with existing methods for the evaluation and classification of animal carcasses. Conventionally, such evaluation and classification is performed via manual inspection of the animal carcasses by certified personnel. Such manual inspection is time-consuming and cumbersome, and the animal carcasses have to be monitored carefully to identify any deformations or birth defects present therein. Since manual classification is subjective with regard to the skills of the certified personnel, it is prone to errors and is unreliable. Further, manual inspection of animal carcasses does not provide a detailed estimation of the amount of fat and muscle in the animal carcass. In such instances, an inaccurate classification of the animal carcasses may lead to incorrect pricing and may decrease the profit margins of animal farmers. Moreover, an inaccurately classified animal carcass may not meet the requirements of buyers.

Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks associated with classification of the animal carcasses.

SUMMARY

The present disclosure seeks to provide a method for classifying an animal carcass.

The present disclosure also seeks to provide a system for classifying an animal carcass.

In one aspect, an embodiment of the present disclosure provides a method for classifying an animal carcass, the method comprising:

- capturing at least one image of the animal carcass, using at least one imaging device;

- sending the captured at least one image to a server arrangement; and

- processing the at least one image to determine a class of the animal carcass, comprising comparing the captured at least one image with a pre-defined template of a classification system to determine grades corresponding to different attributes of the animal carcass, wherein the class of the animal carcass is determined by combining the grades corresponding to the different attributes of the animal carcass.

The present disclosure seeks to provide a solution to the existing problems associated with classification of animal carcasses. The described method for classifying an animal carcass eliminates manual inspection and is therefore accurate and reliable.

Optionally, the method comprises tagging the animal carcass to assist classification thereof by attaching a tag to the animal carcass, the tag comprising information associated with the animal.

More optionally, the method further comprises detecting the tag, using the at least one imaging device.

Optionally, the animal carcass is cut longitudinally and/or transversally.

Optionally, the method comprises processing at least a pair of captured images to form at least one three-dimensional image, using a computing device.

More optionally, the method further comprises sending the at least one three-dimensional image to the server arrangement.

Optionally, the classification system is one of: the European Union EUROP grid classification system, the United States Department of Agriculture grading system, or the South African Meat Industry Company meat classification system.

Optionally, the method comprises associating the information in the tag with the determined class of the animal carcass.

More optionally, the method further comprises receiving the determined class of the animal carcass from the server arrangement.

Optionally, the method further comprises capturing at least one image of an animal prior to slaughtering of the animal, for identifying the information associated with the animal.

More optionally, identifying the information associated with the animal comprises processing the captured at least one image of the animal.

In another aspect, an embodiment of the present disclosure provides a system for classifying an animal carcass, the system comprising:

- at least one imaging device operable to capture at least one image of the animal carcass; and

- a server arrangement communicably coupled to the at least one imaging device via a network, the server arrangement operable to receive and process the captured at least one image to determine a class of the animal carcass,

wherein the server arrangement is operable to compare the captured at least one image with a pre-defined template of a classification system to determine grades corresponding to different attributes of animal carcass wherein the class of the animal carcass is determined by combining grades corresponding to different attributes of animal carcass.

Optionally, the system further comprises a tag attached to the animal carcass, the tag comprising information associated with the animal.

More optionally, the at least one imaging device is operable to detect the tag.

Optionally, the system comprises a computing device communicably coupled to the at least one imaging device and the server arrangement.

Optionally, the computing device is operable to:

- receive the captured at least one image from the at least one imaging device; and

- send the received at least one image to the server arrangement.

Optionally, the at least one imaging device comprises: a three-dimensional camera or at least one two-dimensional camera.

Optionally, the at least one two-dimensional camera is operable to capture at least a pair of images of the animal carcass, and the computing device is operable to process the captured at least the pair of images to form at least one three-dimensional image.

Optionally, the computing device is further operable to send the at least one three-dimensional image to the server arrangement.

Optionally, the classification system is one of: European Union EUROP grid classification system, United States Department of Agriculture grading system or South African Meat Industry Company meat classification system.

Optionally, the server arrangement comprises a database for storing the pre-defined template of the classification system and the information in the tag associated with the animal.

Optionally, the computing device is operable to associate the information in the tag with the determined class of the animal carcass.

Optionally, the computing device is operable to receive the determined class of the animal carcass from the server arrangement.

Optionally, the at least one imaging device is operable to capture at least one image of an animal prior to slaughtering of the animal for identifying information associated with the animal.

Optionally, the server arrangement is operable to process the captured at least one image of the animal for identifying the information associated with the animal.

Optionally, the at least one imaging device is handheld by an operator and/or mounted on a slaughter line.

The present disclosure provides a method and system for classifying an animal carcass. The described method and system are easy to implement and time-efficient. Further, the described method is free from errors introduced by manual inspection techniques, and is therefore highly accurate and reliable. Consequently, the described method and system may be used to identify deformations or birth defects in the animal carcass. Optionally, detailed estimations of the amount of fat and muscle in the animal carcass may be obtained and utilised to correctly price the animal carcass, thereby benefitting animal farmers.

Additional aspects, advantages, features and objects of the present disclosure will be made apparent from the drawings and the detailed description of the illustrative embodiments construed in conjunction with the appended claims that follow.

It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.

Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:

FIGs. 1-2 are schematic illustrations of exemplary systems for classifying an animal carcass, in accordance with different embodiments of the present disclosure;

FIGs. 3-4 are schematic illustrations of an animal carcass, in accordance with different embodiments of the present disclosure; and

FIG. 5 illustrates steps of a method for classifying an animal carcass, in accordance with an embodiment of the present disclosure.

In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.

DETAILED DESCRIPTION OF EMBODIMENTS

The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.

In one aspect, an embodiment of the present disclosure provides a method for classifying an animal carcass, the method comprising:

- capturing at least one image of the animal carcass, using at least one imaging device;

- sending the captured at least one image to a server arrangement; and

- processing the at least one image to determine a class of the animal carcass, comprising comparing the captured at least one image with a pre-defined template of a classification system to determine grades corresponding to different attributes of the animal carcass, wherein the class of the animal carcass is determined by combining the grades corresponding to the different attributes of the animal carcass.

In another aspect, an embodiment of the present disclosure provides a system for classifying an animal carcass, the system comprising:

- at least one imaging device operable to capture at least one image of the animal carcass; and

- a server arrangement communicably coupled to the at least one imaging device via a network, the server arrangement operable to receive and process the captured at least one image to determine a class of the animal carcass,

wherein the server arrangement is operable to compare the captured at least one image with a pre-defined template of a classification system to determine grades corresponding to different attributes of the animal carcass, wherein the class of the animal carcass is determined by combining the grades corresponding to the different attributes of the animal carcass.

In an embodiment, the term 'animal' used herein refers to domestic animals reared to provide resources for human consumption. Specifically, the animal may be reared by animal farmers to provide resources such as meat, skin, wool, and so forth. In an example, the animals may include cattle, pigs, chickens, goats, sheep, and so forth. Optionally, the term 'animal' may also include wild animals that may provide resources useful for human consumption.

According to an embodiment, the term 'animal carcass' used herein refers to the body of the animal. Specifically, the animal carcass may exclude undesirable parts of the body of the animal. Therefore, the body of the animal may be processed to remove such undesirable parts to yield the animal carcass. It is to be understood that the undesirable parts of the body of the animal may relate to organs or parts that are exposed to the external environment and/or do not have any commercial utility for human consumption. For example, the undesirable parts may include the head and feet of the animal. Specifically, the processing of the animal to yield the animal carcass may include removal of inedible organs, skin, and blood from the body of the animal. Optionally, the animal carcass may be cleaned and subjected to cold temperatures to prevent growth of micro-organisms thereon.

The method for classifying the animal carcass comprises capturing at least one image of the animal carcass, using at least one imaging device. Specifically, the at least one image of the animal carcass may be captured from different angles with respect to the animal carcass, using the at least one imaging device. More specifically, the at least one image may be captured so as to capture vital features of the animal carcass. Further, the at least one imaging device may be strategically positioned to capture different aspects of the animal carcass from different perspectives.

The system for classifying an animal carcass comprises at least one imaging device operable to capture at least one image of the animal carcass. In an embodiment, the at least one imaging device may comprise a three-dimensional camera or at least one two-dimensional camera. Specifically, the three-dimensional camera may capture at least one three-dimensional image of the animal carcass, and the at least one two-dimensional camera may capture at least one two-dimensional image of the animal carcass.

In one embodiment, the three-dimensional camera may be a stereo camera. Further, the at least one three-dimensional image captured using the three-dimensional camera may provide an estimation of the geometry of the animal carcass. Furthermore, the at least one three-dimensional image may assist in identifying any deformations or birth defects in the animal carcass.

In an embodiment, the at least one two-dimensional camera is operable to capture at least a pair of images of the animal carcass. Specifically, the at least a pair of images may be captured from distinct perspectives and may be utilized to form a three-dimensional image of the animal carcass. In an example, the system may include one two-dimensional camera. Specifically, the two-dimensional camera may capture the at least a pair of images of the animal carcass sequentially. It is to be understood that the position of the animal carcass may be fixed during the time period of capturing the at least a pair of images. In such an instance, the two-dimensional camera may be moved from a first position to a second position with respect to the animal carcass, to capture the at least a pair of images thereof. Further, the at least a pair of two-dimensional images may be captured from different angles such that the images share a set of coincident pixels. Therefore, the at least a pair of two-dimensional images may be processed to obtain at least one three-dimensional image using the shared set of coincident pixels.

In another example, the system may include at least two two-dimensional cameras. Specifically, the at least two two-dimensional cameras may be placed in a mutually spaced-apart configuration proximate to a slaughter line whereat the animal carcass may be positioned. Further, the at least two two-dimensional cameras may be placed in the mutually spaced configuration such that the at least a pair of images captured by them shares a set of coincident pixels. Furthermore, the at least a pair of images may be processed to obtain at least one three-dimensional image using the shared set of coincident pixels. Additionally, in such an example, the at least a pair of two-dimensional images of the animal carcass may be captured simultaneously by externally synchronizing the at least two two-dimensional cameras. Examples of two-dimensional cameras include, but are not limited to, digital cameras, smart-phone cameras, tablet cameras, action cameras and magnetic resonance imaging (MRI) cameras. Furthermore, the at least one image of the animal carcass captured by the at least one two-dimensional camera, such as a magnetic resonance imaging (MRI) camera, may provide an estimation of the fat and muscle content, degree of marbling and bone structure of the animal carcass.

According to an embodiment, the animal carcass may be cut longitudinally and/or transversally. Further, the animal carcass may be cut to obtain at least one longitudinal and/or transversal section thereof. Therefore, the at least one imaging device may capture at least one image of each of the at least one longitudinal and/or transversal sections of the animal carcass.
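The two-camera geometry described above can be sketched as a simple stereo triangulation. This is a minimal illustration rather than anything specified in the application: the function name and the numeric values (focal length, baseline, pixel coordinates) are assumptions, and the sketch presumes a rectified stereo pair so that a coincident pixel differs between the two views only in its horizontal coordinate.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of one coincident pixel pair seen by two horizontally
    spaced, rectified cameras: Z = f * b / disparity."""
    disparity = x_left - x_right  # horizontal shift between the two views
    if disparity <= 0:
        raise ValueError("non-positive disparity: not a valid coincident pair")
    return focal_px * baseline_m / disparity

# A carcass feature at x=640 in the left image and x=600 in the right,
# with a 1000 px focal length and cameras 0.5 m apart:
depth = depth_from_disparity(640, 600, focal_px=1000, baseline_m=0.5)
print(depth)  # 12.5 (metres)
```

Repeating this over every shared coincident pixel yields the point cloud from which the three-dimensional image, and hence the carcass geometry estimate, can be formed.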

In an embodiment, the at least one imaging device may be handheld by an operator and/or mounted on the slaughter line. In an example, the at least one imaging device may be mounted on the slaughter line at a pre-determined distance from the animal carcass. Specifically, the pre-determined distance may be in a range of 3 to 5 meters. More specifically, the pre-determined distance may be 3 meters, 3.5 meters, 4 meters, 4.5 meters, and so forth. In another example, the at least one imaging device may be handheld by the operator to capture the at least one image of the animal carcass.

The method further comprises sending the captured at least one image to a server arrangement, and processing the at least one image to determine a class of the animal carcass. Specifically, the at least one imaging device may be operable to send the captured at least one image to the server arrangement, and the server arrangement may be operable to process the captured at least one image. In an embodiment, the at least one imaging device directly sends the captured at least one image to the server arrangement. In such an embodiment, the at least one imaging device and the server arrangement may comprise compatible communication modules. In an alternate embodiment, the captured at least one image may be sent from the at least one imaging device to the server arrangement via intermediate devices, such as network devices, computing devices, and so forth.

The server arrangement is communicably coupled to the at least one imaging device via a network, the server arrangement being operable to receive and process the captured at least one image to determine a class of the animal carcass. In an embodiment, the server arrangement may be hardware, software, firmware, or a combination of these, operable to facilitate classification of the animal carcass. Further, examples of the network include, but are not limited to, the Internet, short-range radio, and cellular networks.

According to an embodiment, the system may further comprise a computing device communicably coupled to the at least one imaging device and the server arrangement. Specifically, the computing device may be hardware, software, firmware, or a combination of these, operable to communicate with each of the at least one imaging device and the server arrangement to facilitate classification of the animal carcass. Examples of the computing device include, but are not limited to, a smart-phone, a desktop computer, a laptop computer, and a tablet computer.

In an embodiment, the computing device may be operable to receive the captured at least one image from the at least one imaging device, and send the received at least one image to the server arrangement. Specifically, the computing device may comprise a transceiver module to receive and send the captured at least one image. In an example, the computing device may receive two three-dimensional images of the animal carcass from the three-dimensional camera, and may send the two three-dimensional images to the server arrangement.

In an embodiment, the method may comprise sending the at least one three-dimensional image to the server arrangement. In an example, the at least one three-dimensional image may be sent directly from the at least one imaging device to the server arrangement. According to an embodiment, the computing device may be further operable to send the at least one three-dimensional image to the server arrangement. In one embodiment, the at least one three-dimensional image may be received by the computing device from the three-dimensional camera. In another embodiment, the computing device may be operable to process the captured at least the pair of images to form the at least one three-dimensional image. Specifically, the at least the pair of images may be two-dimensional, and may be obtained from the at least one two-dimensional camera. More specifically, the computing device may form the at least one three-dimensional image using the shared set of coincident pixels, from the at least a pair of images, as reference points. It is to be understood that the at least one three-dimensional image may provide an estimation of the geometry of the animal carcass. Therefore, in one embodiment, the method may comprise processing the captured at least a pair of images to form at least one three-dimensional image, using the computing device.

The method of classifying the animal carcass further comprises processing the at least one image to determine a class of the animal carcass. Specifically, the class of the animal carcass is determined by comparing different attributes of the animal carcass, as depicted in the at least one image thereof, against known attributes pertaining to pre-defined (or known) classes of animal carcasses. Therefore, processing the at least one image may constitute a feature analysis step.

Furthermore, processing the captured at least one image comprises comparing the captured at least one image with a pre-defined template of a classification system to determine grades corresponding to different attributes of the animal carcass. Moreover, the server arrangement is operable to compare the captured at least one image with the pre-defined template of the classification system to determine the grades corresponding to the different attributes of the animal carcass. Specifically, the server arrangement may utilize pattern recognition and pattern comparison algorithms to identify similar patterns between the at least one image and the pre-defined template to determine the grades. Such pattern recognition algorithms may be used for identifying different attributes in the captured at least one image. Specifically, each of the attributes of the animal carcass is assigned a corresponding grade based on the quality and characteristics thereof. For example, the server arrangement may process the captured at least one image of the animal carcass to determine the degree of fat marbling in the animal carcass by comparing it with the pre-defined template.
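The per-attribute grading described above can be read as matching each measured attribute against template thresholds. The sketch below is a hypothetical simplification: the attribute names, scoring scale, and threshold values are invented for illustration and are not taken from the application or from any real classification template.

```python
# Hypothetical template: per attribute, (minimum score, grade) pairs,
# ordered from the best grade downwards.
TEMPLATE = {
    "conformation": [(0.80, "E"), (0.65, "U"), (0.50, "R"), (0.35, "O"), (0.00, "P")],
    "fatness":      [(0.40, "5"), (0.30, "4"), (0.20, "3"), (0.10, "2"), (0.00, "1")],
}

def grade_attributes(measurements, template):
    """Assign each measured attribute the first grade whose
    template threshold the measured score meets."""
    grades = {}
    for attribute, score in measurements.items():
        for threshold, grade in template[attribute]:
            if score >= threshold:
                grades[attribute] = grade
                break
    return grades

print(grade_attributes({"conformation": 0.70, "fatness": 0.25}, TEMPLATE))
# {'conformation': 'U', 'fatness': '3'}
```

In a real system the "scores" would themselves come from the pattern recognition step run on the captured image; here they stand in as already-extracted numbers.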

Moreover, the class of the animal carcass is determined by combining the grades corresponding to different attributes of the animal carcass. Specifically, each of the grades corresponding to the different attributes of the animal carcass is combined to determine the class of the animal carcass. The class of the animal carcass takes into consideration different attributes thereof and thus is a significant measure of the quality of the animal carcass. In an example, the animal carcass may comprise a desired amount of fat marbling therein. However, in such an example, the animal carcass may have a diseased portion. Therefore, the class of such an animal carcass is adversely affected by the presence of the diseased portion. Furthermore, the grades corresponding to the different attributes of the animal carcass may be combined using mathematical operations and/or computational algorithms.

According to an embodiment, processing the at least one image may comprise comparing the at least one three-dimensional image with a pre-defined template of the classification system to determine the class of the animal carcass. Specifically, the server arrangement may be operable to compare the at least one three-dimensional image with the pre-defined template of the classification system to determine the class of the animal carcass. In one example, the server arrangement may compare the at least one three-dimensional image, captured by the three-dimensional camera, with the pre-defined template of the classification system. In an alternate example, the server arrangement may compare the at least one three-dimensional image, formed by processing the at least the pair of two-dimensional images, with the pre-defined template of the classification system.

According to an embodiment, processing the at least one image may facilitate estimation of the geometry of the animal carcass. The geometry of the animal carcass may help in determination of the weight, fat, conformation, and so forth of the animal carcass. Further, due to processing of the at least one image, the server arrangement may identify any deformation or abnormalities in the animal carcass. In an example, the server arrangement may identify any diseased portions of the animal carcass from the at least one three-dimensional image. According to an embodiment, the classification system may be one of: the European Union EUROP grid classification system, the United States Department of Agriculture grading system, and the South African Meat Industry Company meat classification system. Specifically, the European Union EUROP grid classification system classifies animal carcasses based on the conformation and the amount of fat in the animal carcass. In an example, the conformation of the animal carcass is determined based on the amount of muscle with respect to bones in the animal carcass. Specifically, the animal carcass may be provided with a conformation grade such as E, U+, -U, R, O+, -O, P+ and -P based on the conformation of the animal carcass. Further, the animal carcass may be provided with a fatness grade such as 1, 2, 3, 4L, 4H, 5L and 5H based on the amount of fat in the animal carcass. Furthermore, the class of the animal carcass may be determined by combining the conformation and fatness grades of the animal carcass. Additionally, the United States Department of Agriculture grading system classifies the animal carcass based on the degree of maturity and the degree of marbling of the animal carcass. It is to be understood that each of the aforementioned classification systems may have a distinct pre-defined template associated therewith. Moreover, the pre-defined templates of the classification systems may be defined on the basis of weight, fat, intramuscular fat, texture, colour, conformation, and so forth, of the animal carcass.
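The combination of a conformation grade and a fatness grade into a single EUROP class can be sketched as follows. The two grade sets follow the examples given in the text; the validation and concatenation logic is an assumption made for illustration.

```python
# Sketch: combining a EUROP conformation grade and a fatness grade into
# a class label. Grade sets follow the examples in the text; the
# combination as simple concatenation is an illustrative assumption.

CONFORMATION_GRADES = ["E", "U+", "-U", "R", "O+", "-O", "P+", "-P"]
FATNESS_GRADES = ["1", "2", "3", "4L", "4H", "5L", "5H"]

def europ_class(conformation: str, fatness: str) -> str:
    """Validate both grades and combine them into one class label."""
    if conformation not in CONFORMATION_GRADES:
        raise ValueError(f"unknown conformation grade: {conformation}")
    if fatness not in FATNESS_GRADES:
        raise ValueError(f"unknown fatness grade: {fatness}")
    return f"{conformation}{fatness}"
```

For instance, a carcass graded R for conformation and 3 for fatness would receive the combined class "R3" under this sketch.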

In an embodiment, the method may further comprise capturing at least one image of an animal prior to slaughtering of the animal, for identifying the information associated with the animal. Specifically, the at least one imaging device may be operable to capture at least one image of the animal prior to slaughtering of the animal for identifying the information associated with the animal. For example, the at least one image of the animal may be captured using the magnetic resonance imaging (MRI) camera. In such example, the at least one image captured by the magnetic resonance imaging (MRI) camera may be indicative of attributes, such as weight, gender, age, and/or breed/race, of the animal. Optionally, the at least one image of the animal is captured at a slaughterhouse, and/or at a farm where the animal is reared.

According to an embodiment, identifying the information associated with the animal may comprise processing the captured at least one image of the animal. Specifically, the server arrangement may be operable to process the captured at least one image of the animal for identifying the information associated with the animal. In an embodiment, processing the captured at least one image may comprise comparing the captured at least one image with at least one pre-analysed image of an animal. Specifically, the "at least one pre-analysed image of the animal" may relate to an image that has been previously captured and whose attributes of the animal depicted therein are known. Additionally, the at least one pre-analysed image may serve as a reference for determining the attributes of the animal in the captured at least one image.

In another embodiment, processing the captured at least one image of the animal may comprise employing image processing algorithms to determine the attributes of the animal. In an example, the image processing algorithms may analyse distance between bone and skin of the animal to determine the amount of muscle with respect to bones in the animal. In another example, the image processing algorithms may estimate the weight of the animal based on height and conformation thereof. In yet another example, health of the animal may be estimated based on condition of skin and/or presence of any lesions or lacerations thereon.
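A toy illustration of such an attribute estimate is given below, assuming an invented linear relationship between measured height, a conformation score, and weight; the coefficients are hypothetical and carry no basis in this disclosure.

```python
# Hypothetical sketch: a toy linear model estimating live weight from
# image-derived height and conformation score. The coefficients are
# invented for illustration only.

def estimate_weight_kg(height_m: float, conformation_score: float) -> float:
    """Toy model: weight grows with height and with conformation score
    (e.g. a 0-5 muscling score derived from the image)."""
    return 250.0 * height_m + 40.0 * conformation_score

# Example: a 1.4 m tall animal with a conformation score of 3.0.
w = estimate_weight_kg(1.4, 3.0)
```

A production system would instead fit such a model to measured reference data, or use a more expressive regression over the full three-dimensional geometry.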

In an embodiment, the server arrangement may comprise a database for storing the pre-defined template of the classification system. Specifically, the database may be hardware, software, firmware, or a combination of these, suitable for storing the pre-defined template of the classification system.

In an embodiment, the method may comprise tagging the animal carcass to assist classification thereof by attaching a tag to the animal carcass, the tag comprising information associated with the animal. In such embodiment, the system may further comprise the tag attached to the animal carcass. For example, the tag may be pasted or attached to a part of the animal carcass. Further, the information associated with the animal may include information relating to body, gender, age, breed/race, breeders, farm, and so forth pertaining to the animal. Examples of the tag include, but are not limited to, RFID tags, QR codes, barcodes, and electronic identification tags.

In an embodiment, the method may further comprise detecting the tag, using the at least one imaging device. Specifically, the captured at least one image may depict the tag associated with the animal, and the tag may be detected upon sending the captured at least one image to the server arrangement. More specifically, the server arrangement may facilitate detection of the tag by recognizing the shape and distinct features thereof, and identify the information associated with the tag. According to an embodiment, the server arrangement may comprise the database for storing the information in the tag associated with the animal. Beneficially, the information in the tag associated with the animal may assist the server arrangement in determining the class of the animal carcass. In an example, the information in the tag may relate to the age and gender of the animal, and may assist the server arrangement in determining the class of the animal carcass according to the United States Department of Agriculture grading system. Optionally, the computing device may process the captured at least one image to detect the tag and identify the information associated therewith.
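The use of tag information to assist classification can be sketched as a simple lookup: once a tag identifier is read from the image, the stored animal record supplies attributes such as age, which age-based grading systems take into account. The tag identifiers, records, and the maturity grouping below are all hypothetical.

```python
# Illustrative sketch: looking up animal information by a detected tag
# identifier and deriving an age-based maturity group from it. All tag
# IDs, records, and the grouping rule are invented for illustration.

TAG_DATABASE = {
    "SE-0001": {"age_months": 22, "gender": "female", "breed": "Hereford"},
    "SE-0002": {"age_months": 40, "gender": "male", "breed": "Angus"},
}

def info_for_tag(tag_id: str) -> dict:
    """Return the stored record for a tag, or an empty dict if unknown."""
    return TAG_DATABASE.get(tag_id, {})

def maturity_group(age_months: int) -> str:
    """Toy maturity grouping: younger animals fall in group A."""
    return "A" if age_months < 30 else "B"
```

The server arrangement could then feed the maturity group, together with image-derived grades such as marbling, into the chosen grading system.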

According to an embodiment, the database may be further operable to store the at least one image of the animal carcass and/or the at least one image of the animal. In an embodiment, the database may be operable to associate the at least one image of the animal carcass and/or the at least one image of the animal with at least one of: the determined class of the animal carcass, the information associated with the animal, and the pre-defined template of the classification system. For example, the server arrangement may employ the at least one image of the animal carcass, associated with the determined class of the animal carcass, along with the pre-defined template of the classification system, to determine a class of another animal carcass. Further, the information stored in the database may provide a temporal log of the determined classes of the animal carcasses. Furthermore, such a temporal log of the determined classes of the animal carcasses may assist future classification of other animal carcasses.
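A minimal sketch of such an association, using an in-memory SQLite database, is shown below; the table and column names are assumptions made for illustration.

```python
# Sketch: associating a captured image with the determined class, the
# animal information, and the classification template in a database.
# Table and column names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE carcass_records (
        image_path       TEXT,
        determined_class TEXT,
        animal_info      TEXT,
        template_id      TEXT
    )
""")
conn.execute(
    "INSERT INTO carcass_records VALUES (?, ?, ?, ?)",
    ("images/carcass_0001.png", "R3", "age=24mo;gender=female", "EUROP"),
)
conn.commit()

# Retrieving the record later, e.g. for the temporal log of classes:
row = conn.execute(
    "SELECT determined_class, template_id FROM carcass_records"
).fetchone()
```

Querying such records over time yields the temporal log of determined classes described above.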

According to an embodiment, the method may comprise receiving the determined class of the animal carcass from the server arrangement. Further, the computing device is operable to receive the determined class of the animal carcass from the server arrangement. Additionally, the computing device may display the determined class of the animal carcass in the slaughterhouse. In an example, the computing device may comprise a digital screen to display the determined class of the animal carcass. In another example, the computing device may comprise a printer configured to print the determined class of the animal carcass. In an embodiment, the method may further comprise associating the information in the tag with the determined class of the animal carcass. Further, the computing device is operable to associate the information in the tag with the determined class of the animal carcass. Specifically, the computing device may link the information in the tag, which may be obtained via the server arrangement, with the received determined class of the animal carcass.

DETAILED DESCRIPTION OF THE DRAWINGS

Referring to FIG. 1, illustrated is a schematic illustration of an exemplary system 100 for classifying an animal carcass, in accordance with an embodiment of the present disclosure. As shown, the system 100 includes at least one imaging device, depicted as imaging devices 102 and 104, and a server arrangement 106 communicably coupled to the imaging devices 102 and 104 via a network 108. Referring to FIG. 2, illustrated is a schematic illustration of an exemplary system 200 for classifying an animal carcass, in accordance with another embodiment of the present disclosure. As shown, the system 200 includes at least one imaging device, depicted as imaging devices 202 and 204, a server arrangement 206 communicably coupled to the imaging devices 202 and 204 via a network 208, and a computing device 210 communicably coupled to the imaging devices 202 and 204, and the server arrangement 206.

Referring to FIG. 3, illustrated is a schematic illustration of an animal carcass 302, in accordance with an embodiment of the present disclosure. As shown, an axis A-A' depicts a longitudinal axis of the animal carcass 302. The animal carcass 302 may be cut along the longitudinal axis A-A'. Further, at least one imaging device (such as the imaging devices 102 and 104 of FIG. 1) is operable to capture at least one image of longitudinal halves of the animal carcass 302.

Referring to FIG. 4, illustrated is a schematic illustration of an animal carcass 402, in accordance with another embodiment of the present disclosure. As shown, axes B-B' and C-C' depict transverse axes of the animal carcass 402. The animal carcass 402 may be cut along the transverse axis B-B' and/or the transverse axis C-C'. Further, at least one imaging device (such as the imaging devices 102 and 104 of FIG. 1) is operable to capture at least one image of different transversal sections of the animal carcass 402.

Referring to FIG. 5, illustrated are steps of a method 500 for classifying an animal carcass (such as the animal carcasses 302 and 402 of FIG. 3 and FIG. 4), in accordance with an embodiment of the present disclosure. At step 502, at least one image of an animal carcass is captured using at least one imaging device. At step 504, the captured at least one image is sent to a server arrangement. At step 506, the at least one image is processed to determine a class of the animal carcass. The steps 502 to 506 are only illustrative and other alternatives can also be provided where one or more steps are added, one or more steps are removed, or one or more steps are provided in a different sequence without departing from the scope of the claims herein.

Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "have", "is" used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural.