

Title:
AN AUTONOMOUS MOBILE CONCRETE MIXER TO AUTOMATICALLY RECOGNIZE AND IDENTIFY MATERIALS OF VARIOUS TYPE
Document Type and Number:
WIPO Patent Application WO/2023/203569
Kind Code:
A1
Abstract:
The present disclosure discloses an autonomous mobile concrete mixer to automatically recognize and identify materials of various type. The mixer includes a memory (110) adapted to store at least one recipe including an order of materials and a weight of each material for mixing, a keypad (112) adapted to receive inputs selecting one of the at least one recipe stored in the memory (110), a loading arm (308) including a bucket (210) to collect the material from a materials heap, a camera (102) adapted to capture an image of the heap and the bucket (210) of the loading arm (308) when the bucket (210) is in a predefined vicinity of the material heap, and a controller (108) configured to receive the captured image from the camera (102), wherein the controller (108) is configured to identify the material from the image.

Inventors:
- VIJAY (IN)
Application Number:
PCT/IN2023/050184
Publication Date:
October 26, 2023
Filing Date:
February 28, 2023
Export Citation:
Assignee:
AJAX ENGINEERING PRIVATE LTD (IN)
International Classes:
B28C9/04; B01F29/63; B01F33/502; B60P3/16
Foreign References:
GB 2392502 A (2004-03-03)
CN 209718165 U (2019-12-03)
Attorney, Agent or Firm:
K LAW (KRISHNAMURTHY AND CO.) (IN)
Claims:

We Claim:

1. An autonomous mobile concrete mixer to automatically recognize and identify materials of various type, the mixer comprising: a memory (110) adapted to store at least one recipe including an order of materials and a weight of each material for mixing; a keypad (112) adapted to receive inputs selecting one of the at least one recipe stored in the memory (110); a loading arm (308) including a bucket (210) to collect the material from a materials heap; a camera (102) adapted to capture an image of the heap and the bucket (210) of the loading arm (308) when the bucket (210) is in a predefined vicinity of the material heap; and a controller (108) configured to receive the captured image from the camera (102), wherein the controller (108) is configured to identify the material from the image, wherein when the identified material is identified as a different material in the order of materials of the selected recipe, the controller (108) transmits an error message to a display unit (112) of the mixer, wherein when the identified material is identified as the material in the order of materials of the selected recipe, the loading arm (308) collects the material from the material heap into the bucket (210) attached to the loading arm (308), wherein the loading arm (308) includes a weigh batching system consisting of pressure transducers fitted in the loading arm, or load cells fitted on the loading arm, which are connected to a controller of the weigh batching system, wherein the weigh batching system measures a weight of the material collected into the bucket.

2. The autonomous mobile concrete mixer as claimed in claim 1, wherein the loading arm (308) includes a proximity switch (310) or an inclinometer that triggers the weigh batching system at a suitable predefined position of the loading arm (308), wherein the controller of the weigh batching system reads the pressure and converts it into a material weight, or reads the load cell value and converts it into a material weight value, and wherein an outer surface of a bottom side (302) of the bucket (210) includes a flange (304) perpendicular to the bottom side (302) of the bucket (210), wherein the loading arm (308) is connected to a lower portion of the flange (304), and wherein the pressure transducer is connected to the flange (304) of the bucket (210), wherein the pressure transducer measures a weight of the material collected into the bucket when the open face of the bucket (210) is in a direction opposite to the ground.

3. The autonomous mobile concrete mixer as claimed in claim 1, comprising a drum (206) to mix the collected materials, wherein the bucket (210) puts the materials collected from the material heap into the drum (206), wherein when the converted material weight is less than the weight included in the recipe for the particular material, the controller (108) transmits a message indicating "load more" to the display unit (112) of the mixer, when the converted material weight is equal to the weight included in the recipe for the particular material, the controller (108) transmits a message indicating "move to next material" to the display unit (112) of the mixer, and when the converted material weight is more than the weight included in the recipe for the particular material, the controller (108) transmits an error message indicating "weight is more" to the display unit (112) of the mixer.
4. The autonomous mobile concrete mixer as claimed in claim 1, wherein the camera (102) has a wide field of view covering part of the bucket (210) and the material heap when the bucket is in the predefined vicinity of the material heap, wherein the camera (102) is positioned on a back side of a driver cabin, wherein the camera (102) is configured to capture the image at a constant time interval, and wherein the captured images are transmitted to the memory (110), wherein the controller (108) classifies the images into one of a plurality of classes of images, wherein each of the plurality of classes of images belongs to a particular material type.

5. The autonomous mobile concrete mixer as claimed in claim 4, wherein the controller (108) identifies an image of the captured images where the bucket (210) touches the material heap and classifies the captured image into one of the plurality of classes of images, and wherein to classify the captured image into one of the plurality of classes of images, the controller (108) is configured to: identify a first contour of the image, wherein to identify the contour, the preprocessor locates points in the image having a constant gray level; identify shapes of material and texture features from the image to classify the object based on a knowledge vector representation of the image, wherein identifying the shape and texture features includes: determining basic patterns in the image, and counting the determined patterns using a plurality of predefined patterns to determine a frequency of the image, wherein the frequencies of all the patterns are represented as an array or vector; and classify the material by feeding the vector into a neural network.

6. The autonomous mobile concrete mixer as claimed in claim 5, wherein the controller (108) detects all objects from the image in one pass using a single shot detector that includes at least one convolution layer, and wherein the controller (108) determines whether the image is classified in a class that matches the class corresponding to the order of materials of the selected recipe, and in response to a determination that the image is classified in a class that is different from the class corresponding to the order of materials of the selected recipe, the controller (108) transmits the error message to the display unit of the mixer.

7. The autonomous mobile concrete mixer as claimed in claim 5, comprising a printer connected with the controller (108), wherein the controller (108) is configured to receive the order and weight of materials collected into the drum, and the controller is adapted to transmit the received order and weight of materials to the printer so that the printer can print a ledger report of the mixed material.

8. The autonomous mobile concrete mixer as claimed in claim 1, including a second camera (104) positioned in such a way that the lens of the camera covers the material heap and a part of the bucket (210), wherein a view of the camera is different from a view of the second camera, wherein in case obstacles are present in the images captured by the camera, identification of material is done based on images captured by the second camera (104), and wherein the camera (102) is provided on at least one of: the top of the cabin, inside the cabin, the left-hand side of the drum support frame, the right-hand side of the drum support frame, or the center of the drum support frame.
9. The autonomous mobile concrete mixer as claimed in claim 8, wherein the camera (102) is placed on the left side of the drum support frame and the second camera (104) is placed on the right side of the drum support frame, or wherein the camera (102) is placed on the left side of the drum support frame and the second camera (104) is placed on top of the cabin, or wherein the camera (102) is placed on the left side of the drum support frame and the second camera (104) is placed inside the cabin, or wherein the camera (102) is placed on the center of the drum support frame and the second camera (104) is placed on top of the cabin, or wherein the camera (102) is placed on the center of the drum support frame and the second camera (104) is placed inside the cabin.

10. The autonomous mobile concrete mixer as claimed in claim 1, wherein the memory stores a plurality of images of material heaps, wherein the microcontroller deploys a deep neural network to: train on the plurality of images of material heaps, wherein each of the plurality of images is trained based on a plurality of layers of the neural network, and validate each of the trained images against a predefined category of images.

Description:
AN AUTONOMOUS MOBILE CONCRETE MIXER TO AUTOMATICALLY RECOGNIZE AND IDENTIFY MATERIALS OF VARIOUS TYPE

TECHNICAL FIELD

[0001] The present subject matter described herein is in the field of concrete mixing. In particular, the present subject matter relates to an autonomous mobile concrete mixer to automatically recognize and identify materials of various type.

BACKGROUND OF THE INVENTION

[0002] A conventional mobile concrete mixer is a mobile self-loading, batching, and concrete production machine equipped with a mixing drum, a loading arm with a bucket for charging materials, and various other components such as a water tank, pump, and water meter for the production and transport of high-quality concrete. Such a conventional mobile concrete mixer offers complete independence for the production and transportation of concrete.

[0003] Further, conventional concrete mixers have a bucket attached to the loading arm to pick various materials (such as aggregate stones of sizes 5 mm, 10 mm, 20 mm, 40 mm, etc., river sand, manufactured sand, cement, micro silica, fly ash, etc.) from heaps on the ground and load them into the drum in sequential order for concrete production. The operator of the concrete mixer visually identifies the material, moves the mobile mixer to the identified material heap, picks the material, and loads it into the drum in sequential order for concrete production.

[0004] The major challenge with conventional mobile concrete mixers is that visualization and identification of the right material must be done by the operator every time. A wrong pick of material can produce bad concrete and have serious consequences for the concreted structure. It can also result in rejection of the concrete during quality checks, causing loss of time, material, and money. It is also possible to pick the wrong material deliberately and thus tamper with the quality to reduce cost and save money.

[0005] Also, with conventional mobile concrete mixers it is essential that the operator picks the right material every time, as per the recipe and in sequence. Further, as the process is manual, the total production time becomes higher.

[0006] In one solution, a transportable concrete mixing plant is proposed, comprising a number of mixing plant components which can be connected detachably to one another and which during transport are accommodated in a number of containers (C1, C2, C3, C4, C5, C6, C7, C8, C9, C10, C11, C12, C13), at least some of these containers, preferably all of them, being standard shipping containers or being capable of being combined into standard shipping containers which can be transported in a standard way in accordance with international regulations and which, when the mixing plant is operating, serve as a load-bearing structure for mixing plant components and/or containers for concrete raw materials.

[0007] However, the aforementioned state of the art fails to disclose any mechanism to automatically recognize and identify materials of various type. Therefore, there is a need to provide an autonomous mobile concrete mixer to automatically recognize and identify materials of various type.

OBJECTS OF THE DISCLOSURE

[0008] It forms an object of the present disclosure to overcome the aforementioned and other drawbacks/limitations in the existing solutions available in the form of related prior arts.

[0009] It is a primary object of the present disclosure to provide an autonomous mobile concrete mixer to automatically recognize and identify materials of various type.

[0010] It is another object of the present disclosure to provide an autonomous mobile concrete mixer that eliminates human intervention in picking and batching material for the production of quality concrete and also increases productivity.

[0011] It is another object of the present disclosure to provide an autonomous mobile concrete mixer that generates an alert in case of an out-of-order pick of material.

[0012] It is another object of the present disclosure to provide an autonomous mobile concrete mixer that generates an alert when the amount of material to be mixed is different from the prescribed amount.

[0013] These and other objects and advantages of the present subject matter will be apparent to a person skilled in the art after consideration of the following detailed description taken into consideration with accompanying drawings in which preferred embodiments of the present subject matter are illustrated.

SUMMARY

[0014] A solution to one or more drawbacks of existing technology and additional advantages are provided through the present disclosure. Additional features and advantages are realized through the technicalities of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered to be a part of the claimed disclosure.

[0015] The present disclosure offers a solution in the form of an autonomous mobile concrete mixer to automatically recognize and identify materials of various type. The mixer includes a memory, a keypad, a display, a loading arm, a camera, and a controller. The memory is adapted to store at least one recipe including an order of materials and a weight of each material for mixing. The keypad is adapted to receive inputs selecting one of the at least one recipe stored in the memory. The loading arm includes a bucket to collect the material from a materials heap. The camera is adapted to capture an image of the heap and the bucket of the loading arm when the bucket is in a predefined vicinity of the material heap. The controller is configured to receive the captured image from the camera, wherein the controller is configured to identify the material from the image, wherein when the identified material is identified as a different material in the order of materials of the selected recipe, the controller transmits an error message to a display unit of the mixer, wherein when the identified material is identified as the material in the order of materials of the selected recipe, the loading arm collects the material from the material heap into the bucket attached to the loading arm, wherein the loading arm includes a weigh batching system consisting of pressure transducers fitted in the loading arm hydraulic circuit, or load cells fitted on the loading arm, which are connected to a controller of the weigh batching system, wherein the weigh batching system measures a weight of the material collected into the bucket.

[0016] In an aspect of the invention, the loading arm includes a proximity switch or inclinometer that triggers the weigh batching system at a suitable predefined position of the loading arm, wherein a controller of the weigh batching system reads the pressure and converts it into a material weight, or reads the load cell value and converts it into a material weight value.
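The pressure-to-weight conversion can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the disclosed implementation: real weigh batching systems are calibrated empirically and subtract the tare weight of the empty bucket, and the linear force-balance model, function name, and parameters below are all hypothetical.

```python
# Hypothetical sketch of the pressure-to-weight conversion; not from the
# disclosure. Assumes a simple force balance on the lift cylinder:
# force = pressure * piston area, mass = force / g, less the empty-bucket tare.
def pressure_to_weight_kg(pressure_pa: float, piston_area_m2: float,
                          tare_kg: float = 0.0, g: float = 9.81) -> float:
    force_n = pressure_pa * piston_area_m2  # F = P * A
    return force_n / g - tare_kg            # m = F / g, minus empty bucket
```

A load-cell variant would apply an analogous calibration (scale factor and tare) to the raw cell reading instead of a pressure.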

[0017] In an aspect of an invention, an outer surface of a bottom side of the bucket includes a flange perpendicular to the bottom side of the bucket, wherein the loading arm is connected to a lower portion of the flange by a hinge.

[0018] In an aspect of the invention, the pressure transducer is connected to the flange of the bucket, wherein the pressure transducer measures a weight of the material collected into the bucket when the open face of the bucket is in a direction opposite to the ground.

[0019] In an aspect of the invention, the mixer includes a drum to mix the collected materials, wherein the bucket puts the materials collected from the material heap into the drum.

[0020] In an aspect of the invention, when the converted material weight is less than the weight included in the recipe for the particular material, the controller transmits a message indicating "load more" to a display unit of the mixer.

[0021] In an aspect of the invention, when the converted material weight is equal to the weight included in the recipe for the particular material, the controller transmits a message indicating "move to next material" to a display unit of the mixer.

[0022] In an aspect of the invention, when the converted material weight is more than the weight included in the recipe for the particular material, the controller transmits an error message indicating "weight is more" to a display unit of the mixer.
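The three weight comparisons above reduce to a single check of the converted weight against the recipe weight. A minimal sketch follows; the function name and the tolerance parameter are assumptions not found in the disclosure, which specifies only the three display messages.

```python
# Illustrative only: maps the converted material weight to the display
# message described in the disclosure. The tolerance band is an assumption;
# the disclosure compares weights with strict less/equal/more.
def batching_message(measured_kg: float, recipe_kg: float,
                     tolerance_kg: float = 0.0) -> str:
    if measured_kg < recipe_kg - tolerance_kg:
        return "LOAD MORE"
    if measured_kg > recipe_kg + tolerance_kg:
        return "ERROR: WEIGHT IS MORE"
    return "MOVE TO NEXT MATERIAL"
```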

[0023] In an aspect of the invention, the camera has a wide field of view covering part of the bucket and the material heap when the bucket is in the predefined vicinity of the material heap, wherein the camera is positioned on a back side of a driver cabin.

[0024] In an aspect of the invention, the camera is configured to capture the image at a constant time interval, wherein the captured images are transmitted to the memory, wherein the controller classifies the images into one of a plurality of classes of images, wherein each of the plurality of classes of images belongs to a particular material type.

[0025] In an aspect of the invention, the controller identifies an image of the captured images where the bucket touches the material heap and classifies the captured image into one of the plurality of classes of images.

[0026] In an aspect of the invention, to classify the captured image into one of the plurality of classes of images, the controller identifies a first contour of the image, wherein to identify the contour, the preprocessor locates points in the image having a constant gray level; identifies shapes of material and texture features from the image to classify the object based on a knowledge vector representation of the image, wherein identifying the shape and texture features includes determining basic patterns in the image and counting the determined patterns using a plurality of predefined patterns to determine a frequency of the image, wherein the frequencies of all the patterns are represented as an array or vector; and classifies the material by feeding the vector into a neural network.

[0027] In an aspect of the invention, the controller detects all objects from the image in one pass using a single shot detector that includes at least one convolution layer.

[0028] In an aspect of the invention, the controller determines whether the image is classified in a class that matches the class corresponding to the order of materials of the selected recipe; in response to a determination that the image is classified in a class that is different from the class corresponding to the order of materials of the selected recipe, the controller transmits the error message to the display unit of the mixer.

[0029] In an aspect of the invention, the mixer includes a printer connected with the controller, wherein the controller is configured to receive the order and weight of materials collected into the drum, and the controller is adapted to transmit the received order and weight of materials to the printer so that the printer can print a ledger report of the mixed material.

[0030] In an aspect of the invention, the mixer includes a second camera positioned in such a way that the lens of the camera covers the material heap and a part of the bucket, wherein a view of the camera is different from a view of the second camera, and wherein, in case obstacles are present in the images captured by the camera, identification of material is done based on images captured by the second camera.

[0031] In an aspect of the invention, the camera is connected with a server and adapted to transmit the live feed to the server.

[0032] In an aspect of the invention, the mixer includes a hub or a switch, wherein the camera is connected to the hub by a wired connection and the controller is connected to the switch by a wired connection.

[0033] In an aspect of the invention, the camera is provided on at least one of: the top of the cabin, inside the cabin, the left-hand side of the drum support frame, the right-hand side of the drum support frame, or the center of the drum support frame.

[0034] In an aspect of the invention, the camera is placed on the left side of the drum support frame and the second camera is placed on the right side of the drum support frame, or the first camera is placed on the left side of the drum support frame and the second camera is placed on top of the cabin, or the first camera is placed on the left side of the drum support frame and the second camera is placed inside the cabin, or the first camera is placed on the center of the drum support frame and the second camera is placed on top of the cabin, or the first camera is placed on the center of the drum support frame and the second camera is placed inside the cabin.

[0035] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.

BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS

[0036] It is to be noted, however, that the appended drawings illustrate only typical embodiments of the present subject matter and are therefore not to be considered limiting of its scope, for the present disclosure may admit to other equally effective embodiments. The detailed description is described with reference to the accompanying figures. In the figures, a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems or methods or structures in accordance with embodiments of the present subject matter are now described, by way of example, and with reference to the accompanying figures, in which:

[0037] Fig. 1 illustrates a block diagram of electronic components of an autonomous mobile concrete mixer to automatically recognize and identify materials of various type according to the present disclosure;

[0038] Fig. 2 illustrates an autonomous mobile concrete mixer to automatically recognize and identify materials of various type according to the present disclosure;

[0039] Fig. 3 illustrates an autonomous mobile concrete mixer with bucket to automatically recognize and identify materials of various type according to the present disclosure;

[0040] Fig. 4 illustrates a top view of an autonomous mobile concrete mixer to automatically recognize and identify materials of various type according to the present disclosure;

[0041] Fig. 5 and 6 illustrate tables which disclose different placements of the camera and the second camera with respect to different mixer types, i.e. front or rear loading mixers, and different positions of the cabin, i.e. left or right, respectively;

[0042] Fig. 7 illustrates a table which explains test results for calculating the accuracy of classification of images by a 20-layered AI model; and

[0043] Fig. 8 illustrates a table which explains test results for calculating the accuracy of classification of images by a 20-layered AI model.

[0044] The figures depict embodiments of the present subject matter for illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.

DETAILED DESCRIPTION OF INVENTION

[0045] The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.

[0046] It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.

[0047] It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.

[0048] Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

[0049] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.

[0050] Hereinafter, a description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the present disclosure.

[0051] Fig. 1 illustrates a block diagram of electronic components of an autonomous mobile concrete mixer to automatically recognize and identify materials of various type according to the present disclosure. The autonomous mobile concrete mixer 100 includes a camera 102, a second camera 104, a switch/hub 106, a controller 108, a memory 110, a display unit 112, and a keypad unit 114. The camera 102 is connected with a server and adapted to transmit the live feed to the server. Further, the camera 102 is connected to the hub 106 by a wired connection and the controller 108 is connected to the switch 106 by a wired connection. The memory is connected with the controller and is adapted to store the images captured from the camera. Further, the memory is adapted to store a plurality of recipes including an order and weight of materials to be mixed. Further, the keypad 114 is adapted to receive an input selecting one of the plurality of recipes. Further, the display unit is connected to the controller and is adapted to display a plurality of messages.

[0052] Fig. 2 illustrates an autonomous mobile concrete mixer to automatically recognize and identify materials of various type according to the present disclosure. Fig. 2 illustrates the mixer including a cabin 202, the camera 102, a drum 206, and a bucket 210, and also illustrates a material heap 208. The mixer also includes a loading arm 308 (as illustrated in Fig. 3) including the bucket 210 to collect the material from the materials heap. The camera 102 is adapted to capture an image of the heap and the bucket 210 of the loading arm 308 when the bucket 210 is in a predefined vicinity of the material heap. The controller 108 is configured to receive the captured image from the camera 102 and to identify the material from the image, wherein when the identified material is identified as a different material in the order of materials of the selected recipe, the controller 108 transmits an error message to a display unit 112 of the mixer. Further, when the identified material is identified as the material in the order of materials of the selected recipe, the loading arm 308 collects the material from the material heap into the bucket 210 attached to the loading arm 308, wherein the loading arm 308 includes a weigh batching system consisting of pressure transducers fitted in the loading arm, or load cells fitted on the loading arm, which are connected to a controller of the weigh batching system, wherein the weigh batching system measures a weight of the material collected into the bucket. Further, the drum 206 mixes the collected materials, and the bucket 210 puts the materials collected from the material heap into the drum 206. Further, the camera 102 has a wide field of view covering part of the bucket 210 and the material heap when the bucket is in the predefined vicinity of the material heap, wherein the camera 102 is positioned on a back side of a driver cabin.

[0053] In an aspect, the camera (102) is configured to capture the image at a constant time interval, wherein the captured images are transmitted to the memory (110), wherein the controller (108) classifies the images into one of a plurality of classes of images, wherein each of the plurality of classes of images belongs to a particular material type.

[0054] In an aspect, the controller (108) identifies an image of the captured images where the bucket (210) touches the material heap and classifies the captured image into one of the plurality of classes of images.

[0055] In an aspect, to classify the captured image into one of the plurality of classes of images, the controller (108) identifies a first contour of the image, wherein to identify the contour, the preprocessor locates points in the image having a constant gray level; identifies shapes of material and texture features from the image to classify the object based on a knowledge vector representation of the image, wherein identifying the shape and texture features includes determining basic patterns in the image and counting the determined patterns using a plurality of predefined patterns to determine a frequency of the image, wherein the frequencies of all the patterns are represented as an array or vector; and classifies the material by feeding the vector into a neural network.
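The pattern-frequency vector described above resembles a local binary pattern (LBP) histogram, a standard texture feature; the sketch below is a plain-Python stand-in under that assumption. The 8-bit neighbourhood coding, the function name, and the 256-bin size are illustrative choices, not taken from the disclosure. The resulting vector is what would be fed to the neural network.

```python
# Illustrative LBP-style pattern-frequency vector; all details are assumptions.
# Each interior pixel's 3x3 neighbourhood is reduced to an 8-bit code (one bit
# per neighbour, set when the neighbour >= the centre), and the histogram of
# codes over the image forms the texture feature vector.
def lbp_histogram(gray):
    """gray: 2-D list of grayscale values; returns a 256-bin frequency vector."""
    h, w = len(gray), len(gray[0])
    hist = [0] * 256
    # offsets of the 8 neighbours, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = gray[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if gray[y + dy][x + dx] >= centre:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

On real heap images such a histogram would be computed per captured frame and normalized before classification.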

[0056] In an aspect, the controller (108) detects all objects from the image in one pass using a single shot detector that includes at least one convolution layer.

[0057] In an aspect, the controller (108) determines whether the image is classified into the class corresponding to the next material in the order of materials of the selected recipe. In response to a determination that the image is classified into a class different from the class corresponding to the next material in the order of materials of the selected recipe, the controller (108) transmits the error message to the display unit of the mixer.
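For illustration only, the order check described above can be sketched as follows. The recipe structure, material names, and message strings are hypothetical; the specification only requires that a mismatch against the recipe order produce an error message.

```python
# Hypothetical recipe: ordered list of materials with target weights.
RECIPE = [
    {"material": "20mm_aggregate", "weight_kg": 500},
    {"material": "m_sand", "weight_kg": 300},
    {"material": "cement", "weight_kg": 200},
]

def check_material(identified, recipe, step):
    """Compare the classified material against the recipe's expected order."""
    expected = recipe[step]["material"]
    if identified != expected:
        # Mismatch: the controller would send this to the display unit.
        return f"ERROR: expected {expected}, camera identified {identified}"
    return f"OK: collect {recipe[step]['weight_kg']} kg of {expected}"
```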

[0058] In an aspect, the mixer includes a printer connected with the controller (108), wherein the controller (108) is configured to receive the order and weight of the materials collected into the drum, and the controller is adapted to transmit the received order and weight of materials to the printer so that the printer can print a ledger report of the mixed material.

[0059] In an aspect, the memory stores a plurality of images of material heaps, wherein the controller deploys a deep neural network to train on the plurality of images of material heaps, wherein each of the plurality of images is trained through a plurality of layers of the neural network, and each of the trained images is validated against a predefined category of images.

[0060] In an aspect, a deep neural network is applied to classify the image. To classify the images, different images of the same object undergo training with multiple hidden layers. The input is also fixed to a 224x224 RGB image. The convolution process is configured with MobileNet, as it produces efficient convolutional neural networks. MobileNet is used as the 'trainer' as it consists of small, efficient deep neural networks (DNN). MobileNet can be configured in two ways: the input image resolution and the size of the model within MobileNet. Further, the image classification is implemented using TensorFlow. Classification of images starts by collecting images of the same type of object. After that, the DNN is applied to train the model, followed by validation or testing; if the output is not the image of the intended object, the process starts over again from the DNN training. The process ends after the output is classified into the right type of object. Further, the classification starts with inserting sets of object images as input. After that, all of these input images undergo training with the deep neural network (DNN). The DNN trains on all of these sets of data until the system recognizes each of the input images. Then, classification occurs when each image is tested as to whether it belongs to any of the predetermined types of objects/materials.
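A minimal sketch of the described setup, using the TensorFlow/Keras MobileNet application with the fixed 224x224 RGB input named in the text. The number of classes, the `weights=None` random initialization (to keep the sketch self-contained), and the training configuration are assumptions, not the patented implementation.

```python
import tensorflow as tf

def build_material_classifier(num_classes):
    """MobileNet backbone with a small classification head, as sketched
    from the description: fixed 224x224 RGB input, MobileNet convolutions,
    implemented in TensorFlow."""
    base = tf.keras.applications.MobileNet(
        input_shape=(224, 224, 3),
        include_top=False,
        weights=None,        # random init for this sketch; real use: pretrained
        pooling="avg")
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    return tf.keras.Model(base.input, out)

# e.g. four material classes: 10 mm, 20 mm, 40 mm aggregate and M Sand
model = build_material_classifier(4)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Training would then call `model.fit` on the collected image sets, repeating until validation accuracy is acceptable, matching the train/validate/repeat loop described above.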

[0061] Fig. 3 illustrates an autonomous mobile concrete mixer with a bucket to automatically recognize and identify materials of various types according to the present disclosure. As illustrated in Fig. 3, the loading arm (308) includes a proximity switch (310) or inclinometer that triggers the weigh batching system at a suitable predefined position of the loading arm (308), wherein the controller of the weigh batching system reads the pressure and converts it into a material weight value, or reads the load cell value and converts it into a material weight value.
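The pressure-to-weight conversion can be sketched as below. The piston area, tare weight, and linear force model are illustrative assumptions; the actual conversion used by the weigh batching system is machine-specific and not detailed in the specification.

```python
def pressure_to_weight_kg(pressure_pa, piston_area_m2, tare_kg, g=9.81):
    """Convert a hydraulic cylinder pressure reading into material weight.

    Assumed model: supported force = pressure x piston area; dividing by g
    gives the supported mass, from which the empty-bucket tare is subtracted.
    """
    force_n = pressure_pa * piston_area_m2
    return force_n / g - tare_kg
```

For example, with a hypothetical 0.01 m2 piston and a 50 kg tare, a 200 kPa reading would correspond to roughly 154 kg of material in the bucket.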

[0062] In an aspect, an outer surface of a bottom side (302) of the bucket (210) includes a flange (304) perpendicular to the bottom side (302) of the bucket (210), wherein the loading arm (308) is connected to a lower portion of the flange (304).

[0063] In an aspect, the pressure transducer is connected to the flange (304) of the bucket (210), wherein the pressure transducer measures a weight of the material collected into the bucket when the open face of the bucket (210) is in a direction opposite to the ground.

[0064] In an aspect, when the converted material weight is less than the weight included in the recipe for the particular material, the controller (108) transmits a message indicating "load more" to the display unit (112) of the mixer.

[0065] In an aspect, when the converted material weight is equal to the weight included in the recipe for the particular material, the controller (108) transmits a message indicating "move to next material" to the display unit (112) of the mixer.

[0066] In an aspect, when the converted material weight is more than the weight included in the recipe for the particular material, the controller (108) transmits an error message indicating that the weight is more than required to the display unit (112) of the mixer.
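The three display messages above form a simple comparison against the recipe weight and can be sketched as follows. The tolerance band is an assumption added for practicality (real scales never match a target exactly); the specification itself compares against the exact recipe weight, and the message strings are illustrative.

```python
def batching_message(measured_kg, target_kg, tolerance_kg=1.0):
    """Map a measured bucket weight to the display message for this step."""
    if measured_kg < target_kg - tolerance_kg:
        return "LOAD MORE"                       # under the recipe weight
    if measured_kg > target_kg + tolerance_kg:
        return "ERROR: WEIGHT EXCEEDS RECIPE"    # over the recipe weight
    return "MOVE TO NEXT MATERIAL"               # within tolerance of target
```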

[0067] Fig. 4 illustrates a top view of an autonomous mobile concrete mixer to automatically recognize and identify materials of various types according to the present disclosure. Fig. 4 illustrates one of many exemplary positions of the camera and the second camera. In an aspect, the second camera (104) is positioned in such a way that its lens covers the material heap and a part of the bucket (210), wherein a view of the camera is different from a view of the second camera, and wherein, in case obstacles are present in the images captured by the camera, identification of the material is done based on the images captured by the second camera (104).

[0068] In an aspect, the camera (102) is provided on at least one of: the top of the cabin, inside the cabin, the left-hand side of the drum support frame, the right-hand side of the drum support frame, or the center of the drum support frame.

[0069] In an aspect, the camera (102) is placed on the left side of the drum support frame and the second camera (104) is placed on the right side of the drum support frame; or the camera (102) is placed on the left side of the drum support frame and the second camera (104) is placed on top of the cabin; or the camera (102) is placed on the left side of the drum support frame and the second camera (104) is placed inside the cabin; or the camera (102) is placed at the center of the drum support frame and the second camera (104) is placed on top of the cabin; or the camera (102) is placed at the center of the drum support frame and the second camera (104) is placed inside the cabin.

[0070] Figs. 5 and 6 illustrate tables which disclose different placements of the camera and the second camera with respect to different mixer types, i.e., front or rear loading mixers, and different positions of the cabin, i.e., left or right.

[0071] Experimental Data

[0072] For the initial development, about 300 images of each material type were provided for AI model training. Further, images of materials of sizes 10 mm, 20 mm, and 40 mm, as well as M Sand, were provided. Images of the material were taken from fixed distances of 1.5 m and 3 m, equivalent to the camera positions on the drum base frame and in the cabin near the windshield with the bucket at the material picking position. For the experimental results, two AI models, 20 layered and 14 layered respectively, were selected. Of the images of the 4 aggregates provided, 1045 were used to train both of the above AI models, and 131 images were used to test them. In an aspect, the 1.5 m and 3 m distance images were used together for training and testing. After training, 117 out of 131 test images were predicted accurately by the 20 layered AI model, which equals 89% accuracy. Also, 120 out of 131 test images were predicted accurately by the 14 layered AI model, which equals 91% accuracy. The accuracy level for images taken nearer to the material (drum base frame) was higher, above 92%, and that from inside the cabin was about 87% (for the 14 layered AI model). With such a small sample set and a combined data set of 1.5 m and 3 m distance images, the accuracy level achieved is found to be highly encouraging.
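The reported accuracy figures follow directly from the stated counts; the arithmetic below reproduces them (the source rounds the exact ratios, 89.3% and 91.6%, down to whole percentages).

```python
# Test-set results reported in the experimental data above.
total_test_images = 131
correct_20_layer = 117   # 20 layered AI model
correct_14_layer = 120   # 14 layered AI model

acc_20 = correct_20_layer / total_test_images  # ~0.893, reported as 89%
acc_14 = correct_14_layer / total_test_images  # ~0.916, reported as 91%
```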

[0073] A table illustrated in Fig. 7 explains test results for calculating the accuracy of classification of images by the 20 layered AI model. A table illustrated in Fig. 8 explains test results for calculating the accuracy of classification of images by the 14 layered AI model.


[0075] It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation, no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to disclosures containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. Also, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations or two or more recitations). 
Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general, such construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances, where a convention analogous to “at least one of A, B, or C, etc.” is used, in general, such construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”

[0076] It will be further appreciated that functions or structures of a plurality of components or steps may be combined into a single component or step, or the functions or structures of one step or component may be split among plural steps or components. The present disclosure contemplates all of these combinations. Unless stated otherwise, dimensions and geometries of the various structures depicted herein are not intended to be restrictive of the disclosure, and other dimensions or geometries are possible. Also, while a feature of the present disclosure may have been described in the context of only one of the illustrated embodiments, such feature may be combined with one or more other features of other embodiments, for any given application. It will also be appreciated from the above that the fabrication of the unique structures herein and the operation thereof also constitute methods in accordance with the present disclosure. The present disclosure also encompasses intermediate and end products resulting from the practice of the methods herein. The use of “comprising” or “including” also contemplates embodiments that “consist essentially of” or “consist of” the recited feature.
