Title:
SYSTEM FOR VISION-BASED SELF-DECISIVE PLANETARY HAZARD FREE LANDING OF A SPACE VEHICLE
Document Type and Number:
WIPO Patent Application WO/2024/052928
Kind Code:
A1
Abstract:
The present invention relates to a system for vision based self-decisive planetary hazard free landing of a space vehicle. The invention discloses the method of finding correlations between descent images captured by cameras and the system state parameters to predict the thrust control command for spacecraft descent guidance and navigation. For hazard-free landing, a hazard detection system (105) is activated, wherein the terrain under the current field of view is classified into craters, boulders, and plane surface, which is then used by the thrust prediction utility (104) to take retargeting decisions. The module (103) discloses the method of generating a synthetic dataset of planetary images and descent trajectory state parameters through an agent-based image generative platform for an unknown planetary environment. Multi-variate deep learning models are used to predict the control actions in the form of thrust commands (106) by combining the results of models (103) and (105).

Inventors:
PATIL DIPTI (IN)
BORSE JANHAVI (IN)
KUMAR VINOD (IN)
Application Number:
PCT/IN2023/050811
Publication Date:
March 14, 2024
Filing Date:
August 28, 2023
Assignee:
PATIL DIPTI (IN)
BORSE JANHAVI (IN)
KUMAR VINOD (IN)
International Classes:
G06F18/20; G06N3/08
Foreign References:
CN107748895A (2018-03-02)
CN107817820A (2018-03-20)
US20200363813A1 (2020-11-19)
Claims:
CLAIMS

We claim:

1. A system for vision based self-decisive planetary hazard free landing of a space vehicle; the said system comprising:

a. a camera (100) mounted facing downward at the base of the space vehicle for capturing descent images (101) of the underlying planetary region of interest so as to ascertain the inputs for the said system; computed thrust commands, called control actions (106), are the outcomes from the said system and are necessary for navigating the space vehicle to the next desired position;

b. an ILP (Image and Landing Parameters) correlation model (103) configured to generate a trained model (206) to estimate the current state of the dynamic system, which is required to ascertain the current lander position, orientation and velocity along with altitude information, called dynamic system parameters; wherein the ILP correlation model (103) computes the correlation between the dynamic system parameters and the descent images from the space vehicle camera (100) by training a deep learning model using transfer learning; wherein the ILP correlation model (103) building steps comprise:
i. an agent based synthetic image generator platform (200) in which the space vehicle with camera (207) is modelled in the required planetary environment; wherein images (201) are captured through the camera (207) to create a synthetic image database (202);
ii. system state parameters (204) are captured through the agent based synthetic image generator platform (200), attached to the captured images (201) and stored as labels in an image state labels database (205); wherein the agent based synthetic image generator platform (200) is used to generate the synthetic image database (202) and the image state labels database (205) for the planetary environment; wherein this synthetic data generation platform (200) enables the deep learning model to be trained in the absence of actual data of the planet;
iii. a deep learning module with a multivariate convolutional neural network (CNN) regression model for image correlation is trained using the synthetic image database (202) and the image state labels database (205) to obtain the ILP correlation model (206) for state estimation;

c. a trajectory state prediction model (104) configured to generate a trained trajectory state prediction model (303) to predict thrust commands in the form of control actions (106) to ascertain the autonomous guidance for further space vehicle navigation using transfer learning on image inputs; wherein the trajectory state prediction model (104) building steps comprise:
i. the image database (202) and the image state labels database (205) are combined to generate consecutive images with labels (300); wherein these images (300) are used to train the deep learning next state prediction module (301);
ii. the most probable state (302) is predicted using the deep learning next state prediction module (301) and given as input, along with the possible hazards and their location prediction (406), to train the trajectory state prediction model (303);
iii. a two-step validation process reconfigures the state estimates using the ILP correlation model (103) and the IMU sensor unit (102), which ascertains the accuracy of the estimates;

d. a hazard detection model (105) configured to generate a trained model (405) to ascertain detection of possible landing hazards like craters, boulders and plane surface in the underlying region of interest using only captured descent images; wherein the steps for building the hazard detection model comprise:
i. images (400) are extracted from the image database (202) and annotated using an image annotator (401) with the labels crater, boulder or plane surface, and an annotated image database with localized hazards (402) is created;
ii. annotated images (403) are used to train the deep learning hazard detection module (404), and the hazard detection model (405) is built, which is then used to identify possible hazards and their location (406);

e. a module (107) for determining retargeting decisions based on image input, said module comprising:
i. the ILP (Image and Landing Parameters) correlation model (103) configured to generate a trained model (206) to estimate the current state of the dynamic system;
ii. the hazard detection model (105) for determining hazardous areas based on image inputs, wherein it (105) detects and classifies possible hazards and computes their locations;
iii. the trajectory state prediction model (104) for determining thrust actions based on image inputs, which further utilizes the output (406) from the hazard detection model (105) and guides the navigation system for the retargeting decision.

2. The system as claimed in claim 1, wherein a deep learning technique configured via transfer learning is used by module (203) for modelling trajectory state estimation on image input; the deep learning hazard detection module (404), configured via transfer learning, comprises a deep learning technique for building the hazard detection model (405) for detecting hazards like craters, boulders and plane surface; and a multivariate deep learning approach is used to find a safe landing position and generate Guidance, Navigation and Control (GNC) system commands for the space vehicle.

3. The system as claimed in claim 1, embodying a process for vision based self-decisive planetary hazard free landing of the space vehicle; wherein the said process comprises:
i. the onboard camera (100) mounted on the space vehicle continuously captures images in the descent phase of landing; these real time captured images are fed to the onboard trajectory state prediction model (104), the ILP correlation model (103) and the hazard detection model (105) simultaneously;
ii. the captured images are preprocessed and features are extracted in real time using a convolutional neural network;
iii. the extracted image weights are matched with the trained weights of the ILP correlation model (103), the onboard trajectory state prediction model (104) and the hazard detection model (105) simultaneously, using a multivariate deep learning model, to:
a. estimate the current position, velocity and altitude of the space vehicle,
b. predict the next trajectory state, and
c. identify hazards while landing;
so as to generate control actions for hazard free soft landing.

Description:
Description

Title of Invention: SYSTEM FOR VISION-BASED SELF-DECISIVE PLANETARY HAZARD FREE LANDING OF A SPACE VEHICLE

Technical Field

[1] The present Invention generally relates to a system for vision based self-decisive planetary hazard free landing of a space vehicle. In particular, the present Invention relates to generating vision-based control commands for planetary landing of a space vehicle using artificial intelligence techniques, and the system thereof.

Background Art

[2] Ever since the space exploration field evolved, the ultimate goal of various missions has been to land a payload on remote celestial bodies, including natural satellites, asteroids etc., so that those bodies can be closely studied. Landing the payload, which might be an autonomous vehicle or an astronaut, on the planetary surface is carried out through a GNC (Guidance, Navigation and Control) system, which is the most critical part of such missions, as only upon successful landing can any scientific exploration be carried out by autonomous vehicles or by actual humans. For instance, in the Lunar Lander or similar kinds of space vehicles, the landing system involves guidance, navigation and control of the relative velocity of the lander. In the development of the landing system, generating a thrust force that accounts for the limited amount of available fuel and achieving hazard-free soft landing are important milestones.

[3] In prior work, in earnest from the 1960s, the primary focus was the development of the GNC system. Various historical experiences of moon landing, mainly through the Apollo missions, clearly highlighted the visual/optical aid in the GNC system, which enhances the mission success rate without endangering the valuable payload, including the human payload. The Apollo moon landing missions showed that actual visual inspection by the astronaut aided the landing of the vehicle on the surface and the avoidance of the various hazards of the lunar terrain. But autonomous landing still remains a challenge. Considering this, a GNC system can be foolproof only when it is augmented by vision-based navigation; here lies a great role for accurate and efficient machine learning algorithms. GNC systems play a vital role not only in spacecraft ascent and descent modules but also while orbiting around the target planet.

[4] Currently, various vision-based navigation systems are being conceptualized and developed, but these systems have a huge cost implication for the mission as they are based on special hardware that scans using lasers or similar rays to create terrain maps. These systems are based on telemetry, in which a device projects rays of specific frequencies and a receiver then captures the reflected rays to create maps of the terrain. This type of system involves special devices, viz. LIDAR, to transmit and receive the signals and translate them into a map. It also involves many time delays due to the huge distances between communicating devices. In deciding the target orbit altitude and safe reactions to obstacles in the orbital path, vision-based navigation might help greatly, instead of referring to traditional star tracker systems each time to obtain the exact location of the vehicle. Reliance on a star tracker system requires costly and heavy hardware for communication.

[5] In prior work, such as the terrain relative navigation most recently demonstrated in NASA's Mars landing mission, terrain maps are utilized for generating navigation commands. These terrain maps are 3-dimensional digital elevation maps of the underlying terrain. This approach requires heavy processing capabilities and a huge dataset containing elevation maps of all planetary regions so as to match the pattern of the current region of interest.

[6] Considering the aforementioned challenges, the area of vision-based navigation using machine learning techniques is explored further.

Literature related to the Invention is stated below:

CN107273929A discloses the Invention of an unmanned aerial vehicle autonomous landing method based on a deep synergetic neural network. In this Invention a method for autonomous landing of a UAV (drone) is proposed, wherein the acting force on the UAV is earth's gravity. The images used are manually pre-processed, vectorised and converted into motion kinematics of the drone. These kinematic equations are used to train the neural network for generating control commands to either stop, continue or hover. In contrast, in our Invention, we claim a method for autonomous landing of a space vehicle using real time descent images, wherein the acting force considered is any planetary gravity. There is no manual preprocessing of the images and no motion kinematics are considered; instead, a convolutional neural network is used to extract features from real time images, which are directly utilised to train the deep neural network. The objective is not only to generate control commands but also to generate a navigation trajectory by identifying a safe landing position using a hazard detection and prevention method.

[7] CN110543182A discloses an autonomous landing control method and system for a small unmanned rotorcraft. This Invention claims a method for autonomous landing of a rotorcraft wherein landing site images along with the altitude of the rotorcraft (obtained through GPS) are fed to a neural network to generate, as output, the duty cycle of the propeller motor, which is the controlling body of the rotorcraft. In contrast, in our Invention, we claim a method for autonomous landing of a space vehicle wherein the inputs are only landing site real time images and thrust commands are generated as output for navigation. It also involves a method of trajectory state prediction along with a real time hazard detection method. The proposed invention works under the consideration that GPS system support is absent on the external planetary body.

[8] WO2017177128A1 discloses systems and methods for Deep Reinforcement Learning using a Brain-Artificial Intelligence Interface. This Invention discloses a system for automatic aeroplane/flight control, similar to a human pilot, which takes into consideration unpredictable situations like lightning or weather conditions. It consists of an artificial neural network whose learning takes place through a demonstration method dependent on human pilots. As a flight control system, it is an earth-based system and makes use of GPS and other flight sensors. In contrast, in our Invention, we claim a process and method for autonomous landing of a space vehicle on planetary surfaces in the absence of GPS, with only image inputs through camera sensors and without human intervention. With the help of only real time images captured through the on-board camera of the spacecraft, the spacecraft is able to find a safe landing position and navigate.

[9] WO2022072606A1 discloses an autonomous multifunctional aerial drone. This Invention claims a method for autonomous navigation of multirotor drones based on artificial intelligence. It feeds mixed sensor data from cameras, speakers and other sensors to an artificial intelligence module for aerial navigation. Infrared or Lidar sensors are used for obstacle detection, and GPS and GLONASS technology is used for guidance. In contrast, in our Invention, we claim a process and method for autonomous landing of a space vehicle wherein GPS is not available. Real time surface images are utilized to predict the future navigation step along with hazard avoidance using trained deep learning models.

[10] CN110825101A discloses an autonomous unmanned aerial vehicle landing method based on a deep convolutional neural network. This Invention claims a method for autonomous landing of a UAV using a predefined height parameter, obstacle detection through pattern matching, and guiding the UAV to the desired location using thresholding. The landmark pattern data is generated through GPS using drone captured images. In contrast, in our Invention, we claim a process and method for autonomous landing of a space vehicle wherein GPS is not available. Real time surface images are utilized to predict the future navigation step along with hazard avoidance using trained deep learning models. There is no pattern matching or thresholding performed in our disclosed Invention.

Summary of Invention

[11] The present Invention generally relates to processing the images captured by an onboard camera using deep learning for autonomous planetary landing. In particular, the Invention relates to a system for vision based self-decisive planetary hazard free landing of a space vehicle and the method thereof.

[12] The primary data is in the form of images or videos captured through onboard cameras of the space vehicle. While in descent, a space vehicle camera continuously captures images and/or videos of the underlying region of interest. The present Invention intends to use this data and available hardware like an IMU (Inertial Measurement Unit) for more precision.

[13] The system is divided into three primary phases. In the first phase, the correlation between the spacecraft dynamic state and the current snapshot of the region/camera image is modelled using a deep learning technique. This model serves as a mapping between the image and landing parameters like position and velocity, and is utilized to predict the dynamic state of the system for a real time image input; this module is named the ILP Correlation model (103). In the second phase, for a given initial state, a next state prediction model for the space vehicle trajectory is built using a memory enabled Long Short-Term Memory deep neural network. This assures vehicle movement within permissible thrust limits. In the last phase, captured images are used for modelling the hazard detection system, which in turn helps path planning and retargeting.

[14] In an embodiment, a system for generating control actions for a space vehicle is provided. The system includes an on-board camera (100) for capturing images of the underlying terrain of the planet while the spacecraft is in descent and whose hazard-free landing spot is to be ascertained; these images are fed to the pretrained deep learning models (206), (303) and (405) obtained through processes (103), (104) and (105) respectively; on receiving an input from the camera, step (103) finds a correlation between system parameters and input images; step (104) generates thrust values in the form of control actions (106) which can be directly fed to the control unit for further navigation; step (105) is the hazard detection model, a process of detecting hazards like craters and boulders in the underlying region of interest on receiving image inputs from the camera (100); step (106) comprises the outcomes of the overall process, namely acceleration commands or the necessary thrust values for navigating the spacecraft to the next desired position.

[15] In another embodiment, a process (103) for finding a correlation between captured images and system state parameters is provided by implementing the method illustrated in Figure 1; step (200) generates synthetic databases (202) for descent images and (205) for system states; step (203) is a deep learning module for finding the correlation between image and state parameters, wherein a multivariate image regression is implemented and trained using the databases (202) and (205) generated in the previous steps; the end result of the process is a well-trained ILP correlation model (206), which is further utilized in state estimation tasks as shown in Figure 1, step (103).
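By way of illustration only, the following is a minimal sketch of how such a transfer-learning CNN regressor could be assembled. It assumes a TensorFlow/Keras environment, 224x224 RGB descent images, a 12-dimensional state label (position, velocity, acceleration and thrust, each as a 3-vector) and a MobileNetV2 backbone; none of these specific choices are prescribed by the disclosure.

```python
import tensorflow as tf

def build_ilp_correlation_model(state_dim: int = 12) -> tf.keras.Model:
    """Transfer-learning CNN that regresses a descent image to state parameters."""
    # Pretrained convolutional backbone used as a frozen feature extractor.
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    backbone.trainable = False

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    # Multivariate regression head: one output per state parameter.
    outputs = tf.keras.layers.Dense(state_dim, activation="linear")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Training would use the synthetic image database (202) and state labels (205),
# e.g. model.fit(synthetic_images, state_labels, epochs=..., validation_split=0.1)
```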

[16] In another embodiment, a process for trajectory state prediction is provided for space vehicle descent navigation; the process (104) comprises a deep learning assembly called the deep learning next state prediction module (301), wherein a long short term memory (LSTM) architecture is employed for trajectory state prediction; step (301) comprises a built-in feature extraction module; these image features are further used in the process of regression; in the end, a state prediction model (303) is generated with final optimized model parameters; the prediction model (303) is utilized, on receiving inputs from on-board cameras, to predict the best probable next state for the space vehicle; in addition to state prediction, step (104), on receiving hazard location inputs (406) from step (105), decides on retargeting the landing location.
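A minimal sketch of a per-frame CNN encoder followed by an LSTM, as one possible realisation of module (301), is given below. The three-image sequence length follows the description of assembly (300); the layer sizes and the 12-dimensional state output are illustrative assumptions, not taken from the disclosure.

```python
import tensorflow as tf

def build_state_prediction_model(seq_len: int = 3,
                                 state_dim: int = 12) -> tf.keras.Model:
    """CNN feature extractor applied per frame, followed by an LSTM that
    predicts the most probable next trajectory state."""
    frame_encoder = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
    ])

    inputs = tf.keras.Input(shape=(seq_len, 224, 224, 3))
    # Apply the same CNN to each of the consecutive descent images.
    x = tf.keras.layers.TimeDistributed(frame_encoder)(inputs)
    # Memory over previous frames supports the next-state prediction task.
    x = tf.keras.layers.LSTM(64)(x)
    outputs = tf.keras.layers.Dense(state_dim, activation="linear")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model
```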

[17] In another embodiment, a process for detection of hazards like craters or boulders on the landing site using images is provided; the architecture (105) involves an image annotator (401) wherein images are annotated manually with bounding boxes around hazards like craters and boulders along with their positional details; on receiving these annotated images (403) from a database (402), a deep learning architecture with a transfer learning approach (404) is trained for hazard detection.
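A minimal sketch of the classification part of such a hazard detector is given below, assuming a Keras transfer-learning setup with a ResNet50 backbone and the three terrain classes named in the disclosure; bounding-box localization (402)/(406) would require an additional detection head and is omitted here.

```python
import tensorflow as tf

HAZARD_CLASSES = ["crater", "boulder", "plane_surface"]

def build_hazard_classifier() -> tf.keras.Model:
    """Transfer-learning classifier for terrain patches; a full detector would
    additionally regress bounding boxes for hazard localization (406)."""
    backbone = tf.keras.applications.ResNet50(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    backbone.trainable = False

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = tf.keras.applications.resnet50.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    outputs = tf.keras.layers.Dense(len(HAZARD_CLASSES), activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```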

[18] An object of the present Invention is to provide a method and a system for generating control actions for space vehicle navigation using deep learning techniques. A further object of the present Invention is to provide a method and a system for generating the current position of a space vehicle in the absence of GPS. A further object of the present Invention is to provide a method and a system for detection of potential hazards for taking retargeting decisions.

[19] The objects of the present Invention are:
a. to enable space missions to perform soft landing of a space vehicle using artificial intelligence without human intervention;
b. to estimate IMU parameters like velocity, acceleration and position of the space vehicle with the help of captured real time images in the absence of a GPS system;
c. to enable space missions to take real time decisions for autonomous navigation and landing in the environment of any celestial body;
d. to perform hazard free landing of the space vehicle on the celestial body;
e. to decide appropriate trajectory and thrust actions for soft landing.

[20] To further clarify the advantages and features of the present Invention, a more particular description of the Invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the Invention and are therefore not to be considered limiting of its scope. The Invention is described and explained with additional specificity and detail with the accompanying drawings.

Technical Problem

[21] The current lunar landing system is dependent on ground-based navigation and tracking. This introduces many delays. Conventional techniques rely on network servers for communication, as the attitude and current position of the space vehicle are determined through an onboard setup and decisions are taken accordingly. The landing trajectory of the space vehicle is pre-fed into the system using known landing spots, and hence the system lacks self-decisiveness. For manned missions, unexpected maneuvering decisions are controlled by human subjects. Hazard-free landing is governed by costly DEM (digital elevation map) facilities.

Solution to Problem

[22] The presented Invention involves deep learning models trained on planetary image/video data. The ILP correlation and thrust prediction models guarantee precise maneuvering to the next feasible trajectory state. The hazard detection model takes care of hazardous situations and prompts the system for a retargeting decision.

Advantageous Effects of Invention

[23] The integrated assembly of the Invention guarantees real-time autonomous trajectory planning, guidance and navigation. The system depends on real-time images captured through the onboard camera and hence does not require costly DEM facilities. The landing trajectory need not be pre-known, as the next step in navigation is predicted via the combined outcomes of the thrust prediction and hazard detection modules. Overall, the system brings self-decisive capability.

Brief Description of Drawings

[24] These and other features, aspects, and advantages of the present Invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

Fig.1

[25] [fig.1] shows an architecture and overall process flow for a proposed system generating control actions for a space vehicle using on-board camera images in accordance with an embodiment of the present Invention; wherein (100) is an onboard camera of the space vehicle, (101) are the real time images captured through the onboard camera (100), (102) is the IMU (Inertial Measurement Unit) sensor unit, such as LIDAR, (103) is the ILP (Image and Landing Parameter) correlation model, (104) is the trajectory state prediction model, (105) is the hazard detection model, and (106) are the control actions predicted using the combination of the three models (103), (104) and (105).

Fig.2

[26] [fig.2] shows a block diagram for the process to build (103), i.e. the ILP correlation model, for finding a correlation between captured images and system state parameters in accordance with an embodiment of the present Invention, by implementing the method illustrated in Figure 1; wherein (207) is the camera mounted on the modelled space vehicle in a simulated environment, (200) is the agent based synthetic image generator platform, (201) are the images captured through the simulated environment, (202) forms the image database, (204) are the system state parameters, (205) is the image state labels database, (203) is the deep learning module based on multivariate CNN regression for image correlation, and (206) is the ILP correlation model.

Fig.3

[27] [fig.3] illustrates the block diagram for the process (104) of obtaining a trained model of trajectory state prediction in accordance with an embodiment of the present Invention; wherein (202) is the image database, (205) is the image state labels database, (300) is the consecutive images with labels, (301) is the deep learning next state prediction module, (302) is the most probable state, (303) is the trajectory state prediction model, and (406) is the possible hazards and their location.

Fig.4

[27] [fig.4] depicts an architecture for the process, i.e. the hazard detection model (105), of the proposed system for detection of hazards like craters or boulders on the landing site using images in accordance with an embodiment of the present Invention; wherein (202) is the image database, (400) are the images, (401) is the image annotator, (402) is the annotated image database with localized hazards, (403) are the annotated images, (404) is the deep learning hazard detection module, (405) is the hazard detection model, and (406) is the possible hazards and their location prediction.

Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present Invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present Invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.

Description of Embodiments

[28] For the purpose of promoting an understanding of the principles of the Invention, reference will now be made to the embodiment illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the Invention is thereby intended; such alterations and further modifications in the illustrated system, and such further applications of the principles of the Invention as illustrated therein, are contemplated as would normally occur to one skilled in the art to which the Invention relates.

[29] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the Invention and are not intended to be restrictive thereof.

[30] Reference throughout this specification to "an aspect", "another aspect" or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present Invention. Thus, appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.

[31] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises...a" does not, without more constraints, preclude the existence of other or additional devices, sub-systems, elements, structures or components.

[32] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this Invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.

[33] Embodiments of the present Invention will be described below in detail with reference to the accompanying drawings.

[34] The present Invention generally relates to image processing with the help of deep learning classification techniques, multivariate regression techniques with and without memory, and a combination of reinforcement learning techniques. In particular, the present Invention relates to a system for generating thrust commands for hazard-free descent and navigation of a space vehicle using deep learning techniques and the method thereof.

[35] The embodiments of the present Invention make certain assumptions and use some terminology related to the space vehicle, such as the dynamic state of the system. The following paragraphs briefly explain these assumptions.

[36] Real time descent images are captured through the space vehicle's on-board cameras at some time interval. As each image so captured is taken at some altitude and attitude, with the space vehicle carrying some velocity, it is assumed to represent a current state of the space vehicle. This means that an image captured at time instance t represents state S_t. Analogously, states S_(t-1) and S_(t+1) are represented by images at time instances (t-1) and (t+1) respectively. More specifically, the time interval between two images is equal to the state transition time t_st for a trajectory. A state at time t can be expressed in terms of soft-landing parameters as follows,

S_t = {P_t, V_t, a_t, T_t}
    = {[φ, θ, ψ]_t^T, [u, v, w]_t^T, [a_x, a_y, a_z]_t^T, [T, α, β]_t^T}
where P_t ∈ ℝ³, V_t ∈ ℝ³, a_t ∈ ℝ³, T_t ∈ ℝ³.
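For illustration, the state S_t could be represented in software roughly as follows. This is an assumed Python representation; the field layout simply mirrors the four 3-vectors above and is not mandated by the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TrajectoryState:
    """State S_t of the descending vehicle at time t (assumed layout)."""
    position: np.ndarray      # P_t in R^3
    velocity: np.ndarray      # V_t in R^3, e.g. [u, v, w]
    acceleration: np.ndarray  # a_t in R^3, [a_x, a_y, a_z]
    thrust: np.ndarray        # T_t in R^3, [T, alpha, beta]

    def as_vector(self) -> np.ndarray:
        """Flatten to the 12-dimensional label used for regression."""
        return np.concatenate(
            [self.position, self.velocity, self.acceleration, self.thrust])
```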

[37] Figure 1 shows an architecture and overall process flow for a system generating control actions for a space vehicle using on-board camera images in accordance with an embodiment of the present invention; the process includes a lander with a camera (100) for capturing descent images of the underlying planetary region; these images are fed to the pre-trained deep learning models (206), (303) and (405) obtained through processes (103), (104) and (105) respectively; the processes (103), (104) and (105) are detailed in the following subsections as described in Figures 2, 3 and 4; on receiving an input from the camera, step (103) is instantiated to estimate the current state of the dynamic system, which is required to ascertain the current lander position and velocity along with altitude information; essentially, step (103) finds a correlation between system parameters on receiving image inputs from the camera (100); step (103) also fine-tunes the estimates based on inputs from the IMU sensor unit (102), which ascertains the accuracy of the state estimates; step (104) predicts the next dynamic state of the spacecraft on receiving the image input from the camera and the current state parameters computed through step (103); step (104) generates thrust values in the form of control actions (106) which can be directly fed to the control unit for further navigation; step (105) is a process of detecting hazards like craters and boulders in the underlying region of interest on receiving image inputs from the camera (100); step (105) is instantiated once the altitude falls below 1 km; further, in step (105), on detection of a hazardous region, its result is fed as input to step (104) for making retargeting decisions, wherein recomputation of the state prediction and regeneration of new control actions as per the retargeting decision is done; on detection of a hazard-free region, a clean signal is provided to step (104) to continue with its generated control actions (106); step (106) comprises the outcomes of the overall process, namely acceleration commands or the necessary thrust values for navigating the spacecraft to the next desired position.
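A schematic sketch of one guidance cycle of Figure 1 is given below. It is purely illustrative: the model objects and their methods (estimate_state, refine_with_imu, detect, predict_next, thrust_command) are hypothetical placeholders standing in for models (103), (104) and (105), and the 1 km activation altitude is taken from the description above.

```python
HAZARD_ACTIVATION_ALTITUDE_M = 1000.0  # step (105) is activated below ~1 km

def descent_step(image, imu_reading, ilp_model, trajectory_model, hazard_model):
    """One guidance cycle of Figure 1: descent image in, thrust command (106) out.
    All model objects and their methods are hypothetical placeholders."""
    # Step (103): estimate the current state from the descent image,
    # then refine the estimate with the IMU reading (102).
    state = ilp_model.estimate_state(image)
    state = ilp_model.refine_with_imu(state, imu_reading)

    hazards = []
    if state.altitude < HAZARD_ACTIVATION_ALTITUDE_M:
        # Step (105): classify terrain into craters, boulders and plane
        # surface and localize the hazards (406).
        hazards = hazard_model.detect(image)

    # Step (104): predict the next trajectory state; if hazards are present,
    # the prediction is recomputed for a retargeted landing spot.
    next_state = trajectory_model.predict_next(image, state, hazards)

    # Step (106): thrust command derived from the predicted next state.
    return trajectory_model.thrust_command(state, next_state)
```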

[38] Figure 2 shows a block diagram for the process (103) of finding a correlation between captured images and system state parameters in accordance with an embodiment of the present Invention, by implementing the method illustrated in Figure 1; the system (103) includes an agent based platform (200) for generating synthetic images on receiving a planetary environment as input; the platform (200) is calibrated and fine-tuned for virtual descent of the agent on the planetary surface; the virtual system camera and the agent's control unit are calibrated to generate descent images and their corresponding state parameters comprising position, velocity and altitude information; step (200), of rendering synthetic images (201) and corresponding state parameters (204), generates synthetic databases (202) for descent images and (205) for system states; the process of generating the databases (202) and (205) involves manual operation of landing the agent on the given planetary environment through a keyboard, mouse, joystick or camera interface (207); step (203) is a deep learning module for finding the correlation between image and state parameters, wherein a multivariate image regression is implemented and trained using the databases (202) and (205) generated in the previous steps; in the embodiment, step (203) comprises a built-in convolutional neural network and a pooling layer for feature extraction; these image features are further used in the process of regression; a loss function is defined to fine-tune the weights of the network; pretrained models, such as the Keras image regressor, are used through the transfer learning technique; the end result of the process is an ILP correlation model (206) wherein all the weights are tuned, which is further used in state estimation tasks as in Figure 1, step (103). In mathematical form, the process is described as follows:

Input: I_t = image at current time t
Output: S_t = current state at time t (actual observation); Ŝ_t = model prediction; t_st = state transition time; T_f = total flight time for each phase.

The relation between image and state parameters is modelled by the function Ŝ_t = f_1(I_t; ω), wherein ω is the parameter of the model to be learnt. Training involves learning this function f_1(·) and thus finding the most efficient parameter value ω. When a new image is given as input to the trained model, it predicts the state parameters corresponding to that image using the optimized weight vector ω.

[39] Figure 3 shows the block diagram for the process (104) of obtaining a trained model of trajectory state prediction in accordance with an embodiment of the present Invention; the process (104) comprises a deep learning assembly called the deep learning next state prediction module (301), wherein a long short term memory (LSTM) architecture is employed for trajectory state prediction; an assembly (300) correlates images from the image database (202) with their corresponding state labels from the database (205); each time, three such consecutive image-label pairs are given as input to the deep learning LSTM module; the presence of memory in the LSTM allows it to remember previous states of the system, thereby enhancing the prediction task; step (301) comprises an inbuilt convolutional neural network and a pooling layer for feature extraction; these image features are further used in the process of regression; a loss function is defined to fine-tune the weights of the network; once the minimum of the loss is achieved, the prediction model (303) is generated with final optimized weights; the prediction model (303) is utilized, on receiving inputs from on-board cameras, to predict the best probable next state for the space vehicle; in addition to state prediction, step (104), on receiving hazard location inputs (406) from step (105), decides on retargeting the landing location; step (104) thereby ascertains hazard-free landing by issuing the best possible thrust commands to the control unit of the space vehicle, which forms the guidance for appropriate navigation in accordance with an embodiment of the present Invention.

[39] Figure 4 illustrates the architecture and process (105) of the proposed system for detection of hazards like craters or boulders on the landing site using images in accordance with an embodiment of the present Invention; the architecture (105) involves an image annotator (401) wherein images (400) from the database (202) are taken as inputs and annotated manually with bounding boxes around hazards like craters and boulders along with their positional details; this yields localization of hazards and results in a database (402) containing annotated images; on receiving these annotated images (403) from the database (402), a deep learning architecture called the deep learning hazard detection module (404) is trained for hazard detection; the deep learning hazard detection module (404) comprises a convolutional neural network with a max pooling layer for feature extraction from these images; further to that, it comprises deep image classification network layers for classifying the images into three categories, namely crater, boulder and plane surface; the network is trained according to the available image labels with gradient descent optimization for minimization of the loss function; after training, a hazard detection model is obtained as output, which is further utilized in the detection of possible hazards (406) in real time on unseen onboard camera images.

Examples

[40] Example 1

Example 1 describes an event where a spacecraft is about to land on a planet and loses contact with the earth's base station. In such a situation, the system of the present invention takes over control. The system estimates the current state of the spacecraft and predicts the next navigation state using the thrust prediction module. The hazard detection module signals the navigation module whether the next state is hazard-free or not. If the next state is hazard-free, the spacecraft is navigated to that state; otherwise, a retargeting decision is made. This process is repeated until an altitude of less than 1 m is reached. At the end of the process, the last state of the autonomous decisions is the preferred landing spot. Thus, in the event of loss of communication, the system can take its own decisions and land safely on the planetary surface.

Example 2

Example 2 describes an event where an aeroplane travels at some speed at higher altitudes. Due to bad weather conditions or some technical issue, the GPS system fails. In such a situation, the proposed system is useful for generating autonomous landing trajectory sequences if it is configured and trained on Earth images. The ILP correlation and state prediction modules are useful for navigating the plane to the desired location.

Industrial Applicability

[41] The system can be configured for guidance and navigation in any planetary landing mission, and in the aviation industry for autonomous landing of air vehicles in the absence of GPS. Extended applications lie in robotics navigation and guidance.

Citation List

[42] Citation List follows:

Patent Literature


[47] PTL 1: Patent CN107273929A

[48] PTL 2: Patent CN110543182A

[49] PTL 3: Patent WO2017177128A1

[50] PTL 4: Patent WO2022072606A1

[51] PTL 5: Patent CN110825101A

Non Patent Literature

[52] NPL 1 refers to a design that integrated guidance and navigation functions using recurrent CNNs, providing a functional relationship between optical images captured during landing and the required thrust actions. It further employed a DAgger method to improve deep learning performance, but this requires an expert to augment the database by exploiting human corrective actions, which is hard to find in space. The vehicle landing problem is considered in two separate steps, altitude reduction (1D) and translational motion (2D), whereas it is more realistic to consider it as a 3D space maneuver.

[53] NPL 1: R. Furfaro et al., “Deep learning for autonomous lunar landing,” in 2018 AAS/AIAA Astrodynamics Specialist Conference, 2018, pp. 1-22.