Title:
A SYSTEM PROVIDING PREDICTION OF COMMUNICATION CHANNEL PARAMETERS
Document Type and Number:
WIPO Patent Application WO/2021/076084
Kind Code:
A1
Abstract:
This invention is related to a system (1) that provides fast and precise prediction of communication channel parameters through deep learning using aerial or satellite images. The system (1) of the invention can be used in the planning of base stations in the telecommunication sector. The desired communication channel parameters of the desired region are obtained by inputting images taken from satellites or aerial vehicles into the deep learning network. Thus, the location, height, density and power of the base stations can be decided.

Inventors:
GUNTURK, Bahadir Kursat (Beykoz/Istanbul, TR)
ATES, Hasan Fehmi (Beykoz/Istanbul, TR)
BAYKAS, Tunçer (Beykoz/Istanbul, TR)
Application Number:
TR2020/050951
Publication Date:
April 22, 2021
Filing Date:
October 16, 2020
Assignee:
ISTANBUL MEDIPOL UNIVERSITESI (Istanbul, TR)
International Classes:
B64C39/02; G06K9/20; G06K9/66
Attorney, Agent or Firm:
SIMSEK, Meliha Merve (Bakirkoy/Istanbul, TR)
Claims:
CLAIMS

1. A system (1) which provides fast and precise prediction of communication channel parameters through deep learning using aerial or satellite images, comprising:

❖ at least one UAV (2) that flies by remote control and whose GPS coordinates can be recorded instantly,

❖ at least one camera (3) placed on the UAV (2), which can be controlled with a remote control and enables the images of the region to be recorded,

❖ at least one transmitter (4) that enables the transmission of the images taken through the camera (3), and

❖ at least one receiver (5), located on land, that receives the signals transmitted from the transmitter (4),

characterized in that it comprises at least one control unit (6)

o that instantly records the GPS coordinates of the UAV (2),

o that provides the power required for the operation of the receiver (5) and receives the images obtained by the receiver (5),

o that makes the channel parameter prediction by inserting the images directly into the deep learning network,

o that uses, in the prediction process of the channel parameter, regression, in which the deep learning network is trained to give the exact value of the desired channel parameter, or the parameter classification method, in which the channel parameter is divided into classes according to certain intervals, or the model classification method, in which one of the predetermined model classes is selected for the channel parameter.

2. System (1) according to claim 1, characterized in that it comprises a software-based transmitter (4) placed on the UAV (2); a transmitter (4) having at least one battery (41) providing electrical energy, at least one voltage converter (42) that provides the appropriate operating voltage by converting the electrical energy obtained from the battery (41), at least one power amplifier (43) that provides amplification of output signal power and at least one USRP E310 (44), which is a software-based radio interface and provides data communication in a certain frequency range.

3. System (1) according to claim 2, characterized in that it comprises a USRP E310 (44) which sends signals between 70 MHz and 6 GHz with a bandwidth of up to 56 MHz, is suitable for 2x2 MIMO communication, performs 10 MS/s sampling, has 10 dBm output power, has a dipole antenna at its output and is programmed using the GNU Radio system.

4. System (1) according to claim 2 or 3, characterized in that the transmitter (4) comprises a power amplifier (43) which increases the output power to 30 dBm and thus increases the communication range.

5. System (1) according to any one of claims 2 to 4, characterized in that it comprises a power amplifier (43) and a USRP E310 (44) operating with 15 V.

6. System (1) according to any one of claims 2 to 5, characterized in that it comprises a battery (41) having an 11.1 V output and a voltage converter (42) which converts the 11.1 V obtained from the battery (41) into 15 V and thus provides the electrical energy required for the operation of the power amplifier (43) and the USRP E310 (44).

7. System (1) according to any one of claims 2 to 6, characterized in that it comprises a UAV (2) carrying a payload of up to 10 kg and whose GPS coordinates are recorded instantly by the control unit (6).

8. System (1) according to any one of the preceding claims, characterized in that it includes a receiver (5) with USRP B210 that takes power directly from the control unit (6) and transfers the data to the control unit (6).

9. System (1) according to claim 8, characterized in that it comprises the control unit (6) which records the signals sent from USRP B210 via GNU Radio included therein, corrects the signals against the frequency shift by means of its software with technical and scientific calculations including numerical calculation, graphical data representation and programming, synchronizes these signals and finds the channel impulse response by correlating with the previously recorded sample signal.

10. System (1) according to any one of the preceding claims, characterized in that it comprises a control unit (6) that can use two-dimensional satellite maps and 3-dimensional models corresponding to these regions, wherein the control unit (6), which is able to obtain a data set composed of channel parameters by making simulations with the "ray tracing" approach on the 3-dimensional models, selects a deep learning architecture and trains the architecture using the data set, the architecture consisting of parts used in deep learning networks such as fully connected artificial neural networks and nonlinear units.

11. System (1) according to any one of the preceding claims, characterized in that it comprises a control unit (6) which performs deep learning based operations in correlating channel parameters with image data, creates a large training set in these operations and uses the PlaceMaker add-in running on Google SketchUp providing 3-dimensional (3D) models of various cities to obtain a large data set and in addition to these 3D models, obtains 2-dimensional (2D) satellite images of the same regions and simulates the 3D models with “ray tracing” for the desired communication scenario and obtains the channel parameters by means of its software.

12. System (1) according to any one of the preceding claims, characterized in that it comprises a control unit (6) which applies the log-normal shadowing model as a channel model to the power losses between the transmitter (4) and the receiver (5), having the formula PL(d) = PL(d0) + 10 n log10(d/d0) + Xs, in which PL(d) (path loss) indicates the power loss at the distance d from the receiver (5), PL(d0) indicates the power loss at the reference distance d0 from the receiver (5), n indicates the power loss parameter and Xs indicates the random variable expressing the deviation from the model with standard deviation s.

13. System (1) according to claim 12, characterized in that it comprises a control unit (6) which fits the log-normal shadowing model to the power losses obtained by means of its software, obtains the n and s values of the region for each image, and thus obtains a data set of n and s values for each image region.

14. System (1) according to claim 12 or 13, characterized in that it comprises a control unit (6) which considers (n, s) prediction as a regression problem and trains the deep architectures predicting these values.

15. System (1) according to any one of claims 12 to 14, characterized in that it comprises a control unit (6) which divides the values of (n, s) into a certain number of sub-intervals by quantization, formulates this as a classification problem, predicts in which sub-range the (n, s) parameters of each image are located, and tests the alternative deep network architectures for this.

16. System (1) according to any one of claims 12 to 15, characterized in that it comprises a control unit (6) which obtains a 2-dimensional path loss map by quantizing the path loss values obtained by simulations for different receivers (5) placed regularly on the earth, provides the prediction of the path loss for each receiver (5) by training the deep architecture to learn this map and also tries to learn the path loss distribution together with the (n, s) parameters by obtaining the histogram of the path losses in the region.

17. System (1) according to any one of claims 12 to 16, characterized in that it comprises a control unit (6), wherein the control unit (6) changes the last fully connected (fc) layer of the VGG-16 architecture to give a single output for n and s values, trains separate architectures for these two parameters, uses 70% of the data set for training purposes, and allocates the remaining 30% for testing purposes and uses the squared error of the predicted parameter as a loss function in the training of deep architecture.

18. System (1) according to any one of claims 12 to 17, characterized in that it comprises a control unit (6) which uses deep networks for (n, s) classification, converts the problem into a classification problem by giving the ranges of (n, s) values, quantizes the (n, s) values into various sub-intervals and assigns a class label to each interval.

19. System (1) according to claim 18, characterized in that it comprises a control unit (6) which, by means of this labeling, tests the ResNet-50 architecture together with VGG-16 for channel parameter prediction transformed into a classification problem, arranges the deep network architecture to output the required number of classes at its last layer, and trains and tests separate architectures for n and s.

20. System (1) according to claim 18 or 19, characterized in that it comprises a control unit (6) which, as an alternative to the VGG-16 and ResNet-50 based architectures, tests the MobileNet architecture, which has been very successful in object detection/recognition and in which attributes come from convolutional networks at different levels.

21. System (1) according to any one of claims 12 to 20, characterized in that it comprises a control unit (6) which uses deep networks for predicting the path loss map, learns the path loss values point by point (everywhere on the map) using this rich information since it has path loss values for all the receivers (5) on a regular grid in its image, accordingly, creates a 2-dimensional path loss map for each region by quantizing the path loss (PL) values and tries to train deep network architecture with these maps.

22. System (1) according to claim 21, characterized in that it comprises a control unit (6) which determines the range in which the PL values lie in all images, divides the PL values into 25 sub-intervals in multiples of 10, obtains a path loss map by placing the values on a 2-dimensional matrix according to the location of each receiver (5), considers these values as the class label of the relevant receiver (5), considers the problem desired to be solved as the classification of each receiver (5) and, as a result, transforms the problem into the problem of classifying the pixels corresponding to the receiver (5) locations in the image.

23. System (1) according to claim 21 or 22, wherein it comprises a control unit (6) which uses the FCN-8s (Fully Convolutional Network) architecture providing semantic segmentation of the image by classifying the pixels, adapts the relevant layers of the FCN-8s architecture to result in 25 classes and interpolates the input image and output maps such that they have equal dimensions.

24. System (1) according to any one of claims 21 to 23, wherein it comprises a control unit (6) which, due to the difficulty of point prediction of the PL values at the receiver (5) level, provides prediction of more summary information at the image level regarding the path loss; for this purpose, it obtains a histogram of PL values for each image and tries to predict these histograms, generates an 8-bin histogram in the range [10,100] for the PL values and tries to learn the histogram along with the n parameter by adding an additional layer to the deep architectures it uses.

25. System (1) according to any one of claims 21 to 24, wherein it comprises a control unit (6) which uses the SoftMax loss function to learn the 4-class n value, learns the histogram probabilities with the Sigmoid Cross Entropy loss function and expresses the total loss function as the weighted sum of these two loss values.

26. System (1) according to any one of the preceding claims, wherein it includes a control unit (6) which takes an image coming from a camera (3) different from the training data set, uses an intensity mapping function (IMF) which does not require registration and is based only on the distribution of pixel values, obtains the IMF by using an image from the training data set for the image coming from a different satellite (DigitalGlobe) and brings the image to the same color space as the training data, thus ensuring that the image is of the same "standard" as the images used in the training phase.

27. System (1) according to any one of the preceding claims, wherein it includes a control unit (6) which also works on the direct prediction of the path loss distribution (histogram) of the region as an alternative to the log-normal shadowing model, uses VGG-16 and ResNet-50 architectures for histogram prediction, uses the cross-entropy loss function, L = -Σi hi log(pi), for training the deep architecture, in which hi represents the actual histogram values and pi represents the deep network output, and examines the total squared error (TSE) value between the predicted and real histograms in the evaluation of the performance of the trained architectures.

Description:

A SYSTEM PROVIDING PREDICTION OF COMMUNICATION CHANNEL PARAMETERS

Technical Field

This invention is related to a system that provides fast and precise prediction of communication channel parameters through deep learning using aerial or satellite images.

Prior Art

In cellular network systems, users communicate with each other through base stations. While designing cellular networks, the most important parameters determining the coverage area and service quality are the locations, heights and signal strengths of the base stations to be installed. In addition, in order not to interrupt the communication of mobile users, the handover between signals from different base stations should be well designed and the interference between the base stations should be kept at acceptable levels. The ever-increasing number of users, the demand for higher data rates and the introduction of wireless sensor networks create the need for more base stations.

While planning the base station locations, knowing the wireless communication channel model of the region is important. It is known that the channel model is affected by the ground topography and settlement characteristics (such as building heights and density). The optimum method to determine these characteristics is field measurement, in which measurements are taken by moving receivers and transmitters in the desired area. The measurement process is very costly and time consuming.

Another effective method to determine the channel model is to create a three-dimensional model of the region and to run computer simulations.

In the simulation ("ray tracing") process, first of all, a three-dimensional model of the region is created. Aerial stereo images or Lidar data are among the sources that can be used to create a three-dimensional model. The channel model of the region is created by inputting the created three-dimensional model into the simulation software. The properties of the buildings should also be added. The process of creating a three-dimensional model is costly and time consuming.

The most important wireless communication channel parameters such as path loss, delay spread, Doppler spread and angular spreading vary according to the ground topography and settlement characteristics. In the literature, there are various studies on how these channel parameters can change according to the ground topography and settlement characteristics. However, there is no study to directly predict these parameters from aerial images.

As a result, it was required to make an improvement in the relevant technical field due to the above-mentioned problems and the insufficiency of the existing solutions.

The Aim of the Invention

The invention is inspired by the current state of the art and aims to solve the problems mentioned above.

The aim of this invention is to provide the prediction of the communication channel parameters from the aerial or satellite images by using deep learning.

Another object of the present invention is to provide a fast and precise prediction of communication channel parameters.

The inventive system can be used by wireless service providers and by companies and institutions with their own communication networks (Armed Forces, Police, municipalities, mining and forestry companies, university campuses).

By means of the inventive system, the base station planning (base station locations, heights, signal strength) can be optimized. Possible future problems (such as communication interruption, the need to add a new base station) can be prevented. In cases of temporary and emergency base station installations (such as natural disaster), optimal positioning can be performed quickly.

By means of the inventive system, the communication channel model of the desired region will be predicted using only images. For this, deep learning networks are used. The relationship between the image of a region and the channel model parameters is learned via deep learning networks. This learning phase is carried out using regions whose channel parameters are known and for which aerial (or satellite) images are available. After the deep learning networks are created and trained, the channel parameters are predicted by inputting the image of any region.

The structural and characteristic features and all the advantages of the invention will be understood more clearly by means of the figure given below and the detailed description written by referring to this figure, and therefore the evaluation should be made by taking this figure and detailed description into consideration.

Figure to Help Understand the Invention

Figure 1 is the schematic view of the system of the invention.

Description of the Part References

1. System

2. UAV (Unmanned Aerial Vehicle - Drone)

3. Camera

4. Transmitter

41. Battery

42. Voltage Converter

43. Power Amplifier

44. USRP E310 (Universal Software Radio Peripheral)

5. Receiver

6. Control unit

Detailed Description of the Invention

In this detailed description, the preferred embodiments of the system (1) of the invention are explained only for a better understanding of the subject matter. This invention is related to a system (1) that provides fast and precise prediction of communication channel parameters through deep learning using aerial or satellite images.

The system (1) of the invention can be used in the planning of base stations in the telecommunication sector. The desired communication channel parameters of the desired region are obtained by inserting images taken from satellites or aerial vehicles into the deep learning network. Thus, the location, height, density and power of the base stations can be decided.

The system (1) of the invention which is presented in schematic representation in Figure 1 preferably comprises;

❖ at least one UAV (2) that flies by remote control and whose GPS coordinates can be recorded instantly,

❖ at least one camera (3) placed on the UAV (2), which can be controlled with a remote control and enables the images of the region to be recorded,

❖ at least one transmitter (4) that enables the transmission of the images taken through the camera (3),

❖ at least one receiver (5), located on the ground, that receives the signals transmitted from the transmitter (4),

❖ at least one control unit (6)

o that instantly records the GPS coordinates of the UAV (2),

o that provides the power required for the operation of the receiver (5) and receives the images obtained by the receiver (5),

o that makes the channel parameter prediction by inputting the images directly into the deep learning network,

o that uses, in the prediction process of the channel parameter, regression, in which the deep learning network is trained to give the exact value of the desired channel parameter, or the parameter classification method, in which the channel parameter is divided into classes according to certain intervals, or the model classification method, in which one of the predetermined model classes is selected for the channel parameter.

The control unit (6) predicts the channel parameter by means of regression, parameter classification or model classification. In regression, the deep learning network is trained by the control unit (6) in order to give the exact value of the desired channel parameter.

In the parameter classification, the channel parameter is divided into classes according to certain intervals by the control unit (6) and the deep learning network is trained to give the correct class. For example, if the relevant channel parameter is x, the x parameter can be classified as in Table 1 below.

Table 1 - Classification of channel parameters according to certain value ranges (t1, t2, t3, ...)

What is expected from the deep learning network is the determination of the correct parameter class.

In the model classification, one of the predetermined model classes is selected for the channel parameter. For example, a classification can be made as in Table 2.

Table 2 - The selection of channel model class

The control unit (6) trains the deep learning network. For this, first of all, there is a need for a dataset including the image and the channel parameter value (or class) corresponding to the image. This dataset is obtained by the control unit (6) by various methods. For example, the control unit (6) can use two-dimensional satellite maps and 3-dimensional models corresponding to these regions. The control unit (6) is able to obtain channel parameters by making simulations with the "ray tracing" approach on the 3-dimensional models. Secondly, the control unit (6) selects a deep learning architecture and trains the selected architecture using the data set. The architecture consists of parts used in deep learning networks such as filters, fully connected artificial neural networks and nonlinear units.

In the system (1) of the invention, measurement equipment is firstly installed. The measurement equipment consists of the transmitter (4) placed on the UAV (2), the camera (3) located on the UAV (2) and recording the images and the receiver (5) on the ground.

The software-based transmitter (4) placed on the UAV (2) comprises at least one battery (41) providing electrical energy, at least one voltage converter (42) that provides the appropriate operating voltage by converting the electrical energy obtained from the battery (41), at least one power amplifier (43) that provides amplification of output signal power and at least one USRP E310 (44), which is a software-based radio interface and provides data communication in a certain frequency range. The USRP E310 (44) can send signals between 70 MHz and 6 GHz with a bandwidth of up to 56 MHz and is suitable for 2x2 MIMO communication. The USRP E310 (44) performs 10 MS/s sampling and has 10 dBm output power. By means of the power amplifier (43), the output power of the transmitter (4) is increased to 30 dBm and thus the communication range is increased. The power amplifier (43) and the USRP E310 (44) operate with 15 V. The battery (41) has an 11.1 V output. The 11.1 V obtained from the battery (41) is converted to 15 V by means of the voltage converter (42). Thus, the electrical energy required for the operation of the power amplifier (43) and the USRP E310 (44) is obtained. There is a dipole antenna at the output of the USRP E310 (44). The GNU Radio system is used to program the USRP E310 (44). The UAV (2) can carry a payload of up to 10 kg. The GPS coordinates of the UAV (2) are recorded instantly by the control unit (6).
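The following Python sketch, given for illustration only, shows a minimal GNU Radio flowgraph of the kind that could drive such a USRP-based transmitter. The device address, sounding waveform and gain value are assumptions and not taken from the description above.

# Minimal GNU Radio flowgraph sketch for a USRP-based transmitter such as the
# USRP E310 (44). Block names follow the standard gr-uhd / gr-analog Python
# API; waveform, device address and gain are illustrative assumptions.
from gnuradio import gr, analog, uhd

class SoundingTransmitter(gr.top_block):
    def __init__(self, center_freq=900e6, samp_rate=10e6):
        gr.top_block.__init__(self, "Channel sounding transmitter")
        # Simple complex sinusoid as a placeholder sounding signal (the actual
        # sample signal used by the system is not specified here).
        source = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 100e3, 0.5)
        sink = uhd.usrp_sink(
            "",  # device address string; empty selects the first USRP found
            uhd.stream_args(cpu_format="fc32", channels=[0]),
            "",
        )
        sink.set_samp_rate(samp_rate)
        sink.set_center_freq(center_freq, 0)
        sink.set_gain(10, 0)  # gain in dB, illustrative value
        self.connect(source, sink)

if __name__ == "__main__":
    tb = SoundingTransmitter()
    tb.start()
    input("Transmitting; press Enter to stop...")
    tb.stop()
    tb.wait()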

USRP B210 is used in the receiver (5). USRP B210 receives power directly from the control unit (6) and transfers the data to the control unit (6). The control unit (6) records the signals sent from USRP B210 via GNU Radio included therein. Then, the control unit (6) corrects the signals against the frequency shift, synchronizes these signals and finds the channel impulse response by correlating with the previously recorded sample signal by means of its software with technical and scientific calculations including numerical calculation, graphical data representation and programming.
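The following numpy sketch illustrates the receiver-side processing described above, namely correlating the recorded signal with the previously recorded sample signal to obtain a channel impulse response estimate. The frequency-offset correction is shown as a simple derotation and the tap-window length is an illustrative assumption.

# Sketch of channel impulse response estimation by cross-correlation with a
# known sample (sounding) signal, after frequency-offset correction.
import numpy as np

def estimate_cir(received, reference, samp_rate, freq_offset_hz=0.0):
    """Return a channel impulse response estimate by cross-correlation."""
    n = np.arange(len(received))
    # Correct the carrier frequency offset (simple derotation).
    corrected = received * np.exp(-2j * np.pi * freq_offset_hz * n / samp_rate)
    # Cross-correlate with the known transmitted sample signal.
    corr = np.correlate(corrected, reference, mode="full")
    # Synchronize: take the strongest peak as the first arriving path.
    peak = np.argmax(np.abs(corr))
    cir = corr[peak:peak + 64]  # 64-tap window, illustrative length
    return cir / np.abs(corr[peak])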

The control unit (6) correlates the image data with the channel parameters. Due to its higher performance than traditional machine learning methods in many applications, the diversity of deep learning architectures and the widespread use of open source platforms/codes in this field, the control unit (6) included in the system (1) of the invention performs deep learning-based operations.

In deep learning network-based classifiers, attributes can be extracted from scratch in a problem-specific way, or attributes obtained from ready-made networks such as VGG, ResNet and MobileNet can be used. These are networks obtained by training on millions of images using datasets such as ImageNet, and they can produce attributes at various levels (from low-level attributes such as edges to object-level attributes). These ready-made networks are applied to the desired problem by adding new network layers at the end and, if desired, retraining according to the problem with "transfer" training.

An important factor in achieving success with deep learning networks is the presence of a large training set. The control unit (6) uses the PlaceMaker add-in running on Google SketchUp to obtain a large data set. This add-in offers 3-dimensional (3D) models of various cities. In addition to these 3D models, the control unit (6) can also obtain 2-dimensional (2D) satellite images of the same regions. The control unit (6) obtains 2D images of 1000 regions of 1.8 km x 1.8 km in size and 3D models corresponding to these images by using this add-in.

Then, the control unit (6) simulates the 3D models with “ray tracing” for the desired communication scenario and obtains the channel parameters by means of its software. As the first scenario, the transmitter (4), operating at a frequency of 900 MHz, was positioned at an altitude of 300 meters in the center of the region. The receivers (5) were placed 1.5 meters above the ground and regularly spaced 20 meters apart.

The control unit (6) applies the log-normal shadowing model as a channel model to the power losses between the transmitter (4) and the receiver (5) obtained as a result of the simulation.

This model is expressed as PL(d) = PL(d0) + 10 n log10(d/d0) + Xs. In this model, PL(d) (path loss) indicates the power loss at the distance d from the receiver (5), PL(d0) indicates the power loss at the reference distance d0 from the receiver (5), n indicates the power loss parameter, and Xs indicates the random variable expressing the deviation from the model with standard deviation s. The n and s values of a region are very important in terms of the channel characteristics of that region and the communication system to be used. The control unit (6) attempts to predict these two channel parameters at the first stage. For this purpose, the control unit (6) fits the log-normal shadowing model to the power losses obtained by means of its software and obtains the n and s values of the region for each image. Therefore, the control unit (6) obtains 1000 images as a data set, together with the n and s values for each image region.
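The following numpy sketch illustrates one way the shadowing parameters n and s could be fitted to the simulated power losses of a region. The variable names, the reference-distance handling and the use of a least-squares polynomial fit are illustrative assumptions.

# Fit the log-normal shadowing model PL(d) = PL(d0) + 10*n*log10(d/d0) + Xs:
# n is the slope of a least-squares line in 10*log10(d/d0), and s is the
# standard deviation of the residuals.
import numpy as np

def fit_log_normal_shadowing(distances_m, path_loss_db, d0_m=1.0, pl_d0_db=None):
    d = np.asarray(distances_m, dtype=float)
    pl = np.asarray(path_loss_db, dtype=float)
    x = 10.0 * np.log10(d / d0_m)
    if pl_d0_db is None:
        # Estimate PL(d0) together with n via a degree-1 polynomial fit.
        n, pl_d0_db = np.polyfit(x, pl, 1)
    else:
        # PL(d0) known: fit only the slope n by least squares.
        n = np.sum((pl - pl_d0_db) * x) / np.sum(x * x)
    residuals = pl - (pl_d0_db + n * x)
    s = residuals.std()
    return n, s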

In the next stage, the control unit (6) works on various classifier architectures for the data set it obtains. The control unit (6) focuses on three different approaches in this study.

In the first of these approaches, the control unit (6) considers (n, s) prediction as a regression problem and trains deep architectures that predict these values.

In the second of these approaches, the control unit (6) divides the values of (n, s) into a certain number of sub-intervals by quantization, formulates this as a classification problem, predicts in which sub-range the (n, s) parameters of each image are located, and tests alternative deep network architectures for this.

In the third of these approaches, the control unit (6) obtains a 2-dimensional path loss map by quantizing the path loss values obtained by simulations for different receivers (5) placed regularly on the earth. The control unit (6) provides the prediction of the path loss for each receiver (5) by training the deep architecture to learn this map. In addition, the control unit (6) tries to learn the path loss distribution together with the (n, s) parameters by obtaining the histogram of the path losses in the region.

The control unit (6) uses deep regression networks for (n, s) prediction. There are many successful deep network architectures for image classification in the literature. For regression problems, it is observed that, instead of developing different architectures, in general the existing classification networks are adapted. For this adaptation it is sufficient to change the last layers of the network. The control unit (6) implements the VGG-16 architecture in its operation.

The control unit (6) changes the last fully connected (fc) layer of the VGG-16 architecture to give a single output for the n and s values. The control unit (6) trains separate architectures for these two parameters. The control unit (6) uses 70% of the data set for training purposes and allocates the remaining 30% for testing purposes. The control unit (6) uses the squared error of the predicted parameter as a loss function in the training of the deep architecture.
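The following PyTorch sketch illustrates the adaptation described above: the last fully connected layer of VGG-16 is replaced by a single-output layer and the squared error is used as the loss. The optimizer, learning rate and placeholder data are illustrative assumptions; separate networks would be trained in the same way for n and for s.

# VGG-16 adapted for single-parameter regression with a squared-error loss.
import torch
import torch.nn as nn
from torchvision import models

def build_parameter_regressor():
    net = models.vgg16()                      # optionally start from pretrained weights
    net.classifier[6] = nn.Linear(4096, 1)    # single output for n (or s)
    return net

net = build_parameter_regressor()
criterion = nn.MSELoss()                      # squared error of the predicted parameter
optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)

# One illustrative training step on a batch of region images and target values.
images = torch.randn(8, 3, 224, 224)          # placeholder batch of region images
targets = torch.randn(8, 1)                   # placeholder n (or s) values
optimizer.zero_grad()
loss = criterion(net(images), targets)
loss.backward()
optimizer.step()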

In Table 3, the mean square error (MSE) obtained for the test set is presented in proportion to the variance (var) of both parameters. As can be seen, although the squared error for the n value is considerably smaller, the prediction performance for the s parameter is slightly lower.

Table 3 - (n, s) prediction performance for VGG-16

The control unit (6) uses deep networks for (n, s) classification. The control unit (6) converts the problem into a classification problem by giving the ranges of (n, s) values. Although this is not as precise as regression, prediction of the ranges is sufficient to determine the valid communication pattern in a particular region. The control unit (6) quantizes the (n, s) values into various sub-intervals and assigns a class label to each interval; I(x) can be considered as the class label for the x parameter. With this labeling, the control unit (6) also tests the ResNet-50 architecture together with VGG-16 for channel parameter prediction, which is transformed into a classification problem. The control unit (6) arranges the deep network architecture to output the required number of classes at its last layer. The control unit (6) trains and tests separate architectures for n and s.

In Table 4, the total classification accuracy obtained for the test set and the average of the class-based accuracies are presented. While the accuracy of up to 90% for the n value is quite successful, lower results were obtained for s. Error matrices for ResNet-50 are presented in Tables 5 and 6.
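The following Python sketch illustrates the labeling and network adaptation described above. The sub-interval boundaries used here are hypothetical placeholders, since the actual thresholds (t1, t2, ...) are not reproduced in this text.

# Quantize a parameter value into a class label I(x) and resize a ResNet-50
# head to output the corresponding number of classes.
import numpy as np
import torch.nn as nn
from torchvision import models

n_edges = np.array([2.0, 2.5, 3.0])        # hypothetical sub-interval boundaries for n (4 classes)
s_edges = np.array([2.0, 4.0, 6.0, 8.0])   # hypothetical sub-interval boundaries for s (5 classes)

def class_label(x, edges):
    """I(x): index of the sub-interval that the parameter value x falls into."""
    return int(np.digitize(x, edges))

num_n_classes = len(n_edges) + 1
resnet = models.resnet50()
# Last layer outputs the required number of classes for the n parameter.
resnet.fc = nn.Linear(resnet.fc.in_features, num_n_classes)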

Table 4 - (n, s) classification performance

Table 5 - Error matrix for n in ResNet-50

Table 6 - Error matrix for s in ResNet-50

As an alternative to the VGG-16 and ResNet-50 based architectures, the control unit (6) tests another architecture that has been very successful in object detection/recognition. The control unit (6) uses MobileNet for this purpose. In MobileNet, attributes come from convolutional networks at different levels. The error matrices obtained with this architecture are presented in Tables 7 and 8. The control unit (6) achieves an accuracy rate of 91% for the parameter n and 79% for the parameter s with this architecture.

Table 7 - Error matrix for n in MobileNet

Table 8 - Error matrix for s in MobileNet

The control unit (6) uses deep networks for the path loss map. In the approaches for predicting the (n, s) parameters presented up to this stage of the specification, the control unit (6) tries to learn only two parameter values over the whole image. Since the control unit (6) has path loss values for all the receivers (5) on a regular grid in its image, it can learn the path loss values point by point (everywhere on the map) using this rich information. Accordingly, the control unit (6) creates a 2-dimensional path loss map for each region by quantizing the path loss (PL) values and tries to train a deep network architecture with these maps.

The control unit (6) determines the range in which the PL values lie in all images. The control unit (6) divides the PL values into 25 sub-intervals in multiples of 10.

The control unit (6) obtains a path loss map by placing the values on a 2-dimensional matrix according to the location of each receiver (5). The control unit (6) considers these values as the class label of the relevant receiver (5) and considers the problem desired to be solved as the classification of each receiver (5). As a result, the control unit (6) transforms the problem into the problem of classifying the pixels corresponding to the receiver (5) locations in the image (receiver (5) locations corresponding to building interiors are excluded from the classification and labeled separately). In the literature, there are very successful deep network architectures proposed for semantic segmentation of images by the classification of pixels. One of the leading and most successful of these is the FCN-8s (Fully Convolutional Network) architecture.
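The following numpy sketch illustrates the construction of such a quantized path loss map. The grid size and the label used for excluded indoor receiver (5) locations are illustrative assumptions.

# Build a 2-dimensional path loss class map from the simulated PL values on a
# regular receiver grid: one class per 10 dB sub-interval, 25 classes in total.
import numpy as np

INDOOR_LABEL = -1  # receivers inside buildings are excluded from classification

def build_path_loss_map(pl_values_db, indoor_mask, grid_shape):
    pl = np.asarray(pl_values_db, dtype=float).reshape(grid_shape)
    # Quantize into 25 classes, one per 10 dB sub-interval.
    classes = np.clip((pl // 10).astype(int), 0, 24)
    classes[np.asarray(indoor_mask).reshape(grid_shape)] = INDOOR_LABEL
    return classes

# Example: a 90 x 90 receiver grid (20 m spacing over a 1.8 km x 1.8 km region).
pl_map = build_path_loss_map(np.random.uniform(40, 200, 90 * 90),
                             np.zeros(90 * 90, dtype=bool), (90, 90))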

The control unit (6) adapts the relevant layers of the FCN-8s architecture to result in 25 classes. The input image and output map are interpolated by the control unit (6) such that they have equal dimensions.
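The following PyTorch sketch is a simplified stand-in for the adaptation described above: a VGG-16 feature extractor (FCN-8s is itself VGG-16 based, though its skip connections are omitted here) is given a 25-class scoring layer and its output map is interpolated so that the input image and output map have equal dimensions.

# Simplified fully convolutional segmenter producing a 25-class path loss map.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class PathLossSegmenter(nn.Module):
    def __init__(self, num_classes=25):
        super().__init__()
        self.backbone = models.vgg16().features                  # convolutional layers of VGG-16
        self.score = nn.Conv2d(512, num_classes, kernel_size=1)  # 25-class scoring layer

    def forward(self, x):
        feats = self.backbone(x)      # coarse feature map
        logits = self.score(feats)    # per-location class scores
        # Interpolate the output map to the input image dimensions.
        return F.interpolate(logits, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)

model = PathLossSegmenter()
out = model(torch.randn(1, 3, 512, 512))  # (1, 25, 512, 512) path loss class scores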

Due to the difficulty of point prediction of the PL values at the receiver (5) level, the control unit (6) provides prediction of more summary information at the image level regarding the path loss. For this purpose, the control unit (6) generates a histogram of PL values for each image and tries to predict these histograms. The control unit (6) creates an 8-bin histogram in the range [10,100] for the PL values. The control unit (6) tries to learn the histogram along with the n parameter by adding an additional layer to the deep architectures it uses. This approach is called multi-task learning in the literature. In multi-task learning, it is aimed to increase the learning performance for each task by learning related tasks together.

While the control unit (6) uses the SoftMax loss function to learn the 4-class n value, it learns the histogram probabilities with the Sigmoid Cross Entropy loss function. The control unit (6) expresses the total loss function as the weighted sum of these two values.
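The following PyTorch sketch illustrates the combined multi-task loss described above. The weighting factors and placeholder shapes are illustrative assumptions.

# Weighted sum of a softmax cross-entropy term (4-class n) and a sigmoid
# (binary) cross-entropy term (histogram bin probabilities).
import torch
import torch.nn as nn

softmax_loss = nn.CrossEntropyLoss()   # SoftMax loss for the 4-class n value
sigmoid_bce = nn.BCEWithLogitsLoss()   # Sigmoid Cross Entropy for histogram bins

def total_loss(n_logits, n_target, hist_logits, hist_target, alpha=1.0, beta=1.0):
    return alpha * softmax_loss(n_logits, n_target) + beta * sigmoid_bce(hist_logits, hist_target)

# Illustrative shapes: batch of 8 images, 4 n-classes, 8 histogram bins.
loss = total_loss(torch.randn(8, 4), torch.randint(0, 4, (8,)),
                  torch.randn(8, 8), torch.rand(8, 8))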

The results obtained with VGG-16 and ResNet are presented in Tables 9 and 10, comparing the multi-task architecture with the normal architecture. As can be seen, learning the histogram information provides an increase of 1-5% in the n-value classification performance.

Table 9 - n classification performance for multi-task architecture

Table 10 - classification performance for multi-task architecture

The point to be considered while inputting a new image into the classifier is that the image should be of the same "standard" as the images used in the training phase. One of the most important problems arising from the use of a different camera (3) is that the color tones are different due to a different sensor and light balance. This problem can be solved with an intensity mapping function (IMF) method that does not require registration and is based only on the distribution of pixel values. Accordingly, an image coming from a camera (3) different from the training data set is brought by the control unit (6) into the same color space as the training data set by means of the IMF. The control unit (6) obtains the IMF by using an image from the training dataset for the image coming from a different satellite (DigitalGlobe) and brings the image to the same color space as the training data.
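The following numpy sketch illustrates one common way such an intensity mapping function can be realized, namely CDF-based histogram matching per color channel; the exact estimation procedure is not fixed by the description above, so this is an illustrative assumption.

# Map the pixel-value distribution of a new image onto the distribution of a
# training image; no spatial registration is required.
import numpy as np

def match_channel(source, reference):
    """Map the pixel-value distribution of `source` onto that of `reference`."""
    src_values, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # The IMF maps each source intensity to the reference intensity having the
    # same cumulative probability.
    mapped = np.interp(src_cdf, ref_cdf, ref_values)
    return np.interp(source.ravel(), src_values, mapped).reshape(source.shape)

def apply_imf(new_image, training_image):
    """Bring a new satellite image into the color space of the training data."""
    return np.stack([match_channel(new_image[..., c], training_image[..., c])
                     for c in range(new_image.shape[-1])], axis=-1)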

The control unit (6) also works on the direct prediction of the path loss distribution (histogram) of the region as an alternative to the log-normal shadowing model. In the path loss map, dark blue pixels are regions that correspond to building interiors. Path loss increases in the region where high buildings are located. Low path loss occurs in the area with low buildings.

The control unit (6) uses VGG-16 and ResNet-50 architectures for histogram prediction. The control unit (6) uses the cross-entropy loss function for training the deep architecture.

The cross-entropy loss is expressed as L = -Σi hi log(pi). Here, hi represents the actual histogram values and pi represents the deep network output. The control unit (6) examines the total squared error (TSE) between the predicted and real histograms in the evaluation of the performance of the trained architectures.
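The following numpy sketch illustrates the cross-entropy training loss and the total squared error used for evaluation. The small epsilon is an implementation detail added for numerical stability and is not part of the description above.

# Cross-entropy between the actual histogram h and the network output p, and
# the total squared error (TSE) used to evaluate the trained architectures.
import numpy as np

def cross_entropy(h_true, p_pred, eps=1e-12):
    h_true = np.asarray(h_true, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    return -np.sum(h_true * np.log(p_pred + eps))

def total_squared_error(h_true, p_pred):
    return np.sum((np.asarray(h_true) - np.asarray(p_pred)) ** 2)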

In Table 11, the mean total squared error (TSE) obtained for the test set is presented.

Table 11 - Histogram prediction performance for VGG-16 and ResNet-50