

Title:
DEEP LEARNING BASED UPLINK-DOWNLINK CHANNEL COVARIANCE MATRIX MAPPING IN FDD MASSIVE MIMO SYSTEM
Document Type and Number:
WIPO Patent Application WO/2023/199288
Kind Code:
A1
Abstract:
Estimating channel characteristics for a channel between a first device having a plurality of antenna elements and a second device. An example method comprises estimating (720) a first channel covariance matrix based on training symbols transmitted in a first direction, from the second device towards the first device, in a first frequency band, and transforming (730) the first channel covariance matrix to a first virtual domain covariance matrix, using a unitary matrix that is a function of the antenna geometry of the first device. The method further comprises mapping (740) the first virtual domain covariance matrix to a second virtual domain covariance matrix, using a machine-learning-derived mapping function. The method still further comprises transforming (750) the second virtual domain covariance matrix to a second channel covariance matrix representative of the radio channel in the second direction, in a second frequency band.

Inventors:
MIRZAEI JAVAD (CA)
ADVE RAVIRAJ (CA)
SHAHBAZPANAHI SHAHRAM (CA)
EL-KEYI AMR (CA)
SEDIQ AKRAM BIN (CA)
ABOU-ZEID HATEM (CA)
Application Number:
PCT/IB2023/053853
Publication Date:
October 19, 2023
Filing Date:
April 14, 2023
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
H04L25/02
Other References:
LINFU ZOU ET AL: "Deep Learning Based Downlink Channel Covariance Estimation for FDD Massive MIMO Systems", IEEE COMMUNICATIONS LETTERS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 25, no. 7, 26 April 2021 (2021-04-26), pages 2275 - 2279, XP011865343, ISSN: 1089-7798, [retrieved on 20210709], DOI: 10.1109/LCOMM.2021.3075725
BANERJEE BITAN ET AL: "Towards FDD Massive MIMO: Downlink Channel Covariance Matrix Estimation Using Conditional Generative Adversarial Networks", 2021 IEEE 32ND ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS (PIMRC), IEEE, 13 September 2021 (2021-09-13), pages 940 - 946, XP034004939, DOI: 10.1109/PIMRC50174.2021.9569379
KHALILSARAI MAHDI BARZEGAR ET AL: "Uplink-Downlink Channel Covariance Transformations and Precoding Design for FDD Massive MIMO", 2019 53RD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, IEEE, 3 November 2019 (2019-11-03), pages 199 - 206, XP033750905, DOI: 10.1109/IEEECONF44664.2019.9049041
E. BJÖRNSON, E. G. LARSSON, T. L. MARZETTA: "Massive MIMO: ten myths and one critical question", IEEE COMMUN. MAG., vol. 54, no. 2, 2016, pages 114 - 123
F. RUSEK, D. PERSSON, B. K. LAU, E. G. LARSSON, T. L. MARZETTA, O. EDFORS, F. TUFVESSON: "Scaling up MIMO: Opportunities and challenges with very large arrays", IEEE SIGNAL PROCESS. MAG., vol. 30, no. 1, January 2013 (2013-01-01), pages 40 - 60
L. MIRETTI, R. L. CAVALCANTE, S. STANCZAK: "FDD massive MIMO channel spatial covariance conversion using projection methods", IEEE INT. CONF. ACOUSTICS, SPEECH AND SIGNAL PROCESS. (ICASSP), April 2018 (2018-04-01), pages 3609 - 3613
M. JORDAN, A. DIMOFTE, X. GONG, G. ASCHEID: "Conversion from uplink to downlink spatio-temporal correlation with cubic splines", IEEE 69TH VEHICULAR TECH. CONF., April 2009 (2009-04-01), pages 1 - 5, XP031474409
S. HAGHIGHATSHOAR, M. B. KHALILSARAI, G. CAIRE: "Multi-band covariance interpolation with applications in massive MIMO", IEEE INT. SYMP. INF. THEORY (ISIT), June 2018 (2018-06-01), pages 386 - 390
L. MIRETTI, R. L. CAVALCANTE, S. STANCZAK: "Downlink channel spatial covariance estimation in realistic FDD massive MIMO systems", IEEE GLOBAL CONF. SIGNAL AND INF. PROCESS., November 2018 (2018-11-01), pages 161 - 165
P. DONG, H. ZHANG, G. Y. LI: "Machine learning prediction based CSI acquisition for FDD massive MIMO downlink", PROC. IEEE GLOBAL COMMUN. CONF. (GLOBECOM), 2018, pages 1 - 6, XP033519491, DOI: 10.1109/GLOCOM.2018.8647328
M. S. SAFARI, V. POURAHMADI, S. SODAGARI: "Deep UL2DL: Data-driven channel knowledge transfer from uplink to downlink", IEEE OPEN J. VEH. TECH., vol. 1, 2020, pages 29 - 44, XP011767977, DOI: 10.1109/OJVT.2019.2962631
"Deep learning for TDD and FDD massive MIMO: Mapping channels in space and frequency", IEEE ASILOMAR CONF. SIGNAL, SYST., COMPUT., 2019, pages 1465 - 1470
Y. YANG, F. GAO, G. Y. LI, M. JIAN: "Deep learning-based downlink channel prediction for FDD massive MIMO system", IEEE COMMUN. LETT., vol. 23, no. 11, 2019, pages 1994 - 1998
Y. YANG, F. GAO, Z. ZHONG, B. AI, A. ALKHATEEB: "Deep transfer learning based downlink channel prediction for FDD massive MIMO systems", IEEE TRANS. COMMUN., 2020, pages 1 - 1
M. BARZEGAR KHALILSARAI, S. HAGHIGHATSHOAR, X. YI, G. CAIRE: "FDD massive MIMO via UL/DL channel covariance extrapolation and active channel sparsification", IEEE TRANS. WIRELESS COMMUN., vol. 18, no. 1, 2019, pages 121 - 135, XP011696569, DOI: 10.1109/TWC.2018.2877684
B. BANERJEE, R. C. ELLIOTT, W. A. KRZYMIEN, H. FARMANBAR: "Towards FDD massive MIMO: Downlink channel covariance matrix estimation using conditional generative adversarial networks", IEEE ANNUAL INT. SYMP. PERSONAL, INDOOR AND MOBILE RADIO COMMUN. (PIMRC), September 2021 (2021-09-01), pages 940 - 946, XP034004939, DOI: 10.1109/PIMRC50174.2021.9569379
A. DECURNINGE, M. GUILLAUD, D. T. M. SLOCK: "Channel covariance estimation in massive MIMO frequency division duplex systems", IEEE GLOBECOM WORKSHOPS (GC WKSHPS), December 2015 (2015-12-01), pages 1 - 6
3GPP: "5G; study on channel model for frequencies from 0.5 to 100 GHz", 3GPP TR 38.901, 14 May 2017 (2017-05-14)
O. RONNEBERGER, P. FISCHER, T. BROX: "U-net: Convolutional networks for biomedical image segmentation", CORR, 2015
P. ISOLA, J. ZHU, T. ZHOU, A. A. EFROS: "Image-to-image translation with conditional adversarial networks", CORR, 2016
D. P. KINGMA, J. BA: "Adam: A method for stochastic optimization", INT. CONF. LEARNING REPRESENTATIONS (ICLR), May 2015 (2015-05-01)
Attorney, Agent or Firm:
HOMILLER, Daniel P. (US)
Claims:
CLAIMS

What is claimed is:

1. A method for estimating channel characteristics for a channel between a first device having a plurality of antenna elements arranged according to an antenna geometry and a second device having at least one antenna element, the method comprising:
estimating (720) a first channel covariance matrix based on training symbols transmitted in a first direction, from the second device towards the first device, in a first frequency band;
transforming (730) the first channel covariance matrix to a first virtual domain covariance matrix, using a unitary matrix that is a function of the antenna geometry;
mapping (740) the first virtual domain covariance matrix to a second virtual domain covariance matrix, using a mapping function that estimates a mapping of virtual domain channel covariance in the first direction to virtual domain channel covariance in a second direction, from the first device towards the second device;
transforming (750) the second virtual domain covariance matrix to a second channel covariance matrix representative of the radio channel in the second direction, in a second frequency band.

2. The method of claim 1, wherein the mapping function is based on a deep neural network.

3. The method of claim 1 or 2, wherein the mapping function is based on the generator of a generative adversarial network, GAN.

4. The method of claim 3, wherein the mapping function is based on the U-net architecture.

5. The method of any one of claims 1-4, further comprising transmitting (760) a signal to the second device from the first device, in the second frequency band, using antenna weights determined from the second channel covariance matrix.

6. The method of any one of claims 1-5, wherein the method comprises, prior to said transforming steps and mapping step:
training (710) the mapping function using a deep generative model, multiple estimates of the channel covariance in the first direction, and multiple estimates of the channel covariance in the second direction.

7. The method of claim 6, wherein said training (710) comprises using a generative adversarial network, GAN, the mapping function corresponding to the generator network of the GAN.

8. An apparatus for estimating channel characteristics for a channel between a first device having a plurality of antenna elements arranged according to an antenna geometry and a second device having at least one antenna element, the apparatus comprising processing circuitry (1002) configured to:
estimate a first channel covariance matrix based on training symbols transmitted in a first direction, from the second device towards the first device, in a first frequency band;
transform the first channel covariance matrix to a first virtual domain covariance matrix, using a unitary matrix that is a function of the antenna geometry;
map the first virtual domain covariance matrix to a second virtual domain covariance matrix, using a mapping function that estimates a mapping of virtual domain channel covariance in the first direction to virtual domain channel covariance in a second direction, from the first device towards the second device;
transform the second virtual domain covariance matrix to a second channel covariance matrix representative of the radio channel in the second direction, in a second frequency band.

9. The apparatus of claim 8, wherein the mapping function is based on a deep neural network.

10. The apparatus of claim 8 or 9, wherein the mapping function is based on the generator of a generative adversarial network, GAN.

11. The apparatus of claim 10, wherein the mapping function is based on the U-net architecture.

12. The apparatus of any one of claims 8-11, wherein the processing circuitry (1002) is further configured to transmit a signal to the second device from the first device, in the second frequency band, via the plurality of antenna elements, using antenna weights determined from the second channel covariance matrix.

13. The apparatus of any one of claims 8-12, wherein the processing circuitry (1002) is further configured to, prior to said transforming and mapping:
train the mapping function using a deep generative model, multiple estimates of the channel covariance in the first direction, and multiple estimates of the channel covariance in the second direction.

14. The apparatus of claim 13, wherein the processing circuitry (1002) is configured to perform the training using a generative adversarial network, GAN, wherein the mapping function corresponds to the generator network of the GAN.

15. The apparatus of claim 13 or 14, wherein the apparatus comprises a base station (1000) comprising first processing circuitry (1002) configured to carry out said estimating, transforming, and mapping operations, and a second node comprising second processing circuitry configured to carry out said training.

16. A computer program product comprising program instructions for execution by processing circuitry, the program instructions being configured to cause the processing circuitry to estimate channel characteristics for a channel between a first device having a plurality of antenna elements arranged according to an antenna geometry and a second device having at least one antenna element by:
estimating a first set of one or more parameters for the channel, based on training symbols transmitted from the second device to the first device;
estimating a second set of one or more parameters for the channel, based on training symbols or reference signals transmitted from the first device to the second device and based on the first set of one or more parameters.

17. A computer-readable medium comprising, stored thereupon, the computer program product of claim 16.

Description:
DEEP LEARNING BASED UPLINK-DOWNLINK CHANNEL COVARIANCE MATRIX MAPPING IN FDD MASSIVE MIMO SYSTEM

TECHNICAL FIELD

The present disclosure is generally related to wireless communications networks and is more particularly related to the use of deep learning for estimating radio channel covariance in such networks.

BACKGROUND

Figure 1 illustrates a simplified wireless communication system, with a user equipment (UE) 102 that communicates with one or multiple access nodes 103, 104, which in turn are connected to a network node 106. The access nodes 103, 104 are part of the radio access network (RAN) 100. The network node 106 may be, for example, part of a core network. For wireless communication systems conforming to the 3rd Generation Partnership Project (3GPP) specifications for the Evolved Packet System (EPS), also referred to as Long Term Evolution (LTE) or 4G, as specified in 3GPP TS 36.300 and related specifications, the access nodes 103, 104 correspond typically to base stations referred to in 3GPP specifications as Evolved NodeBs (eNBs), while the network node 106 corresponds typically to either a Mobility Management Entity (MME) and/or a Serving Gateway (SGW). The eNB is part of the RAN 100, which in this case is the E-UTRAN (Evolved Universal Terrestrial Radio Access Network), while the MME and SGW are both part of the EPC (Evolved Packet Core network). The eNBs are inter-connected via the X2 interface, and connected to the EPC via the S1 interface, more specifically via S1-C to the MME and S1-U to the SGW.

On the other hand, for wireless communication systems pursuant to the 3GPP specifications for the 3GPP 5G System, 5GS (also referred to as New Radio, NR, or 5G), as specified in 3GPP TS 38.300 and related specifications, the access nodes 103, 104 correspond typically to base stations referred to as 5G NodeBs, or gNBs, while the network node 106 corresponds typically to either an Access and Mobility Management Function (AMF) and/or a User Plane Function (UPF). In this example, the gNB is part of the RAN 100, which in this case is the NG-RAN (Next Generation Radio Access Network), while the AMF and UPF are both part of the 5G Core Network (5GC). The gNBs are inter-connected via the Xn interface, and connected to the 5GC via the NG interface, more specifically via NG-C to the AMF and NG-U to the UPF.

Massive multiple-input multiple-output (MIMO) is a key technology in helping to meet the demands of next generation wireless technologies. This technology can be deployed in time-division duplex (TDD) mode, where the uplink and downlink occur in the same frequency band but in different time slots, as well as in frequency-division duplex (FDD) mode, where the uplink and downlink operate simultaneously on different frequency bands. To unlock the full potential of massive MIMO in both FDD and TDD, accurate channel state information (CSI) between the base station (BS) and the user equipment (UE) is required, so that the scheduling BS can take the fullest advantage of opportunities for spatial multiplexing.

This disclosure focuses on the downlink channel covariance matrix estimation problem in realistic massive MIMO systems operating in frequency-division duplexing (FDD) mode. Massive MIMO has already shown great potential in time-division duplexing (TDD) mode. However, the applicability of this technology to FDD systems is of great interest from both the academic and industry perspectives. (See Ref. 1.)
Channel estimation in TDD massive MIMO systems relies on uplink-downlink reciprocity, the notion that the uplink and downlink channels are the same. In contrast to TDD-based systems, in FDD systems, due to the difference between the uplink and downlink frequencies, uplink-downlink reciprocity does not hold. Conventionally, downlink channel estimation is performed by the BS transmitting a sequence of pilot symbols; the user equipment (UE) collects the channel measurements and feeds back the information to the BS for subsequent resource allocation and beamforming. This approach is not efficient in a massive MIMO setting, where the training suffers from a huge delay and a large pilot and feedback overhead.

One of the key challenges in deploying massive MIMO in FDD-based systems is acquiring accurate estimates of downlink channel covariance matrices, for use in downlink beamforming. (See Ref. 2.) In a conventional FDD system, the downlink channel covariance matrix is obtained using uplink pilot transmission or covariance feedback schemes. This approach is practical in earlier (prior to 5G) generations of wireless networks, where only a few antennas are used at the base station (BS), allowing for orthogonal pilots and small feedback overhead. However, in massive MIMO systems, due to the difficulties in obtaining completely orthogonal pilot patterns, as well as due to huge feedback overhead, this approach may not be applicable. Therefore, it is an urgent requirement in FDD massive MIMO systems to reduce the number of pilot symbols needed for downlink channel covariance matrix estimation.

The authors in Ref. 3 proposed inferring the downlink channel covariance matrix from its uplink counterpart using a projection-based approach. In Ref. 4, the authors proposed to convert the uplink spatio-temporal correlation to its downlink counterpart using spline interpolation and two-dimensional phase unwrapping techniques. The authors in Ref. 5 propose to interpolate the downlink channel covariance matrix using its uplink counterpart. The main limitation of Ref. 3 and its related works Ref. 4 and Ref. 5 is that they are based on simple channel models that do not meet the requirements of modern wireless system designs. For example, the algorithms are based on uniform linear arrays (ULAs) and do not generalize to arrays with arbitrary geometries or with non-isotropic antennas. Furthermore, they do not consider the propagation effects of 3D environments and, most importantly, dual-polarized antenna arrays.

Considering a realistic channel model, the authors in Ref. 6 propose to infer the downlink channel covariance matrix from its uplink counterpart using an algorithm based on the joint estimation of the angular power spectra for the vertical (V-APS) and horizontal (H-APS) polarizations. While their proposed algorithm requires joint estimation of the V-APS and H-APS via a convex feasibility problem, it assumes that the uplink and downlink channels share the same V-APS and H-APS.

In a different approach, various forms of deep learning techniques have been considered in the literature, which mainly aim to reduce the feedback overhead in downlink channel and/or covariance matrix estimation. (See Refs. 7-14.) Using deep neural networks, the authors in Ref. 7 model the correlation between the BS antennas. Then, using the pilot observations across a subset of BS antennas, they regress the downlink channel across all antennas.
While this approach is effective in reducing the downlink feedback overhead, the overhead is still significant for large antenna arrays. In Refs. 9-11, a space-frequency technique is developed to find the mapping between the uplink channel and its downlink counterpart. While downlink training is eliminated, a large training data set is required to account for many channel variations. Moreover, if the environment changes, the approach may require re-training the neural network for each user. These challenges have motivated the works in Ref. 12 and Ref. 14 to consider channel covariance matrices, which tend to change much more slowly than channel matrices. Among these works, Ref. 14 is model-dependent, and Ref. 12 requires a huge dataset. To address these shortcomings, the authors in Ref. 13 proposed a conditional generative adversarial framework to find the mapping between the uplink and downlink. Nevertheless, further improvements are needed, with respect to such issues as convergence in training, stability, and generalizability.

SUMMARY

The solutions described herein also involve a conditional generative adversarial framework. However, these solutions differ from those described in Ref. 13, in the sense that these solutions consider a realistic 3D channel model as specified in the 3GPP standards (Ref. 15). Furthermore, the mapping in the presently disclosed solutions is performed in a virtual domain, which represents the channel using a much simpler structure, without loss of information. Such a representation significantly improves the convergence in training, stability, and generalizability of the required mapping function.

The uplink and downlink channel covariance matrices depend on various elements of the propagation environment, including the transmitter/receiver locations as well as the scatterers' positions. The challenge is that these dependencies are difficult to characterize analytically, as they normally involve many physical interactions and are unique to every environmental setup. Motivated by the fact that the physical propagation environment in both uplink and downlink communication is the same, the techniques described herein find the mapping from the uplink channel covariance matrix to its downlink counterpart. Specifically, the environment information is extracted from the uplink channel covariance to a latent domain, and then a mapping from this environment information to the corresponding downlink matrix is found. Learning this mapping can dramatically reduce the training overhead in FDD massive MIMO systems.

To find the uplink-downlink mapping function, machine-learning tools may be exploited, such as U-Net, an architecture based on convolutional neural networks (CNNs). (See Ref. 16.) The approach described herein treats the covariance matrices as 2D images and learns the mapping function between the uplink and downlink covariance matrices. The U-Net or other machine-learning model may be trained adversarially, with the help of a discriminator. The training process is further elaborated upon later in this document. Once trained, the machine-learning model acts as a generator that maps the uplink covariance matrix to the downlink covariance matrix.
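For illustration only, the following Python (PyTorch-style) sketch shows how such a trained generator might be applied at inference time, anticipating details that are described later in this document. The function and variable names (map_ul_to_dl_covariance, generator, A) are placeholders and are not taken from the disclosure; the unitary beam-space matrix A and the trained generator are assumed to be available, and the normalization/de-normalization of the data, described later, is omitted for brevity.

```python
import torch

def map_ul_to_dl_covariance(R_ul: torch.Tensor, A: torch.Tensor,
                            generator: torch.nn.Module) -> torch.Tensor:
    """Hypothetical inference pipeline: UL covariance -> virtual domain -> generator -> DL covariance.

    R_ul: (M, M) complex uplink channel covariance matrix.
    A:    (M, M) unitary beam-space matrix (a function of the BS antenna geometry only).
    generator: trained mapping network (e.g., a U-Net) operating on 2-channel real "images".
    """
    # Transform to the virtual (beam-space) domain: R_v = A^H R A.
    R_v_ul = A.conj().T @ R_ul @ A

    # Treat the complex matrix as a 2-channel image (real and imaginary parts).
    x = torch.stack([R_v_ul.real, R_v_ul.imag]).unsqueeze(0).float()

    with torch.no_grad():
        y = generator(x)                      # predicted DL covariance, virtual domain

    R_v_dl = torch.complex(y[0, 0], y[0, 1]).to(A.dtype)

    # Transform back to the antenna domain with the same frequency-independent unitary matrix.
    return A @ R_v_dl @ A.conj().T
```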
An example method for estimating channel characteristics for a channel between a first device having a plurality of antenna elements arranged according to an antenna geometry and a second device having at least one antenna element comprises estimating a first channel covariance matrix based on training symbols transmitted in a first direction, from the second device towards the first device, in a first frequency band, and transforming the first channel covariance matrix to a first virtual domain covariance matrix, using a unitary matrix that is a function of the antenna geometry. The method further comprises mapping the first virtual domain covariance matrix to a second virtual domain covariance matrix, using a mapping function that estimates a mapping of virtual domain channel covariance in the first direction to virtual domain channel covariance in a second direction, from the first device towards the second device. This mapping function may be trained using a generative adversarial network, GAN, for example. The method still further comprises transforming the second virtual domain covariance matrix to a second channel covariance matrix representative of the radio channel in the second direction, in a second frequency band.

Also described in detail herein are apparatuses configured to carry out all or parts of one or more of the techniques described herein. Of course, the present invention is not limited to the above features and advantages. Indeed, those skilled in the art will recognize additional features and advantages upon reading the following detailed description, and upon viewing the accompanying drawings.

BRIEF DESCRIPTION OF THE FIGURES

Figure 1 illustrates components of an example wireless network.
Figure 2 shows the U-Net architecture.
Figure 3 illustrates the training of the U-Net.
Figure 4 shows the distribution of normalized mean square error (NMSE) versus changes in frequency, for training in the virtual domain.
Figure 5 shows a comparison of the quality of estimation of the downlink principal eigenvector, for various simulations.
Figure 6 shows a comparison of downlink throughput, based on simulations.
Figure 7 illustrates an example method, according to some embodiments.
Figure 8 shows an example communication system.
Figure 9 and Figure 10 show an example UE and network node, respectively.
Figure 11 shows components of a host.
Figure 12 illustrates an example of a virtualization environment.
Figure 13 shows communications between a host, network node, and UE.

DETAILED DESCRIPTION

In the discussion that follows, techniques for channel estimation will be described in the context of downlink channel estimation, in a wireless system employing FDD, i.e., where the base stations transmit to user equipment (UEs) in one frequency band while receiving from the UEs in another. Thus, references to the "downlink" refer to transmissions from the base station to the UE, while references to the "uplink" refer to transmissions from the UE to the base station. The description below also describes these techniques in the context of massive MIMO, where the base station employs a relatively large number (perhaps tens or even hundreds) of antenna elements for transmitting to and receiving from the UEs it serves. In such systems, the UEs may have only one or a relatively small number of antennas. However, the present techniques are not limited to estimating downlink channels in FDD systems, nor are they limited to use in massive MIMO systems.
Accordingly, these techniques should be understood as more generally applicable to estimating a channel between first and second devices, where at least one of the devices (and possibly both) uses multiple (and perhaps many) antenna elements for transmitting to and receiving from the other.

The techniques described herein are based on the fact that the uplink and downlink channel covariance matrices are functions of the same physical propagation environment, even if the resulting channels are different, due to the different frequencies used for each channel. This implies that there exists a mapping function between the uplink and downlink channel covariance matrices. Using this fact, learning this mapping function may be done using a deep neural network architecture. To do so, the covariance matrices are treated as 2D images, where a machine-learning generator (e.g., a U-Net) is trained to learn the important features of these images. More specifically, the generator receives the uplink channel covariance matrix as an input and extracts features that represent the radio frequency (RF) signature of the propagation environment. It then transforms these features to the downlink channel covariance matrix in a different frequency (downlink) band. This strategy ensures that the burden in FDD downlink channel estimation is shifted from the UE to the base station (BS), which is likely already responsible for uplink covariance matrix estimation. Furthermore, it is the BS that requires the downlink covariance matrix for downlink beamforming.

Unlike previous studies, there is no sparsity assumption on the underlying channel parameters. Instead, an a priori known unitary transformation is used to transform the uplink and downlink channel covariance matrices into a virtual domain. The covariance matrices in this domain are represented using far fewer spatial features, while preserving the same information as in the original domain. This unitary transformation not only provides faster convergence in training the generator but also improves the generalization capability of the generator, without needing sophisticated architectures.

The techniques described herein include the following features:

• Learning the mapping between the uplink and downlink covariance matrices: A deep neural network architecture is used to learn the mapping function between uplink and downlink. The mapping function is characterized by a U-Net, for example, which is trained adversarially with the help of another neural network. Once trained, the generator serves as a mapping function that maps the uplink channel covariance matrix to its downlink counterpart at a different frequency band. Such a capability eliminates the real-time channel training and feedback overhead for the downlink in FDD systems.

• Virtual domain representation of the channel covariance matrices: The uplink-downlink mapping is performed in a virtual domain. First, the uplink channel covariance matrix is transformed to a virtual domain using a priori-known unitary matrices. The unitary matrices are frequency-independent and are functions of only the antenna array geometry at the BS. Then, the neural network, e.g., the U-Net, maps the virtual-domain representation of the uplink channel covariance matrix to the virtual domain of its downlink counterpart. The virtual domain representation provides faster convergence in training the U-Net as well as an enhancement in its mapping capability.

The advantages of this approach are twofold.
First, the proposed solution directly maps the uplink channel covariance matrix to its downlink counterpart. Such a capability eliminates the need for real-time downlink channel training and feedback overhead in FDD massive MIMO systems. Second, to enhance the uplink and downlink mapping capability, the covariance matrices are represented in a virtual domain using a matrix transformation. The transformation matrices are unitary, frequency-independent, and are functions of only the BS antenna geometry. These two advantages translate to an increase in downlink throughput and improved signal quality.

System model

To explain the techniques in further detail, a single-cell, single-user communication system is considered. The BS is equipped with an antenna array with M >> 1 antenna elements. For the sake of simplicity of presentation, it is assumed that the UE has only a single antenna. Nevertheless, the proposed algorithm can be directly extended to the case where the UE is equipped with multiple antennas, either by assuming a common set of propagation parameters, i.e., path gains, delays, and angle-of-arrival (AoA) and angle-of-departure (AoD), for all the channels to different UE antennas, or by assuming a distinct set of propagation parameters for each UE antenna, or a combination thereof. The communication between the BS and the UE is performed in FDD mode. In the uplink, the UE communicates with the BS at carrier frequency $f^{\mathrm{ul}}$, while in the downlink the BS communicates with the UE at carrier frequency $f^{\mathrm{dl}}$. Both uplink and downlink frequency bands are of bandwidth $W$, in this model.

Received signal model

It is assumed that orthogonal frequency-division multiplexing (OFDM) technology is used in both uplink and downlink communication, with K subcarriers. Note that this is not necessary to apply the techniques described herein – the assumption is used here to define the system model in terms of a commonly used modulation scheme. During the uplink training at the k-th subcarrier, the UE transmits a training symbol $s_k$ with transmit power $P_T$. The received signal at the BS over all subcarriers is given by

$\mathbf{y}_k = \mathbf{h}_k^{\mathrm{ul}} s_k + \mathbf{n}_k, \quad k = 1, \dots, K,$   (1)

where $\mathbf{y}_k$ is the uplink received signal at the k-th subcarrier, $\mathbf{h}_k^{\mathrm{ul}}$ is the M × 1 uplink channel vector at the k-th subcarrier, and $\mathbf{n}_k$ is an M × 1 receiver noise vector at the k-th subcarrier. Note that the entries of $\mathbf{n}_k$ are identically and independently distributed (i.i.d.) Gaussian random variables with zero mean and variance $\sigma_n^2$.

Channel model

In this section, for simplicity of notation, the superscript "ul" is dropped. A geometry-based stochastic multi-path channel model that considers 3D propagation and polarization effects is used. Let $\mathbf{g}_l$ denote the l-th cluster time-domain channel vector; the frequency-domain channel vector $\mathbf{h}_k$ at subcarrier k can then be expressed as the superposition of the contributions of the L clusters at the k-th subcarrier frequency, where L is the number of channel clusters in the propagation environment. Note that, according to Ref. 15, for the cluster delay line (CDL) channel model, each cluster is composed of many rays. Depending on the type of channel, the large-scale parameters as well as the value of L are specified in Tables 7.7.1-1 to 7.7.1-5 of Ref. 15. In the modeling described herein, CDL-A is used, with a setting of L = 23. Finally, the spatial channel covariance matrix is given by

$\mathbf{R} = \frac{1}{K} \sum_{k=1}^{K} \mathbf{R}_k,$   (2)

where $\mathbf{R}_k = \mathbb{E}\{\mathbf{h}_k \mathbf{h}_k^H\}$ is the frequency-domain covariance matrix at subcarrier k.

Virtual domain representation

Let $\mathbf{H} = [\mathbf{h}_1, \dots, \mathbf{h}_K]$ denote the frequency-domain M × K channel matrix. The virtual representation may be written as

$\mathbf{H}_v = \mathbf{A}^H \mathbf{H} \mathbf{F},$   (3)

where $\mathbf{H}_v$ is the channel matrix in the virtual domain and $\mathbf{F}$ is a discrete Fourier transform (DFT) matrix whose k-th column corresponds to the k-th subcarrier. The matrix $\mathbf{A}$ is a unitary matrix. It is selected as a beam-space basis matrix based on the array geometry. For example, for a linear array, $\mathbf{A}$ is the M-dimensional DFT matrix. For a dual-polarized planar array, where $N_1$ and $N_2$ denote the number of rows and columns of the array, the matrix containing the basis of the 2D-SDFT beam-space transformation is given by

$\mathbf{A} = \mathbf{I}_2 \otimes (\mathbf{F}_{N_1} \otimes \mathbf{F}_{N_2}),$

where $\mathbf{I}_2$ denotes the identity matrix, $\otimes$ denotes the Kronecker product operator, and $\mathbf{F}_{N_1}$ and $\mathbf{F}_{N_2}$ are DFT matrices, i.e., the (m, n) element of $\mathbf{F}_N$ is given by $e^{-j 2\pi m n / N}/\sqrt{N}$. Denoting by $\mathbf{R}_v$ the covariance matrix of the channel in the virtual domain, we can write

$\mathbf{R} = \mathbf{A} \mathbf{R}_v \mathbf{A}^H,$   (4)

or, equivalently,

$\mathbf{R}_v = \mathbf{A}^H \mathbf{R} \mathbf{A}.$   (5)

Note that the last equality implies that $\mathbf{R}$ and $\mathbf{R}_v$ are unitarily equivalent, i.e., the eigenvectors of $\mathbf{R}$ and $\mathbf{R}_v$ can be obtained from each other using the unitary matrix $\mathbf{A}$. However, using the above decomposition, $\mathbf{R}_v$ has a much simpler structure compared to $\mathbf{R}$.
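As a concrete illustration of the beam-space transformation just described, the following Python sketch builds a Kronecker-structured unitary basis and applies equation (5). It is a minimal sketch under the assumptions reconstructed above (unitary per-dimension DFT factors and an identity factor for two polarizations); the function names are illustrative only.

```python
import numpy as np

def dft_matrix(n: int) -> np.ndarray:
    """Unitary n-point DFT matrix; entry (m, k) = exp(-j*2*pi*m*k/n) / sqrt(n)."""
    m, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * m * k / n) / np.sqrt(n)

def beamspace_basis(n_rows: int, n_cols: int, dual_polarized: bool = True) -> np.ndarray:
    """Beam-space basis A for a planar array: Kronecker product of per-dimension DFT
    matrices, extended with an identity factor for two polarizations (an assumption here).
    A is unitary, frequency-independent, and depends only on the array geometry."""
    A = np.kron(dft_matrix(n_rows), dft_matrix(n_cols))
    if dual_polarized:
        A = np.kron(np.eye(2), A)
    return A

def to_virtual_domain(R: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Equation (5): R_v = A^H R A, unitarily equivalent to R."""
    return A.conj().T @ R @ A
```

Because A is unitary, the eigenvalues of R_v equal those of R, and the antenna-domain covariance can be recovered as A @ R_v @ A.conj().T, mirroring equation (4).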
Uplink-downlink mapping

Here, the aim is to find a function that maps the uplink channel covariance matrix to its downlink counterpart. The mapping is performed in the virtual domain, since the covariance matrices in this domain have a much simpler structure. Specifically, the goal is to find the mapping function $G(\cdot)$ that generates the downlink channel covariance matrix when it is fed with the uplink channel covariance matrix.

Characterization of the mapping function

A CNN-based architecture, such as the U-Net, may be used to represent $G(\cdot)$. The U-Net architecture is illustrated in Figure 2. The architecture has a "U" shape, hence the name U-Net. It is symmetric and consists of two major parts. The left part is the contraction path, which aims to down-sample the input image, in this case the normalized uplink virtual domain covariance matrix; the need for normalization is explained further below. The contraction path consists of a repeated application of two 2 × 2 convolutions (unpadded), each followed by a batch normalization operation and a non-linear activation function, and a 2 × 2 max pooling operation with stride 2 for down-sampling. The non-linear activation function may be a rectified linear unit (ReLU) or a Leaky ReLU, for example. At each down-sampling step, the number of features is doubled and the dimension of the input image (matrix) is reduced by half. The right part is the expansion path, which is constituted by transposed 2D convolutional layers. The expansion path aims to up-sample the low-dimensional features at the bottom of the "U" in order to generate the corresponding channel covariance matrix in the downlink, denoted $\hat{\mathbf{R}}_v^{\mathrm{dl}}$. Every step in the expansive path consists of an up-sampling of the feature map, followed by an up-convolution that halves the number of features, a concatenation with the correspondingly cropped feature map from the contracting path, and two 2 × 2 convolutions, each followed by a non-linear activation function. Cropping may not be needed, depending on the size of the matrices. At the final layer, a 1 × 1 convolution is used to map each component of the feature vector to the desired number of output features.
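The following PyTorch sketch illustrates a generator of the general shape described above and in the network architecture section later in this document (three contraction blocks reaching 512 features, two expansion blocks with skip connections, and a final 1 × 1 convolution to two output channels). Kernel sizes, padding, and the up-sampling mode are simplified relative to the description (3 × 3 padded convolutions are used so that skip connections concatenate without cropping), so this is a sketch under stated assumptions, not the exact network of the disclosure.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_mid, c_out):
    """Two convolutions, each followed by batch normalization and Leaky ReLU (slope 0.2)."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_mid, kernel_size=3, padding=1), nn.BatchNorm2d(c_mid), nn.LeakyReLU(0.2),
        nn.Conv2d(c_mid, c_out, kernel_size=3, padding=1), nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2),
    )

class CovarianceUNet(nn.Module):
    """Simplified U-Net generator: a 2-channel (real/imag) UL covariance 'image' in,
    a 2-channel DL covariance 'image' out, with skip connections between the two paths."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(2, 16, 64)        # contraction: 2 -> 16 -> 64 features
        self.enc2 = conv_block(64, 128, 256)     # 64 -> 128 -> 256 features
        self.enc3 = conv_block(256, 512, 512)    # bottleneck: 256 -> 512 -> 512 features
        self.pool = nn.MaxPool2d(2)              # 2x2 max pooling halves the image size
        self.up = nn.Upsample(scale_factor=2)    # 2x2 up-sampling in the expansion path
        self.dec1 = conv_block(512 + 256, 128, 64)
        self.dec2 = conv_block(64 + 64, 64, 16)
        self.out = nn.Conv2d(16, 2, kernel_size=1)   # map 16 features to 2 output channels

    def forward(self, x):
        e1 = self.enc1(x)                        # full resolution, 64 features
        e2 = self.enc2(self.pool(e1))            # half resolution, 256 features
        e3 = self.enc3(self.pool(e2))            # quarter resolution, 512 features
        d1 = self.dec1(torch.cat([self.up(e3), e2], dim=1))
        d2 = self.dec2(torch.cat([self.up(d1), e1], dim=1))
        return self.out(d2)
```

With an M × M covariance "image" whose side is divisible by four, the output has the same spatial size as the input, which keeps the skip connections aligned (for example, CovarianceUNet()(torch.randn(1, 2, 64, 64)) returns a tensor of shape (1, 2, 64, 64)).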
Training of U-Net

The U-Net is trained using a generative adversarial network (GAN), inspired by the pix2pix architecture proposed in Ref. 17. As shown in Figure 3, the training process involves two neural networks, namely a generator network $G$ and a discriminator network $D$, where $\theta_G$ and $\theta_D$ represent, respectively, the sets of weights of the generator and discriminator networks. Let $\mathbf{X} = \mathbf{R}_v^{\mathrm{ul}}$, $\mathbf{Y} = \mathbf{R}_v^{\mathrm{dl}}$, and $\hat{\mathbf{Y}} = G(\mathbf{X})$ represent the uplink, the corresponding downlink, and the generated downlink channel covariance matrices, respectively. It is assumed that the pairs $(\mathbf{X}, \mathbf{Y})$ are available.

The generator takes $\mathbf{X}$ and generates a plausible $\hat{\mathbf{Y}}$. Meanwhile, the discriminator is provided with both $\mathbf{Y}$ and $\hat{\mathbf{Y}}$ and must determine whether the sample $\hat{\mathbf{Y}}$ is a plausible transformation of the source $\mathbf{X}$. In particular, the discriminator tries to correctly classify samples as real (from the dataset) or fake (generated by the U-Net). The discriminator network is updated directly, whereas the generator is updated via the discriminator. Accordingly, the two networks are trained simultaneously in an adversarial process, where the U-Net seeks to better fool the discriminator and the discriminator seeks to better distinguish the true $\mathbf{Y}$ from the generated $\hat{\mathbf{Y}}$. The two networks are trained simultaneously and iteratively via the following two-player min-max game:

$\min_{\theta_G} \max_{\theta_D} \; \mathcal{L}(\theta_G, \theta_D) = \mathcal{L}_{\mathrm{adv}}(\theta_G, \theta_D) + \lambda\, \mathcal{L}_{1}(\theta_G),$   (6)

where

$\mathcal{L}_{\mathrm{adv}}(\theta_G, \theta_D) = \mathbb{E}\big[\log D(\mathbf{X}, \mathbf{Y})\big] + \mathbb{E}\big[\log\big(1 - D(\mathbf{X}, G(\mathbf{X}))\big)\big]$   (7)

is the adversarial term,

$\mathcal{L}_{1}(\theta_G) = \mathbb{E}\big[\big\| \mathbf{Y} - G(\mathbf{X}) \big\|_1\big]$   (8)

is the L1 loss, and $\lambda$ is the regularization term. Note that the objective of training $G$ is met by generating realistic downlink channel covariance matrices such that $D(\mathbf{X}, G(\mathbf{X}))$ assigns a high probability to their being true samples. This can be done by maximizing $\log D(\mathbf{X}, G(\mathbf{X}))$ (or, equivalently, minimizing $\log(1 - D(\mathbf{X}, G(\mathbf{X})))$, as given in the second term in equation (7)). The generator is also updated via the L1 loss in equation (8), measured between the generated sample $\hat{\mathbf{Y}}$ and the expected output sample $\mathbf{Y}$. Using an L1 loss encourages the generator to create plausible translations of $\mathbf{X}$. In the meantime, the discriminator aims to correctly distinguish the generated samples $\hat{\mathbf{Y}}$ from the true samples $\mathbf{Y}$. In other words, $D(\mathbf{X}, \hat{\mathbf{Y}})$ is meant to be close to zero, while $D(\mathbf{X}, \mathbf{Y})$ has to be close to one. Therefore, $\theta_D$ is chosen such that the objective is maximized. Such a competitive interplay between $G$ and $D$ converges to an equilibrium where the generator produces realistic samples such that the discriminator is unable to differentiate between $\mathbf{Y}$ and $\hat{\mathbf{Y}}$.

Simulations

The performance of the techniques described above has been simulated for a single-cell FDD communication system. The single-user uplink and downlink frequencies are separated by a frequency gap $\Delta f$. The UE and BS communicate over K OFDM subcarriers in both uplink and downlink. The simulation parameters are provided in Table 1.

To generate the training dataset, the uplink channel is first generated according to the model described above, based on the 3GPP model. (See Ref. 15.) The MATLAB 5G Toolbox was used to generate the uplink channel. The MATLAB channel generation parameters are given in Table 2. Table 3 lists the BS structure (BS-struct) and UE structure (UE-struct) parameters. The BS-struct and UE-struct parameter conventions are as follows: Size: rows, columns, polarizations, and array panel size; ElementSpacing: antenna spacing and panel spacing; Orientation: mechanical orientation of the antenna array; Element: '38.901' or 'isotropic', the antenna radiation pattern per TR 38.901; PolarizationModel: 'Model-1' or 'Model-2', the radiation field pattern.

Table 1: Simulation parameters. Table 2: 3GPP TR 38.901 channel model parameters. Table 3: BS-struct and UE-struct parameters.

For each uplink channel, to generate the corresponding downlink channel pair, the carrier frequency is changed to the downlink frequency and the arrival and departure parameters are swapped. Next, using equation (3), the frequency response of the channel over all K subcarriers is calculated. Finally, for each channel realization, the virtual domain covariance matrices are calculated using the unitary transformation given in equation (5).

To improve the training process, the data may be normalized as follows. For each uplink-downlink pair, the entries are normalized to a fixed range. Specifically, the minimum absolute value of the entries is subtracted from each entry of the training pair, and the result is then divided by the difference between the maximum and minimum absolute values of the entries of that pair. Data normalization is among the best practices for training a neural network. Normalizing the data generally speeds up the learning algorithm and leads to faster convergence.
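To make the per-pair scaling concrete, the following Python sketch shows one plausible, simplified variant of the normalization described above. The disclosure describes a min-max scheme based on the minimum and maximum absolute values of the entries of each pair; the helper below (normalize_pair) instead scales both matrices by the pair's maximum absolute value so that the complex entries remain usable as real/imaginary image channels, and it is an illustrative assumption rather than the exact scheme.

```python
import numpy as np

def normalize_pair(R_v_ul: np.ndarray, R_v_dl: np.ndarray):
    """Hypothetical per-pair normalization of virtual-domain covariance matrices.

    Scales both matrices of an uplink/downlink pair by the pair's maximum absolute
    entry (a simplified stand-in for the min-max scheme described in the text),
    and returns the scale so the generator output can be de-normalized later.
    """
    scale = max(np.abs(R_v_ul).max(), np.abs(R_v_dl).max())
    scale = scale if scale > 0 else 1.0          # guard against an all-zero pair
    return R_v_ul / scale, R_v_dl / scale, scale

def denormalize(R_v_hat: np.ndarray, scale: float) -> np.ndarray:
    """Undo the per-pair scaling on the generated downlink covariance matrix."""
    return R_v_hat * scale
```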
To form the training, validation, and testing datasets, after normalizing the data, the data is first shuffled and then split such that 70% is dedicated to training and 15% each to validation and testing.

Network architecture

The U-Net architecture is as follows. The contraction path consists of three feature extraction blocks, each of which has two convolutional layers to increase the number of features of the input matrix (image), with a 2 × 2 max-pooling operation between blocks to reduce the dimension of each feature matrix by half. As shown in Figure 2, in the first block the number of features is increased to 16 in the first layer and then to 64 in the second layer, each followed by a batch normalization and a non-linear activation function. In the second block, the number of features is increased to 128 and 256, respectively. In the third block, the number of features is first increased to 512, followed by another convolutional operation to produce 512 output feature matrices. The expansion path consists of two convolution blocks with a 2 × 2 up-sampling operation between blocks. In the first block, the input is first concatenated with the output of the corresponding block in the contraction path, followed by two sets of convolution operations to reduce the number of features to 128 and 64, respectively. A similar operation is performed in the last block to reduce the number of features to 64 and 16, respectively. Finally, a convolution filter is used to map the 16 features of the last layer into two features.

The discriminator network is trained in a supervised manner and aims to correctly classify the true and generated (fake) downlink channel correlation matrices. To do so, the discriminator takes as its input either the pair of uplink and true downlink matrices or the pair of uplink and generated downlink matrices. The input then passes through a series of convolutional layers, is flattened, and is passed through a sigmoid function to generate the true/fake scores. In all convolutional operations, padding = 0 and stride = 2 are used. Also, to improve the stability and convergence during the training, the Leaky ReLU activation function is used in all the networks, with a slope of 0.2.

During the training, the aim is to find the generator and discriminator weights $\theta_G$ and $\theta_D$. This can be done by jointly and iteratively minimizing and maximizing the objective function in (6) with respect to $\theta_G$ and $\theta_D$, respectively. Before the training is begun, $\theta_G$ and $\theta_D$ are randomly initialized. In each of the subsequent iterations, a mini-batch of size P is sampled from the training set of pairs. For fixed $\theta_D$, $\theta_G$ is first updated by descending along the negative gradient of the loss function in (6). Then, while fixing $\theta_G$, $\theta_D$ is updated by ascending along the gradient of the loss function in (6). Note that, in these steps, the sample mean is used instead of the mathematical expectation. Furthermore, thanks to the automatic differentiation capability implemented in PyTorch, the gradient descent/ascent in each epoch is carried out automatically. In each iteration, the weights may be updated using any standard gradient-based update. Here, to speed up the convergence, a momentum-based gradient update such as ADAM is used. (See Ref. 18.) The training parameters are specified in Table 4.

Table 4: Training parameters.
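For illustration, the following PyTorch sketch shows one adversarial update of the kind just described (a pix2pix-style conditional GAN step with an L1 term). The network definitions, the value of the regularization weight lam, and the tensor shapes are placeholders and assumptions, not values taken from the disclosure; the discriminator is assumed to output probabilities via a sigmoid, as described above.

```python
import torch
import torch.nn.functional as F

def gan_training_step(generator, discriminator, opt_g, opt_d, x, y, lam=100.0):
    """One hypothetical adversarial update. x is a mini-batch of UL virtual-domain
    covariance 'images', y the matching DL ones (both shaped [batch, 2, M, M]);
    lam weights the L1 term in the generator loss."""
    # --- Discriminator update: real pairs -> 1, generated pairs -> 0 ---
    with torch.no_grad():
        y_fake = generator(x)
    d_real = discriminator(x, y)
    d_fake = discriminator(x, y_fake)
    loss_d = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- Generator update: fool the discriminator while staying close to y in L1 ---
    y_fake = generator(x)
    d_fake = discriminator(x, y_fake)
    loss_g = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake)) + \
             lam * F.l1_loss(y_fake, y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Repeating this step over mini-batches with a momentum-based optimizer such as ADAM for both networks corresponds to the alternating descent/ascent on the objective in (6) described above.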
Simulation scenarios

The following simulation scenarios have been considered:

• Fully connected: In this scenario, supervised training was used to find the mapping function between the uplink covariance matrix and the principal eigenvector of the downlink covariance matrix, using a fully connected neural network. In this architecture, there is no convolutional layer. To do so, the input covariance matrix was first flattened into a long vector. It was then passed through a series of fully connected layers. In the last layer, the network generated the estimate of the downlink principal eigenvector. The loss function in this scenario is defined in terms of the error between the estimated and true downlink principal eigenvectors.

• U-Net: In this scenario, unlike the fully-connected scenario, the principal eigenvector calculation was separated from the learning process. Specifically, the mapping function between the uplink and downlink virtual-domain covariance matrices was found, using the U-Net architecture described above. The loss function in this scenario is the mean squared error (MSE) between the generated and true downlink covariance matrices.

• Proposed technique: The same mapping function as in the U-Net scenario was used. However, the U-Net is trained adversarially, using a discriminator network. The proposed technique was evaluated in two cases. In the first case, the pairs $(\mathbf{R}_v^{\mathrm{ul}}, \mathbf{R}_v^{\mathrm{dl}})$ were used to find the mapping function in the virtual domain. In the second case, the pairs $(\mathbf{R}^{\mathrm{ul}}, \mathbf{R}^{\mathrm{dl}})$ were used to find the mapping function in the original domain, without the unitary transformation.

• Upper and lower bound scenarios: As benchmark scenarios, the perfect CSI case was used, where it was assumed that the principal eigenvector of the downlink covariance matrix is perfectly known. This served as an upper bound in the simulations. As a lower bound, the downlink principal eigenvector was not estimated; instead, the principal eigenvector obtained from the uplink channel covariance matrix was used.

Simulation Results

Figure 4 is a plot of the distribution of the normalized mean squared error (NMSE) of the estimated downlink covariance matrix for the first simulation. The proposed technique yields a better NMSE for small values of the uplink-downlink frequency separation $\Delta f$. This is mainly related to the fact that the uplink and downlink covariance matrices are highly correlated at small $\Delta f$, leading to a better generalizability of the mapping function.

Figure 5 shows the quality of the estimation of the downlink principal eigenvector. This figure plots the cosine of the angle between the estimated and true downlink principal eigenvectors; if they are aligned, the cosine of the angle between them becomes 1. The closer the cosine is to 1, the better the estimate of the downlink principal eigenvector. As shown, the U-Net performs the same as the fully-connected network for small $\Delta f$, while providing a better performance at large $\Delta f$. At small $\Delta f$, the uplink and downlink covariance matrices are highly correlated, and both the U-Net and the fully-connected network yield the same performance. As $\Delta f$ is increased, the matrices become less correlated, and therefore it becomes a challenging task for the fully-connected structure to perform the mapping. The U-Net, however, leveraging the convolutional structure, performs far better in extracting the useful features of the uplink covariance matrix in order to reconstruct the corresponding downlink covariance matrix. On the other hand, the proposed technique outperforms the U-Net scenario over the whole range of $\Delta f$. The reason is that the MSE loss used with the plain U-Net structure does not capture the spatial features in the downlink covariance matrix. Unlike the U-Net scenario, a GAN structure is used to adversarially train the generator, along with the L1 loss given in equation (8), to improve its mapping capabilities for both small and large $\Delta f$.

In a different comparison, the proposed generator was trained using the pairs $(\mathbf{R}^{\mathrm{ul}}, \mathbf{R}^{\mathrm{dl}})$, i.e., in the original domain without the unitary transformation given in (5). As shown, training in the virtual domain yields a far better performance compared to training in the original domain. The reason is that, given the unitary transformation in (5), the covariance matrices in the virtual domain are represented using fewer spatial features, while preserving the same information as in the original domain. Using such a simpler structure provides an enhancement in training the generator, both in terms of convergence and generalizability.
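For concreteness, the two figures of merit used in this discussion can be computed as in the following Python sketch. The function names are illustrative; the alignment metric uses the magnitude of the inner product of the unit-norm dominant eigenvectors, which corresponds to the cosine of the angle between them up to an arbitrary phase.

```python
import numpy as np

def nmse(R_true: np.ndarray, R_est: np.ndarray) -> float:
    """Normalized mean squared error between the true and estimated DL covariance matrices."""
    return np.linalg.norm(R_est - R_true, "fro") ** 2 / np.linalg.norm(R_true, "fro") ** 2

def eigenvector_alignment(R_true: np.ndarray, R_est: np.ndarray) -> float:
    """|cosine| of the angle between the principal eigenvectors of the two matrices
    (1.0 means the estimated beamforming direction is perfectly aligned)."""
    u_true = np.linalg.eigh(R_true)[1][:, -1]   # eigh returns eigenvalues in ascending order
    u_est = np.linalg.eigh(R_est)[1][:, -1]
    return float(np.abs(np.vdot(u_true, u_est)))
```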
In Figure 6, the corresponding downlink rate is plotted versus $\Delta f$. Here, the downlink precoding vector is chosen as the estimated downlink principal eigenvector in each scenario. For the upper and lower bounds, the principal eigenvectors corresponding to the perfect-CSI and no-estimation scenarios, respectively, are used. The proposed technique, due to a better estimate of the downlink principal eigenvector, outperforms the U-Net and fully-connected structures.

As noted above, the advantages of the proposed solution are twofold: 1) The proposed solution directly maps the uplink channel covariance matrix to its downlink counterpart. Such a capability eliminates the need for real-time downlink channel training and feedback overhead in FDD massive MIMO systems. 2) To enhance the uplink-downlink mapping capability, the covariance matrices are represented in the virtual domain using a matrix transformation. The transformation matrices are unitary, frequency-independent, and are only functions of the BS antenna geometry. These two advantages translate to an increase in downlink throughput and improved signal quality.

In view of the detailed examples and explanation provided above, it will be appreciated that the process flow diagram shown in Figure 7 illustrates an example method for estimating channel characteristics for a channel between a first device having a plurality of antenna elements arranged according to an antenna geometry and a second device having at least one antenna element. The first device may be a base station, while the second device may be a UE, for example. Note that the term "antenna element" may refer to a single discrete antenna, or to a combination of antenna structures that are operated as a unit. Note also that while the second device need only have one antenna element, it may have several (or many), in some embodiments of the illustrated technique. Also note that the illustrated method is intended to be a generalization of and to encompass several of the techniques described above. Thus, where the terminology used below to describe the method differs somewhat from similar or corresponding terminology used in the discussion above, the former should be understood to at least encompass the latter, unless the context clearly indicates otherwise.

As shown at block 720 of Figure 7, the illustrated method comprises the step of estimating a first channel covariance matrix based on training symbols transmitted in a first direction, from the second device towards the first device, in a first frequency band. This first direction may be the uplink, for example, between a UE and a BS. As shown at block 730, the method further comprises transforming this first channel covariance matrix to a first virtual domain covariance matrix, using a unitary matrix that is a function of the antenna geometry. As noted above, this unitary matrix is independent of carrier frequency. As shown at block 740, the method further comprises mapping the first virtual domain covariance matrix to a second virtual domain covariance matrix, using a mapping function that estimates a mapping of virtual domain channel covariance in the first direction to virtual domain channel covariance in a second direction, from the first device towards the second device.
Next, as shown at block 750, the method comprises transforming the second virtual domain covariance matrix to a second channel covariance matrix representative of the radio channel in the second direction, in a second frequency band. This second transformation may use the same, frequency-independent, unitary matrix discussed above. This second direction may be the downlink, for example. In some embodiments of the illustrated method, the mapping function is based on a deep neural network, such as a CNN. This mapping function may be based on the generator of a generative adversarial network, GAN, for example. The mapping function may be based on the U-net architecture, in some specific examples. The second channel covariance matrix represents the radio channel in the direction from the first device to the second device. Thus, the method may, in some embodiments or instances, comprise the step of transmitting a signal to the second device from the first device, in the second frequency band, using antenna weights determined from the second channel covariance matrix. This is shown at block 760. As described in detail above, the mapping function may be trained using machine-learning techniques. Thus, some embodiments or instances of the illustrated method may comprise, prior to the transforming steps and mapping step, training the mapping function using a deep generative model, multiple estimates of the channel covariance in the first direction, and multiple estimates of the channel covariance in the second direction. This is shown at block 710. This training may comprise using a GAN, for example, where the mapping function corresponds to the generator network of the GAN. The training may be performed in a different node from that carrying out the other steps of Figure 7, in some embodiments, and need not be carried out for every instance of the method.

Figure 8 shows an example of a communication system 800 in which the techniques above may be employed, in accordance with some embodiments. In the example, the communication system 800 includes a telecommunication network 802 that includes an access network 804, such as a radio access network (RAN), and a core network 806, which includes one or more core network nodes 808. The access network 804 includes one or more access network nodes, such as network nodes 810a and 810b (one or more of which may be generally referred to as network nodes 810), or any other similar 3rd Generation Partnership Project (3GPP) access node or non-3GPP access point. The network nodes 810 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 812a, 812b, 812c, and 812d (one or more of which may be generally referred to as UEs 812) to the core network 806 over one or more wireless connections. Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors. Moreover, in different embodiments, the communication system 800 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals, whether via wired or wireless connections.
The communication system 800 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system. The UEs 812 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 810 and other communication devices. Similarly, the network nodes 810 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 812 and/or with other network nodes or equipment in the telecommunication network 802 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 802. In the depicted example, the core network 806 connects the network nodes 810 to one or more hosts, such as host 816. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts. The core network 806 includes one more core network nodes (e.g., core network node 808) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 808. Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF). The host 816 may be under the ownership or control of a service provider other than an operator or provider of the access network 804 and/or the telecommunication network 802, and may be operated by the service provider or on behalf of the service provider. The host 816 may host a variety of applications to provide one or more service. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server. As a whole, the communication system 800 of Figure 8 enables connectivity between the UEs, network nodes, and hosts. 
In that sense, the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC) ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox. In some examples, the telecommunication network 802 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunications network 802 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 802. For example, the telecommunications network 802 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs. In some examples, the UEs 812 are configured to transmit and/or receive information without direct human interaction. For instance, a UE may be designed to transmit information to the access network 804 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 804. Additionally, a UE may be configured for operating in single- or multi-RAT or multi- standard mode. For example, a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio – Dual Connectivity (EN-DC). In the example, the hub 814 communicates with the access network 804 to facilitate indirect communication between one or more UEs (e.g., UE 812c and/or 812d) and network nodes (e.g., network node 810b). In some examples, the hub 814 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs. For example, the hub 814 may be a broadband router enabling access to the core network 806 for the UEs. As another example, the hub 814 may be a controller that sends commands or instructions to one or more actuators in the UEs. Commands or instructions may be received from the UEs, network nodes 810, or by executable code, script, process, or other instructions in the hub 814. As another example, the hub 814 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data. As another example, the hub 814 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, the hub 814 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 814 then provides to the UE either directly, after performing local processing, and/or after adding additional local content. 
In still another example, the hub 814 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low energy IoT devices. The hub 814 may have a constant/persistent or intermittent connection to the network node 810b. The hub 814 may also allow for a different communication scheme and/or schedule between the hub 814 and UEs (e.g., UE 812c and/or 812d), and between the hub 814 and the core network 806. In other examples, the hub 814 is connected to the core network 806 and/or one or more UEs via a wired connection. Moreover, the hub 814 may be configured to connect to an M2M service provider over the access network 804 and/or to another UE over a direct connection. In some scenarios, UEs may establish a wireless connection with the network nodes 810 while still connected via the hub 814 via a wired or wireless connection. In some embodiments, the hub 814 may be a dedicated hub – that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 810b. In other embodiments, the hub 814 may be a non-dedicated hub – that is, a device which is capable of operating to route communications between the UEs and network node 810b, but which is additionally capable of operating as a communication start and/or end point for certain data channels. Figure 9 shows a UE 900 in accordance with some embodiments. As used herein, a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs. Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless camera, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc. Other examples include any UE identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE. A UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X). In other examples, a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter). The UE 900 includes processing circuitry 902 that is operatively coupled via a bus 904 to an input/output interface 906, a power source 908, a memory 910, a communication interface 912, and/or any other component, or any combination thereof. Certain UEs may utilize all or a subset of the components shown in Figure 9. The level of integration between the components may vary from one UE to another UE. 
Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc. The processing circuitry 902 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 910. The processing circuitry 902 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above. For example, the processing circuitry 902 may include multiple central processing units (CPUs). In the example, the input/output interface 906 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices. Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. An input device may allow a user to capture information into the UE 900. Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof. An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device. In some embodiments, the power source 908 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used. The power source 908 may further include power circuitry for delivering power from the power source 908 itself, and/or an external power source, to the various parts of the UE 900 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 908. Power circuitry may perform any formatting, converting, or other modification to the power from the power source 908 to make the power suitable for the respective components of the UE 900 to which power is supplied. The memory 910 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth. In one example, the memory 910 includes one or more application programs 914, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 916. 
The memory 910 may store, for use by the UE 900, any of a variety of operating systems or combinations of operating systems. The memory 910 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof. The UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as ‘SIM card.’ The memory 910 may allow the UE 900 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system, may be tangibly embodied as or in the memory 910, which may be or comprise a device-readable storage medium. The processing circuitry 902 may be configured to communicate with an access network or other network using the communication interface 912. The communication interface 912 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 922. The communication interface 912 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network). Each transceiver may include a transmitter 918 and/or a receiver 920 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth). Moreover, the transmitter 918 and receiver 920 may be coupled to one or more antennas (e.g., antenna 922) and may share circuit components, software or firmware, or alternatively be implemented separately. In the illustrated embodiment, communication functions of the communication interface 912 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. Communications may be implemented in accordance with one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth. Regardless of the type of sensor, a UE may provide an output of data captured by its sensors, through its communication interface 912, via a wireless connection to a network node. 
Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE. The output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected, an alert is sent), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient). As another example, a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection. In response to the received wireless input, the states of the actuator, the motor, or the switch may change. For example, the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or a robotic arm that performs a medical procedure according to the received input. A UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare. Non-limiting examples of such an IoT device are a device which is or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, a sensor for monitoring a plant or animal, an industrial robot, an Unmanned Aerial Vehicle (UAV), and any kind of medical device, like a heart rate monitor or a remote controlled surgical robot. A UE in the form of an IoT device comprises circuitry and/or software depending on the intended application of the IoT device, in addition to other components as described in relation to the UE 900 shown in Figure 9. As yet another specific example, in an IoT scenario, a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node. The UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device. As one particular example, the UE may implement the 3GPP NB-IoT standard. In other scenarios, a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation. In practice, any number of UEs may be used together with respect to a single use case. For example, a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone. 
When the user makes changes from the remote controller, the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed. The first and/or the second UE can also include more than one of the functionalities described above. For example, a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuator. Figure 10 shows a network node 1000 in accordance with some embodiments. As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)). Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS). Other examples of network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs). The network node 1000 includes a processing circuitry 1002, a memory 1004, a communication interface 1006, and a power source 1008. The network node 1000 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which the network node 1000 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair may in some instances be considered a single separate network node. In some embodiments, the network node 1000 may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate memory 1004 for different RATs) and some components may be reused (e.g., a same antenna 1010 may be shared by different RATs). 
The network node 1000 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 1000, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 1000. The processing circuitry 1002 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable, either alone or in conjunction with other network node 1000 components, such as the memory 1004, to provide network node 1000 functionality. In some embodiments, the processing circuitry 1002 includes a system on a chip (SOC). In some embodiments, the processing circuitry 1002 includes one or more of radio frequency (RF) transceiver circuitry 1012 and baseband processing circuitry 1014. In some embodiments, the radio frequency (RF) transceiver circuitry 1012 and the baseband processing circuitry 1014 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 1012 and baseband processing circuitry 1014 may be on the same chip or set of chips, boards, or units. The memory 1004 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 1002. The memory 1004 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 1002 and utilized by the network node 1000. The memory 1004 may be used to store any calculations made by the processing circuitry 1002 and/or any data received via the communication interface 1006. In some embodiments, the processing circuitry 1002 and memory 1004 are integrated. The communication interface 1006 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 1006 comprises port(s)/terminal(s) 1016 to send and receive data, for example to and from a network over a wired connection. The communication interface 1006 also includes radio front-end circuitry 1018 that may be coupled to, or in certain embodiments a part of, the antenna 1010. Radio front-end circuitry 1018 comprises filters 1020 and amplifiers 1022. The radio front-end circuitry 1018 may be connected to an antenna 1010 and processing circuitry 1002. 
The radio front-end circuitry may be configured to condition signals communicated between antenna 1010 and processing circuitry 1002. The radio front-end circuitry 1018 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection. The radio front-end circuitry 1018 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1020 and/or amplifiers 1022. The radio signal may then be transmitted via the antenna 1010. Similarly, when receiving data, the antenna 1010 may collect radio signals which are then converted into digital data by the radio front-end circuitry 1018. The digital data may be passed to the processing circuitry 1002. In other embodiments, the communication interface may comprise different components and/or different combinations of components. In certain alternative embodiments, the network node 1000 does not include separate radio front-end circuitry 1018; instead, the processing circuitry 1002 includes radio front-end circuitry and is connected to the antenna 1010. Similarly, in some embodiments, all or some of the RF transceiver circuitry 1012 is part of the communication interface 1006. In still other embodiments, the communication interface 1006 includes one or more ports or terminals 1016, the radio front-end circuitry 1018, and the RF transceiver circuitry 1012, as part of a radio unit (not shown), and the communication interface 1006 communicates with the baseband processing circuitry 1014, which is part of a digital unit (not shown). The antenna 1010 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. The antenna 1010 may be coupled to the radio front-end circuitry 1018 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In certain embodiments, the antenna 1010 is separate from the network node 1000 and connectable to the network node 1000 through an interface or port. The antenna 1010, communication interface 1006, and/or the processing circuitry 1002 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 1010, the communication interface 1006, and/or the processing circuitry 1002 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment. The power source 1008 provides power to the various components of network node 1000 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). The power source 1008 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 1000 with power for performing the functionality described herein. For example, the network node 1000 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 1008. 
As a further example, the power source 1008 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail. Embodiments of the network node 1000 may include additional components beyond those shown in Figure 10 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, the network node 1000 may include user interface equipment to allow input of information into the network node 1000 and to allow output of information from the network node 1000. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 1000. Figure 11 is a block diagram of a host 1100, which may be an embodiment of the host 816 of Figure 8, in accordance with various aspects described herein. As used herein, the host 1100 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, container, or processing resources in a server farm. The host 1100 may provide one or more services to one or more UEs. The host 1100 includes processing circuitry 1102 that is operatively coupled via a bus 1104 to an input/output interface 1106, a network interface 1108, a power source 1110, and a memory 1112. Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as Figures 9 and 10, such that the descriptions thereof are generally applicable to the corresponding components of host 1100. The memory 1112 may include one or more computer programs including one or more host application programs 1114 and data 1116, which may include user data, e.g., data generated by a UE for the host 1100 or data generated by the host 1100 for a UE. Embodiments of the host 1100 may utilize only a subset or all of the components shown. The host application programs 1114 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems). The host application programs 1114 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network. Accordingly, the host 1100 may select and/or indicate a different host for over-the-top services for a UE. The host application programs 1114 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc. Figure 12 is a block diagram illustrating a virtualization environment 1200 in which functions implemented by some embodiments may be virtualized. 
In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components. Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 1200 hosted by one or more hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host. Further, in embodiments in which the virtual node does not require radio connectivity (e.g., a core network node or host), then the node may be entirely virtualized. Applications 1202 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 1200 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein. Hardware 1204 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth. Software may be executed by the processing circuitry to instantiate one or more virtualization layers 1206 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 1208a and 1208b (one or more of which may be generally referred to as VMs 1208), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein. The virtualization layer 1206 may present a virtual operating platform that appears like networking hardware to the VMs 1208. The VMs 1208 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 1206. Different embodiments of the instance of a virtual appliance 1202 may be implemented on one or more of VMs 1208, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers and customer premises equipment. In the context of NFV, a VM 1208 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of the VMs 1208, and that part of hardware 1204 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element. Still in the context of NFV, a virtual network function is responsible for handling specific network functions that run in one or more VMs 1208 on top of the hardware 1204 and corresponds to the application 1202. Hardware 1204 may be implemented in a standalone network node with generic or specific components. Hardware 1204 may implement some functions via virtualization. Alternatively, hardware 1204 may be part of a larger cluster of hardware 
(e.g., in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 1210, which, among other things, oversees lifecycle management of applications 1202. In some embodiments, hardware 1204 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signaling can be provided with the use of a control system 1212 which may alternatively be used for communication between hardware nodes and radio units. Figure 13 shows a communication diagram of a host 1302 communicating via a network node 1304 with a UE 1306 over a partially wireless connection in accordance with some embodiments. Example implementations, in accordance with various embodiments, of the UE (such as a UE 812a of Figure 8 and/or UE 900 of Figure 9), network node (such as network node 810a of Figure 8 and/or network node 1000 of Figure 10), and host (such as host 816 of Figure 8 and/or host 1100 of Figure 11) discussed in the preceding paragraphs will now be described with reference to Figure 13. Like host 1100, embodiments of host 1302 include hardware, such as a communication interface, processing circuitry, and memory. The host 1302 also includes software, which is stored in or accessible by the host 1302 and executable by the processing circuitry. The software includes a host application that may be operable to provide a service to a remote user, such as the UE 1306 connecting via an over-the-top (OTT) connection 1350 extending between the UE 1306 and host 1302. In providing the service to the remote user, a host application may provide user data which is transmitted using the OTT connection 1350. The network node 1304 includes hardware enabling it to communicate with the host 1302 and UE 1306. The connection 1360 may be direct or pass through a core network (like core network 806 of Figure 8) and/or one or more other intermediate networks, such as one or more public, private, or hosted networks. For example, an intermediate network may be a backbone network or the Internet. The UE 1306 includes hardware and software, which is stored in or accessible by UE 1306 and executable by the UE’s processing circuitry. The software includes a client application, such as a web browser or operator-specific “app” that may be operable to provide a service to a human or non-human user via UE 1306 with the support of the host 1302. In the host 1302, an executing host application may communicate with the executing client application via the OTT connection 1350 terminating at the UE 1306 and host 1302. In providing the service to the user, the UE's client application may receive request data from the host's host application and provide user data in response to the request data. The OTT connection 1350 may transfer both the request data and the user data. The UE's client application may interact with the user to generate the user data that it provides to the host application through the OTT connection 1350. 
The OTT connection 1350 may extend via a connection 1360 between the host 1302 and the network node 1304 and via a wireless connection 1370 between the network node 1304 and the UE 1306 to provide the connection between the host 1302 and the UE 1306. The connection 1360 and wireless connection 1370, over which the OTT connection 1350 may be provided, have been drawn abstractly to illustrate the communication between the host 1302 and the UE 1306 via the network node 1304, without explicit reference to any intermediary devices and the precise routing of messages via these devices. As an example of transmitting data via the OTT connection 1350, in step 1308, the host 1302 provides user data, which may be performed by executing a host application. In some embodiments, the user data is associated with a particular human user interacting with the UE 1306. In other embodiments, the user data is associated with a UE 1306 that shares data with the host 1302 without explicit human interaction. In step 1310, the host 1302 initiates a transmission carrying the user data towards the UE 1306. The host 1302 may initiate the transmission responsive to a request transmitted by the UE 1306. The request may be caused by human interaction with the UE 1306 or by operation of the client application executing on the UE 1306. The transmission may pass via the network node 1304, in accordance with the teachings of the embodiments described throughout this disclosure. Accordingly, in step 1312, the network node 1304 transmits to the UE 1306 the user data that was carried in the transmission that the host 1302 initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step 1314, the UE 1306 receives the user data carried in the transmission, which may be performed by a client application executed on the UE 1306 associated with the host application executed by the host 1302. In some examples, the UE 1306 executes a client application which provides user data to the host 1302. The user data may be provided in reaction or response to the data received from the host 1302. Accordingly, in step 1316, the UE 1306 may provide user data, which may be performed by executing the client application. In providing the user data, the client application may further consider user input received from the user via an input/output interface of the UE 1306. Regardless of the specific manner in which the user data was provided, the UE 1306 initiates, in step 1318, transmission of the user data towards the host 1302 via the network node 1304. In step 1320, in accordance with the teachings of the embodiments described throughout this disclosure, the network node 1304 receives user data from the UE 1306 and initiates transmission of the received user data towards the host 1302. In step 1322, the host 1302 receives the user data carried in the transmission initiated by the UE 1306. One or more of the various embodiments improve the performance of OTT services provided to the UE 1306 using the OTT connection 1350, in which the wireless connection 1370 forms the last segment. More precisely, the teachings of these embodiments may improve downlink channel estimates in the context of an FDD wireless system and thereby provide benefits such as improved data throughput and increased connection reliability. In an example scenario, factory status information may be collected and analyzed by the host 1302. 
As another example, the host 1302 may process audio and video data which may have been retrieved from a UE for use in creating maps. As another example, the host 1302 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights). As another example, the host 1302 may store surveillance video uploaded by a UE. As another example, the host 1302 may store or control access to media content such as video, audio, VR or AR which it can broadcast, multicast or unicast to UEs. As other examples, the host 1302 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing and/or transmitting data. In some examples, a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring the OTT connection 1350 between the host 1302 and UE 1306, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of the host 1302 and/or UE 1306. In some embodiments, sensors (not shown) may be deployed in or in association with other devices through which the OTT connection 1350 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software may compute or estimate the monitored quantities. The reconfiguring of the OTT connection 1350 may include changes to message format, retransmission settings, preferred routing, etc.; the reconfiguring need not directly alter the operation of the network node 1304. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency and the like, by the host 1302. The measurements may be implemented by software that causes messages, in particular empty or ‘dummy’ messages, to be transmitted using the OTT connection 1350 while monitoring propagation times, errors, etc. Although the computing devices described herein (e.g., UEs, network nodes, hosts) may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. 
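As a purely illustrative, non-limiting sketch of the dummy-message measurement mentioned above, the following Python fragment shows one way software might time empty probe messages over a connection and record round-trip times. The host address, port, payload size, and the assumption that the remote endpoint echoes the probe are hypothetical and are not specified by this disclosure.

import socket
import time

def probe_round_trip(host: str, port: int, payload_size: int = 64, count: int = 5):
    """Send 'dummy' probe messages and record round-trip times in seconds."""
    # Assumes a remote endpoint that echoes whatever it receives; this is an
    # illustrative assumption, not behavior defined by the embodiments above.
    rtts = []
    dummy = b"\x00" * payload_size
    for _ in range(count):
        with socket.create_connection((host, port), timeout=2.0) as sock:
            start = time.monotonic()
            sock.sendall(dummy)          # empty/'dummy' payload, no user data
            sock.recv(payload_size)      # wait for the echoed probe
            rtts.append(time.monotonic() - start)
    return rtts

Measurements of this kind could feed the optional reconfiguration functionality described above, for example by comparing the recorded round-trip times against a latency target.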
Moreover, while components are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components. For example, a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface. In another example, non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware. In certain embodiments, some or all of the functionality described herein may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a non-transitory computer-readable storage medium or not, the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally. The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures that, although not explicitly shown or described herein, embody the principles of the disclosure and can thus be within the spirit and scope of the disclosure. Various exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art. The term unit, as used herein, can have conventional meaning in the field of electronics, electrical devices and/or electronic devices and can include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein. Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. 
The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as Read Only Memory (ROM), Random Access Memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure. As described herein, device and/or apparatus can be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of a device or apparatus, instead of being hardware implemented, be implemented as a software module such as a computer program or a computer program product comprising executable software code portions for execution or being run on a processor. Furthermore, functionality of a device or apparatus can be implemented by any combination of hardware and software. A device or apparatus can also be regarded as an assembly of multiple devices and/or apparatuses, whether functionally in cooperation with or independently of each other. Moreover, devices and apparatuses can be implemented in a distributed fashion throughout a system, so long as the functionality of the device or apparatus is preserved. Such and similar principles are considered as known to a skilled person. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. In addition, certain terms used in the present disclosure, including the specification, drawings and exemplary embodiments thereof, can be used synonymously in certain instances, including, but not limited to, e.g., data and information. It should be understood that while these words and/or other words that can be synonymous to one another can be used synonymously herein, there can be instances when such words can be intended to not be used synonymously. Further, to the extent that the prior art knowledge has not been explicitly incorporated by reference herein above, it is explicitly incorporated herein in its entirety. All publications referenced are incorporated herein by reference in their entireties.
EXAMPLE EMBODIMENTS
Embodiments of the techniques, apparatuses, and systems described above include, but are not limited to, the following enumerated examples:
1. A method for estimating channel characteristics for a channel between a first device having a plurality of antenna elements arranged according to an antenna geometry and a second device having at least one antenna element, the method comprising: estimating a first channel covariance matrix based on training symbols transmitted in a first direction, from the second device towards the first device, in a first frequency band; transforming the first channel covariance matrix to a first virtual domain covariance matrix, using a unitary matrix that is a function of the antenna geometry; mapping the first virtual domain covariance matrix to a second virtual domain covariance matrix, using a mapping function that estimates a mapping of virtual domain channel covariance in the first direction to virtual domain channel covariance in a second direction, from the first device towards the second device; transforming the second virtual domain covariance matrix to a second channel covariance matrix representative of the radio channel in the second direction, in a second frequency band.
2. The method of example embodiment 1, wherein the mapping function is based on a deep neural network.
3. The method of example embodiment 1 or 2, wherein the mapping function is based on the generator of a generative adversarial network, GAN.
4. The method of example embodiment 3, wherein the mapping function is based on the U-net architecture.
5. The method of any one of example embodiments 1-4, further comprising transmitting a signal to the second device from the first device, in the second frequency band, using antenna weights determined from the second channel covariance matrix.
6. The method of any one of example embodiments 1-5, wherein the method comprises, prior to said transforming steps and mapping step: training the mapping function using a deep generative model, multiple estimates of the channel covariance in the first direction, and multiple estimates of the channel covariance in the second direction.
7. The method of example embodiment 6, wherein said training comprises using a generative adversarial network, GAN, the mapping function corresponding to the generator network of the GAN.
8. An apparatus for estimating channel characteristics for a channel between a first device having a plurality of antenna elements arranged according to an antenna geometry and a second device having at least one antenna element, the apparatus comprising processing circuitry configured to: estimate a first channel covariance matrix based on training symbols transmitted in a first direction, from the second device towards the first device, in a first frequency band; transform the first channel covariance matrix to a first virtual domain covariance matrix, using a unitary matrix that is a function of the antenna geometry; map the first virtual domain covariance matrix to a second virtual domain covariance matrix, using a mapping function that estimates a mapping of virtual domain channel covariance in the first direction to virtual domain channel covariance in a second direction, from the first device towards the second device; transform the second virtual domain covariance matrix to a second channel covariance matrix representative of the radio channel in the second direction, in a second frequency band.
9. The apparatus of example embodiment 8, wherein the mapping function is based on a deep neural network.
10. The apparatus of example embodiment 8 or 9, wherein the mapping function is based on the generator of a generative adversarial network, GAN.
11. The apparatus of example embodiment 10, wherein the mapping function is based on the U-net architecture.
12. The apparatus of any one of example embodiments 8-11, wherein the processing circuitry is further configured to transmit a signal to the second device from the first device, in the second frequency band, via the plurality of antenna elements, using antenna weights determined from the second channel covariance matrix.
13. The apparatus of any one of example embodiments 8-12, wherein the processing circuitry is further configured to, prior to said transforming and mapping: train the mapping function using a deep generative model, multiple estimates of the channel covariance in the first direction, and multiple estimates of the channel covariance in the second direction.
14. The apparatus of example embodiment 13, wherein the processing circuitry is configured to perform the training using a generative adversarial network, GAN, the mapping function corresponding to the generator network of the GAN.
15. The apparatus of example embodiment 13 or 14, wherein the apparatus comprises a base station comprising first processing circuitry configured to carry out said estimating, transforming, and mapping operations, and a second node comprising second processing circuitry configured to carry out said training.
16. A computer program product comprising program instructions for execution by processing circuitry, the program instructions being configured to cause the processing circuitry to estimate channel characteristics for a channel between a first device having a plurality of antenna elements arranged according to an antenna geometry and a second device having at least one antenna element by: estimating a first set of one or more parameters for the channel, based on training symbols transmitted from the second device to the first device; estimating a second set of one or more parameters for the channel, based on training symbols or reference signals transmitted from the first device to the second device and based on the first set of one or more parameters.
17. A computer-readable medium comprising, stored thereupon, the computer program product of example embodiment 16.
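As a purely illustrative aid to example embodiment 1, and not a definition of any claimed implementation, the following Python sketch shows one way the estimate, transform, map, and transform-back sequence could be exercised numerically. It is a minimal sketch under stated assumptions: a uniform linear array is assumed, so the unitary matrix is taken to be the unitary discrete Fourier transform (DFT) matrix, and a placeholder callable stands in for the machine-learning-derived mapping function (for example, a trained U-net-based GAN generator per example embodiments 3, 4, 6, and 7). The function names are hypothetical.

import numpy as np

def estimate_covariance(Y: np.ndarray) -> np.ndarray:
    """Sample covariance of N x T received uplink training symbols."""
    N, T = Y.shape
    return (Y @ Y.conj().T) / T

def unitary_dft(N: int) -> np.ndarray:
    """Unitary DFT matrix, used here as the geometry-dependent unitary matrix
    under the assumption of an N-element uniform linear array."""
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

def uplink_to_downlink_covariance(Y_ul: np.ndarray, mapping_fn) -> np.ndarray:
    """Sketch of the sequence in example embodiment 1: estimate, transform,
    map (learned), and transform back. mapping_fn stands in for the trained
    machine-learning-derived mapping function."""
    N = Y_ul.shape[0]
    U = unitary_dft(N)
    R_ul = estimate_covariance(Y_ul)      # first channel covariance matrix
    C_ul = U.conj().T @ R_ul @ U          # first virtual domain covariance matrix
    C_dl = mapping_fn(C_ul)               # second virtual domain covariance matrix
    R_dl = U @ C_dl @ U.conj().T          # second channel covariance matrix
    return R_dl

# Usage with a trivial identity mapping, purely to exercise the data flow:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y_ul = rng.standard_normal((32, 200)) + 1j * rng.standard_normal((32, 200))
    R_dl_hat = uplink_to_downlink_covariance(Y_ul, mapping_fn=lambda C: C)
    print(R_dl_hat.shape)  # (32, 32)

In this sketch, only mapping_fn would be learned from data (for example by training a GAN generator as in example embodiments 6, 7, 13, and 14); the transforms into and out of the virtual domain are fixed once the antenna geometry is known.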
REFERENCES
1. E. Björnson, E. G. Larsson, and T. L. Marzetta, “Massive MIMO: ten myths and one critical question,” IEEE Commun. Mag., vol. 54, no. 2, pp. 114–123, 2016.
2. F. Rusek, D. Persson, B. K. Lau, E. G. Larsson, T. L. Marzetta, O. Edfors, and F. Tufvesson, “Scaling up MIMO: Opportunities and challenges with very large arrays,” IEEE Signal Process. Mag., vol. 30, no. 1, pp. 40–60, Jan. 2013.
3. L. Miretti, R. L. Cavalcante, and S. Stańczak, “FDD massive MIMO channel spatial covariance conversion using projection methods,” in IEEE Int. Conf. Acoustics, Speech and Signal Process. (ICASSP), Calgary, AB, Canada, Apr. 2018, pp. 3609–3613.
4. M. Jordan, A. Dimofte, X. Gong, and G. Ascheid, “Conversion from uplink to downlink spatio-temporal correlation with cubic splines,” in IEEE 69th Vehicular Tech. Conf., Barcelona, Spain, Apr. 2009, pp. 1–5.
5. S. Haghighatshoar, M. B. Khalilsarai, and G. Caire, “Multi-band covariance interpolation with applications in massive MIMO,” in IEEE Int. Symp. Inf. Theory (ISIT), Vail, CO, USA, June 2018, pp. 386–390.
6. L. Miretti, R. L. Cavalcante, and S. Stańczak, “Downlink channel spatial covariance estimation in realistic FDD massive MIMO systems,” in IEEE Global Conf. Signal and Inf. Process. (GlobalSIP), Anaheim, CA, USA, Nov. 2018, pp. 161–165.
7. P. Dong, H. Zhang, and G. Y. Li, “Machine learning prediction based CSI acquisition for FDD massive MIMO downlink,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Abu Dhabi, UAE, 2018, pp. 1–6.
8. M. S. Safari, V. Pourahmadi, and S. Sodagari, “Deep UL2DL: Data-driven channel knowledge transfer from uplink to downlink,” IEEE Open J. Veh. Tech., vol. 1, pp. 29–44, 2020.
9. M. Alrabeiah and A. Alkhateeb, “Deep learning for TDD and FDD massive MIMO: Mapping channels in space and frequency,” in IEEE Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, USA, 2019, pp. 1465–1470.
10. Y. Yang, F. Gao, G. Y. Li, and M. Jian, “Deep learning-based downlink channel prediction for FDD massive MIMO system,” IEEE Commun. Lett., vol. 23, no. 11, pp. 1994–1998, 2019.
11. Y. Yang, F. Gao, Z. Zhong, B. Ai, and A. Alkhateeb, “Deep transfer learning based downlink channel prediction for FDD massive MIMO systems,” IEEE Trans. Commun., pp. 1–1, 2020.
12. M. Barzegar Khalilsarai, S. Haghighatshoar, X. Yi, and G. Caire, “FDD massive MIMO via UL/DL channel covariance extrapolation and active channel sparsification,” IEEE Trans. Wireless Commun., vol. 18, no. 1, pp. 121–135, 2019.
13. B. Banerjee, R. C. Elliott, W. A. Krzymień, and H. Farmanbar, “Towards FDD massive MIMO: Downlink channel covariance matrix estimation using conditional generative adversarial networks,” in IEEE Annual Int. Symp. Personal, Indoor and Mobile Radio Commun. (PIMRC), Helsinki, Finland, Sept. 2021, pp. 940–946.
14. A. Decurninge, M. Guillaud, and D. T. M. Slock, “Channel covariance estimation in massive MIMO frequency division duplex systems,” in IEEE Globecom Workshops (GC Wkshps), San Diego, CA, USA, Dec. 2015, pp. 1–6.
15. 3GPP, “5G; study on channel model for frequencies from 0.5 to 100 GHz,” 3GPP TR 38.901, version 14.0.0, Release 14, May 2017.
16. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” CoRR, vol. abs/1505.04597, 2015. [Online]. arXiv reference number 1505.04597.
17. P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” CoRR, vol. abs/1611.07004, 2016. [Online]. arXiv reference number 1611.07004.
18. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Int. Conf. Learning Representations (ICLR), San Diego, CA, USA, May 2015. [Online]. arXiv reference number 1412.6980.
ABBREVIATIONS
Abbreviation - Explanation
BS - Base station
CNN - Convolutional neural network
DOA - Direction of arrival
DOD - Direction of departure
FDD - Frequency division duplexing
GAN - Generative adversarial network
H-APS - Horizontal angular power spectra
i.i.d. - Independent and identically distributed
MIMO - Multiple-input multiple-output
OFDM - Orthogonal frequency division multiplexing
ReLU - Rectified linear unit
TDD - Time division duplexing
UE - User equipment
ULA - Uniform linear array
V-APS - Vertical angular power spectra