Paper source: https://sci-hub.do/10.1109/lwc.2018.2818160

These are study notes, not a full translation; the context is paraphrased from my general understanding. Comments and corrections are welcome.

Abstract:

In frequency division duplex mode, the downlink channel state information (CSI) should be sent to the base station through feedback links so that the potential gains of a massive multiple-input multiple-output (MIMO) system can be exhibited. (the downlink CSI must be fed back to the base station over feedback links)
However, such a transmission is hindered by excessive feedback overhead. (the feedback overhead is excessive)
In this letter, we use deep learning technology to develop CsiNet, a novel CSI sensing and recovery mechanism that learns to effectively use channel structure from training samples. (deep learning is used to build CsiNet, a new CSI sensing and recovery mechanism that learns from training samples how to exploit the channel structure effectively)

CsiNet learns a transformation from CSI to a near-optimal number of representations (or codewords) and an inverse transformation from codewords to CSI. (a forward transformation and its inverse)

We perform experiments to demonstrate that CsiNet can recover CSI with significantly improved reconstruction quality compared with existing compressive sensing (CS)-based methods. (CSI recovery)

Even at excessively low compression ratios where CS-based methods cannot work, CsiNet retains effective beamforming gain. (one of CsiNet's advantages)

Introduction:

However, the feedback quantities resulting from these approaches need to be scaled linearly with the number of transmit antennas and are prohibitive in a massive MIMO regime. (the feedback overhead of these approaches grows linearly with the number of transmit antennas, which is prohibitive in massive MIMO)

In particular, correlated CSI can be transformed into an uncorrelated sparse vector in some bases; thus, one can use compressive sensing (CS) to obtain a sufficiently accurate estimate of a sparse vector from an underdetermined linear system. (CS can recover a sufficiently accurate estimate of a sparse vector from an underdetermined linear system)
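As a toy illustration of CS recovery from an underdetermined system (the dimensions, sensing matrix, and regularization weight below are my own choices, not the letter's setup), a sparse vector is compressed with a random Gaussian projection and recovered with the LASSO, the same simplest-sparsity-prior baseline [5] that appears in the experiments later:

```python
# Toy CS example: recover a k-sparse vector from m << n random measurements.
# All dimensions and the alpha value are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                          # signal length, measurements, sparsity

x = np.zeros(n)                               # k-sparse vector in some basis
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random projection (sensing matrix)
y = A @ x                                     # underdetermined measurements

lasso = Lasso(alpha=1e-3, max_iter=10_000)    # l1-regularized recovery
lasso.fit(A, y)
x_hat = lasso.coef_

print("NMSE:", np.sum((x - x_hat) ** 2) / np.sum(x ** 2))
```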

This concept has inspired the establishment of CSI feedback protocols based on CS and distributed compressive channel estimation. (CSI feedback protocols built on CS and distributed compressive channel estimation)

However, these algorithms struggle to recover compressive CSI because they use a simple sparsity prior while the channel matrix is not perfectly sparse but only approximately sparse. (the existing algorithms [5], [6] have difficulty recovering the compressed CSI)

Moreover, the changes among most adjacent elements in the channel matrix are subtle. These properties complicate modeling their priors. (changes between adjacent elements of the channel matrix are subtle)

A point worth noting: Although researchers have designed advanced algorithms (e.g., TVAL3 [7] and BM3D-AMP [8]) that can impose elaborate priors on reconstruction, these algorithms do not significantly boost CSI recovery quality because hand-crafted priors remain far from practice. (the algorithms have improved, but they still cannot significantly raise CSI recovery quality)

Three central problems inherent to current CS-based methods:

(1) First, they rely heavily on the assumption that channels are sparse in some bases. (in practice the channel is only approximately sparse)

(2) Second, CS uses random projection and does not fully exploit channel structures. (random projection cannot fully exploit the channel structure)

(3) Third, existing CS algorithms for signal reconstruction are often iterative approaches, which makes reconstruction slow. (iterative recovery is inefficient)

This letter develops a DL-based mechanism for CSI sensing (encoder) and recovery (decoder). CsiNet has the following features:

(1) Encoder

Rather than using random projection, CsiNet learns a transformation from original channel matrices to compressed representations (codewords) through training data. (DL learns the transformation from the original channel matrices to compressed representations)

(2) Decoder

CsiNet learns an inverse transformation from codewords to the original channels. The inverse transformation is noniterative and multiple orders of magnitude faster than iterative algorithms. (the learned inverse transformation is noniterative, hence much faster than iterative algorithms)

Although DL exhibits state-of-the-art performance in natural image reconstruction, whether DL can also show its ability in wireless channel reconstruction is unclear because this reconstruction is more sophisticated than image reconstruction. (whether DL reconstruction carries over from images to wireless channels is uncertain) (this motivates the core work of this letter: a DL-based CSI reduction and recovery method)

Even reconstructions at an excessively low compression rate retain sufficient content that allows effective beamforming gain.

SYSTEM MODEL AND CSI FEEDBACK

CSINET

We exploit the recent and popular convolutional neural networks (CNNs) for the encoder and decoder because they can exploit spatial local correlation by enforcing a local connectivity pattern among the neurons of adjacent layers. (CNNs serve as both the encoder and the decoder) (they establish a local connectivity pattern)

The overview of the proposed DL architecture, named CsiNet, is shown in Fig. 1(b), in which the values S1 × S2 × S3 denote the length, width, and number of feature maps, respectively.

The first layer of the encoder is a convolutional layer with the real and imaginary parts of H being its input. This layer uses kernels with dimensions of 3 × 3 to generate two feature maps. (the first convolutional layer generates two feature maps)

Following the convolutional layer, we reshape the feature maps into a vector and use a fully connected layer to generate the codeword s, which is a real-valued vector of size M. (the feature maps are reshaped into a vector, and a fully connected layer produces the codeword s, a real-valued vector of size M)

The first two layers mimic the projection of CS and serve as encoders. However, in contrast to random projections in CS, CsiNet attempts to translate the extracted feature maps into a codeword.
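A minimal Keras sketch of this encoder, assuming a channels-last input of shape Nc × Nt × 2 (real and imaginary parts of H stacked as two channels) and placeholder dimensions Nc = Nt = 32, M = 64; the activation/normalization placement follows the description later in these notes and is otherwise my guess, not the authors' released code:

```python
# Sketch of the CsiNet encoder (shapes and placement choices are assumptions).
from tensorflow.keras import layers, Model

Nc, Nt, M = 32, 32, 64                              # assumed dimensions and codeword size

h_in = layers.Input(shape=(Nc, Nt, 2))              # real and imaginary parts of H
x = layers.Conv2D(2, (3, 3), padding="same")(h_in)  # 3x3 conv -> two feature maps
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
x = layers.Flatten()(x)                             # reshape the feature maps into a vector
s = layers.Dense(M)(x)                              # fully connected layer -> codeword s

encoder = Model(h_in, s, name="csinet_encoder")
```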

Once we obtain the codeword s, we use several layers (as a decoder) to map it back into the channel matrix H. (the decoder maps the codeword back into the channel matrix)

The first layer of the decoder is a fully connected layer that considers s as input and outputs two matrices of size Nc × Nt, which serve as an initial estimate of the real and imaginary parts of H. (the decoder starts with a fully connected layer whose two output matrices serve as the initial estimates of the real and imaginary parts)

The initial estimate is then fed into several "RefineNet units" that continuously refine the reconstruction.

Each RefineNet unit consists of four layers, as shown in Fig. 1(b). In each RefineNet unit, the first layer is the input layer.

The remaining three layers all use 3 × 3 kernels.

The second and third layers generate 8 and 16 feature maps, respectively, and the final layer generates the final reconstruction of H.

Using appropriate zero padding, the feature maps produced by the three convolutional layers are kept at the same size as the input channel matrix, Nc × Nt. (appropriate zero padding keeps the convolutional feature maps at the input size Nc × Nt)

The rectified linear unit (ReLU), ReLU(x) = max(x, 0), is used as the activation function, and we introduce batch normalization to each layer. (ReLU is the activation function, and batch normalization is applied at each layer)

First, in contrast to conventional implementations, our target is refinement rather than dimensionality reduction. (the goal is refinement, not dimensionality reduction)

Second, in the RefineNet unit, we introduce identity shortcut connections that directly pass data flow to later layers. (identity shortcuts pass the data flow directly to later layers)

This approach is inspired by the deep residual network [12], [17], which avoids the vanishing gradient problem caused by multiple stacked nonlinear transformations. (the deep residual network of [12], [17] is adopted to avoid the vanishing-gradient problem caused by stacking multiple nonlinear transformations)
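Continuing the encoder sketch above, a hedged rendering of the decoder: a fully connected layer produces the initial Nc × Nt × 2 estimate, and each RefineNet unit stacks three 3 × 3 convolutions (8, 16, and 2 feature maps) with batch normalization, ReLU, and an identity shortcut. The number of RefineNet units here is my assumption:

```python
# Sketch of the CsiNet decoder with RefineNet units (details not fixed by the
# letter, e.g., the number of units, are assumptions).
from tensorflow.keras import layers, Model

Nc, Nt, M = 32, 32, 64                         # same assumed dimensions as the encoder

def refinenet_unit(x_in):
    """Input layer + three 3x3 conv layers (8, 16, 2 feature maps); zero
    padding keeps the Nc x Nt size, and an identity shortcut passes the
    data flow directly to the output to avoid vanishing gradients."""
    x = layers.Conv2D(8, (3, 3), padding="same")(x_in)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(16, (3, 3), padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(2, (3, 3), padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(layers.Add()([x, x_in]))  # identity shortcut connection

s_in = layers.Input(shape=(M,))                # codeword from the encoder
x = layers.Dense(Nc * Nt * 2)(s_in)            # initial estimate of Re/Im of H
x = layers.Reshape((Nc, Nt, 2))(x)
for _ in range(2):                             # assumed: two RefineNet units
    x = refinenet_unit(x)

decoder = Model(s_in, x, name="csinet_decoder")
```

The two parts would then be trained end to end; a plain MSE loss between H and its reconstruction matches the NMSE metric reported in the experiments.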

EXPERIMENTS

We compare CsiNet with three state-of-the-art CS-based methods, namely, LASSO l1-solver [5], TVAL3 [7], and BM3D-AMP [8].

In all experiments, we assume that the optimal regularization parameter of LASSO is given by an oracle.

LASSO provides the bottom-line result of the CS problem by considering only the simplest sparsity prior.

TVAL3 is a remarkably fast total variation-based recovery algorithm that considers increasingly elaborate priors.

BM3D-AMP is the most accurate compressive recovery algorithm in natural-image reconstruction.

In this letter: We also provide the corresponding results for CS-CsiNet, which only learns to recover CSI from CS measurements (or random linear measurements). The architecture of CS-CsiNet is identical to that of the decoder of CsiNet.

The recovered channel is compared with the original channel using the normalized mean squared error (NMSE).
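The NMSE follows the usual definition NMSE = E{‖H − Ĥ‖² / ‖H‖²}, reported in dB; a small helper of my own (not from the letter):

```python
# Normalized MSE between original and recovered channels, in dB.
import numpy as np

def nmse_db(h_true, h_hat):
    """NMSE = E{ ||H - H_hat||^2 / ||H||^2 }, energies taken over the last two axes."""
    err = np.sum(np.abs(h_true - h_hat) ** 2, axis=(-2, -1))
    ref = np.sum(np.abs(h_true) ** 2, axis=(-2, -1))
    return 10 * np.log10(np.mean(err / ref))
```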

CsiNet obtains the lowest NMSE values and significantly outperforms CS-based methods at all compression ratios. (CsiNet achieves the lowest NMSE, a clear improvement)

Compared with CS-CsiNet, CsiNet also provides significant gains, which are due to the sophisticated DL architecture in the encoder and decoder. (the gains come from the sophisticated DL architecture in both the encoder and the decoder)

When the compression ratio is reduced to 1/16, the CS-based methods can no longer function, whereas CsiNet and CS-CsiNet continue to perform well. (at a compression ratio of 1/16, the CS-based methods stop working, whereas CsiNet and CS-CsiNet still perform well)

Fig. 2 shows some reconstruction samples at different compression ratios along with the corresponding pseudo-gray plots of the strength of H. CsiNet clearly outperforms the other algorithms. (reconstruction samples at different compression ratios with pseudo-gray plots of the strength of H; CsiNet clearly performs best)

Furthermore, CSI recovery through CsiNet can be executed with a relatively lower overhead than that through CS-based algorithms because CsiNet requires only several layers of simple matrix-vector multiplications. (CSI recovery with CsiNet has lower overhead than with CS-based algorithms, needing only a few layers of simple matrix-vector multiplications)

Specifically, the average running times (in seconds) of LASSO, BM3D-AMP, TVAL3, and CsiNet are 0.1828, 0.5717, 0.3155, and 0.0035, respectively. CsiNet performs approximately 52 to 163 times faster than CS-based methods. (0.1828/0.0035 ≈ 52 and 0.5717/0.0035 ≈ 163, so CsiNet is roughly 52 to 163 times faster than the CS-based methods)

Finally, we provide some observations without showing their experimental details.

First, the DFT matrix Fa that is used to transform H̃ from the spatial domain into the angular domain is unnecessary.

CsiNet can also exhibit similar performance without employing Fa when retraining entire layers. This finding implies that CsiNet can be applied in other antenna configurations. (it applies to a wider range of antenna configurations)
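For reference, the transform being dropped here converts the spatial-frequency channel H̃ into the sparser angular-delay domain by multiplying with DFT matrices; a minimal numpy sketch, where the two-sided convention and the normalization are my assumptions (this excerpt names only Fa):

```python
# Spatial -> angular(-delay) domain transform via DFT matrices.
# The two-sided convention and normalization are assumptions.
import numpy as np
from scipy.linalg import dft

Nc, Nt = 32, 32
Fd = dft(Nc, scale="sqrtn")              # Nc x Nc unitary DFT (delay domain)
Fa = dft(Nt, scale="sqrtn")              # Nt x Nt unitary DFT (angular domain)

H_tilde = np.random.randn(Nc, Nt) + 1j * np.random.randn(Nc, Nt)
H = Fd @ H_tilde @ Fa.conj().T           # angular-delay domain channel
```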

Second, angular (or spatial) resolution increases with the number of antennas at the BS.

Accordingly, the reconstruction performances of all the algorithms improve because H becomes sparser. (reconstruction performance improves across the board because H is sparser)

CsiNet can be significantly improved because it is more capable of exploiting subtle changes among adjacent elements than CS-based methods.

CONCLUSION

We used DL in CsiNet, a novel CSI sensing and recovery mechanism.

CsiNet performed well at low compression ratios while reducing time complexity.
