Contents

  • Principle review
  • Building DeepFM with tf2.0
  • Implementing DeepFM with deepctr

Principle review

  • On the left, FM replaces the Wide part of Wide&Deep, strengthening the shallow network's ability to model feature interactions
  • The right side is the same as the Deep part of Wide&Deep: a multi-layer neural network that performs deep processing of all the features
  • The final output layer combines the output of the FM part with the output of the Deep part to produce the final prediction; this is the DeepFM structure (written out as a formula below)
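In formulas, this is the standard DeepFM formulation from the paper (both parts share the same feature embeddings):

\hat{y} = \sigma\big(y_{\text{FM}} + y_{\text{DNN}}\big), \qquad
y_{\text{FM}} = \langle w, x \rangle + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle v_i, v_j \rangle \, x_i x_j

where \sigma is the sigmoid function, w holds the first-order (linear) weights, v_i is the embedding vector of feature i, and y_{\text{DNN}} is the output of the multi-layer network fed with those same embeddings.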

For details, see this article:

https://blog.csdn.net/qq_42363032/article/details/113696907

Building DeepFM with tf2.0

import sys, time
import numpy as np
import pandas as pd
from tensorflow.keras.layers import *
import tensorflow.keras.backend as K
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import *
from tensorflow.keras.optimizers import *
from tensorflow.python.keras.models import save_model, load_model
from tensorflow.keras.models import model_from_yaml
from sklearn.preprocessing import LabelEncoder
from sklearn.utils import shuffle
from sklearn.metrics import f1_score, accuracy_score, roc_curve, precision_score, recall_score, roc_auc_score
from toolsnn import *


class myDeepFM():
    def __init__(self, data):
        self.cols = data.columns.values
        # define the feature groups: Criteo dense features start with "I", sparse with "C"
        self.dense_feats = [f for f in self.cols if f[0] == "I"]
        self.sparse_feats = [f for f in self.cols if f[0] == "C"]

    ''' process dense features '''
    def process_dense_feats(self, data, feats):
        d = data.copy()
        d = d[feats].fillna(0.0)
        for f in feats:
            d[f] = d[f].apply(lambda x: np.log(x + 1) if x > -1 else -1)
        return d

    ''' process sparse features '''
    def process_sparse_feats(self, data, feats):
        d = data.copy()
        d = d[feats].fillna("-1")
        for f in feats:
            label_encoder = LabelEncoder()
            d[f] = label_encoder.fit_transform(d[f])
        return d

    ''' first-order features '''
    def first_order_features(self):
        # build the inputs for the dense features
        dense_inputs = []
        for f in self.dense_feats:
            _input = Input([1], name=f)
            dense_inputs.append(_input)
        # concatenate the inputs so a Dense layer can be attached
        concat_dense_inputs = Concatenate(axis=1)(dense_inputs)  # ?, 13 (13 = number of dense features)
        # a fully connected layer with one unit: the weighted sum of the dense variables
        fst_order_dense_layer = Dense(1)(concat_dense_inputs)  # ?, 1
        # build one Input per sparse feature; this makes it easy to build the second-order interactions later
        sparse_inputs = []
        for f in self.sparse_feats:
            _input = Input([1], name=f)
            sparse_inputs.append(_input)
        sparse_1d_embed = []
        for i, _input in enumerate(sparse_inputs):
            f = self.sparse_feats[i]
            voc_size = total_data[f].nunique()  # relies on the global total_data built in __main__
            # l2 regularization against overfitting
            reg = tf.keras.regularizers.l2(0.5)
            _embed = Embedding(voc_size, 1, embeddings_regularizer=reg)(_input)
            # the Embedding output is 2-D, so a Flatten layer is needed before any Dense layer
            _embed = Flatten()(_embed)
            sparse_1d_embed.append(_embed)
        # sum the embedding-lookup results wi
        fst_order_sparse_layer = Add()(sparse_1d_embed)
        # merge the linear part
        linear_part = Add()([fst_order_dense_layer, fst_order_sparse_layer])
        return dense_inputs, sparse_inputs, linear_part

    ''' second-order features '''
    def second_order_features(self, sparse_inputs):
        # embedding size
        k = 8
        # only the sparse features take part in the second-order interactions
        sparse_kd_embed = []
        for i, _input in enumerate(sparse_inputs):
            f = self.sparse_feats[i]
            voc_size = total_data[f].nunique()
            reg = tf.keras.regularizers.l2(0.7)
            _embed = Embedding(voc_size, k, embeddings_regularizer=reg)(_input)
            sparse_kd_embed.append(_embed)
        # 1. concatenate all sparse embeddings into an (n, k) matrix, n = feature count, k = embedding size
        concat_sparse_kd_embed = Concatenate(axis=1)(sparse_kd_embed)  # ?, n, k
        # 2. sum first, then square
        sum_kd_embed = Lambda(lambda x: K.sum(x, axis=1))(concat_sparse_kd_embed)  # ?, k
        square_sum_kd_embed = Multiply()([sum_kd_embed, sum_kd_embed])  # ?, k
        # 3. square first, then sum
        square_kd_embed = Multiply()([concat_sparse_kd_embed, concat_sparse_kd_embed])  # ?, n, k
        sum_square_kd_embed = Lambda(lambda x: K.sum(x, axis=1))(square_kd_embed)  # ?, k
        # 4. subtract and halve
        sub = Subtract()([square_sum_kd_embed, sum_square_kd_embed])  # ?, k
        sub = Lambda(lambda x: x * 0.5)(sub)  # ?, k
        snd_order_sparse_layer = Lambda(lambda x: K.sum(x, axis=1, keepdims=True))(sub)  # ?, 1
        return concat_sparse_kd_embed, snd_order_sparse_layer

    ''' DNN part '''
    def dnn(self, concat_sparse_kd_embed):
        flatten_sparse_embed = Flatten()(concat_sparse_kd_embed)  # ?, n*k
        fc_layer = Dropout(0.5)(Dense(256, activation='relu')(flatten_sparse_embed))  # ?, 256
        fc_layer = Dropout(0.3)(Dense(256, activation='relu')(fc_layer))  # ?, 256
        fc_layer = Dropout(0.1)(Dense(256, activation='relu')(fc_layer))  # ?, 256
        fc_layer = Dropout(0.1)(Dense(128, activation='relu')(fc_layer))  # ?, 128
        fc_layer = Dropout(0.1)(Dense(32, activation='relu')(fc_layer))  # ?, 32
        fc_layer_output = Dense(1)(fc_layer)  # ?, 1
        return fc_layer_output

    ''' output '''
    def outRes(self, linear_part, snd_order_sparse_layer, fc_layer_output):
        output_layer = Add()([linear_part, snd_order_sparse_layer, fc_layer_output])
        output_layer = Activation("sigmoid")(output_layer)
        return output_layer

    ''' compile the model '''
    def compile_model(self, dense_inputs, sparse_inputs, output_layer):
        model = Model(dense_inputs + sparse_inputs, output_layer)
        # model.summary()
        mNadam = Adam(lr=1e-4, beta_1=0.98, beta_2=0.999)
        model.compile(optimizer=mNadam,
                      loss="binary_crossentropy",
                      metrics=['Precision', 'Recall', tf.keras.metrics.AUC(name='auc')])
        return model


def GeneratorRandomPatchs(train_x, train_y, batch_size):
    totl, col = np.array(train_x).shape  # (39, 500000): feature count, sample count
    # loop forever so that steps_per_epoch * epochs batches are always available
    while True:
        for index in range(0, col, batch_size):
            xs, ys = [], []
            for t in range(totl):
                xs.append(train_x[t][index: index + batch_size])
            ys.append(train_y[0][index: index + batch_size])
            yield (xs, ys)


if __name__ == '__main__':
    data = pd.read_csv('../../data/criteo_sampled_data.csv')
    # data = pd.read_csv(sys.argv[1])
    # data = shuffle(data)
    print(data.shape)
    print(list(data.columns))
    deepFMmodel = myDeepFM(data)

    # feature processing
    data_dense = deepFMmodel.process_dense_feats(data, deepFMmodel.dense_feats)
    data_sparse = deepFMmodel.process_sparse_feats(data, deepFMmodel.sparse_feats)
    print('dense count:', len(deepFMmodel.dense_feats))
    print('sparse count:', len(deepFMmodel.sparse_feats))
    total_data = pd.concat([data_dense, data_sparse], axis=1)
    total_data['label'] = data['label']
    print(total_data.head(3))

    # first-order features
    dense_inputs, sparse_inputs, linear_part = deepFMmodel.first_order_features()
    # second-order features
    concat_sparse_kd_embed, snd_order_sparse_layer = deepFMmodel.second_order_features(sparse_inputs)
    # DNN part
    fc_layer_output = deepFMmodel.dnn(concat_sparse_kd_embed)
    # output
    output_layer = deepFMmodel.outRes(linear_part, snd_order_sparse_layer, fc_layer_output)
    # compile the model
    model = deepFMmodel.compile_model(dense_inputs, sparse_inputs, output_layer)

    # training
    train_data = total_data.loc[:500000 - 1]
    valid_data = total_data.loc[500000:]
    print('train_data len: ', len(train_data))
    print('validation_data len: ', len(valid_data))
    train_dense_x = [train_data[f].values for f in deepFMmodel.dense_feats]
    train_sparse_x = [train_data[f].values for f in deepFMmodel.sparse_feats]
    train_x = train_dense_x + train_sparse_x
    train_y = [train_data['label'].values]
    val_dense_x = [valid_data[f].values for f in deepFMmodel.dense_feats]
    val_sparse_x = [valid_data[f].values for f in deepFMmodel.sparse_feats]
    val_x = val_dense_x + val_sparse_x
    val_y = [valid_data['label'].values]
    print(train_x)
    print(train_y)
    # exit()

    # model.fit(train_x, train_y,
    #           batch_size=64, epochs=5, verbose=2,
    #           validation_data=(val_x, val_y),
    #           use_multiprocessing=True, workers=4)
    batch_size = 2048
    model.fit_generator(GeneratorRandomPatchs(train_x, train_y, batch_size),
                        validation_data=(val_x, val_y),
                        steps_per_epoch=len(train_data) // batch_size,
                        epochs=5,
                        verbose=2,
                        shuffle=True)

    # path = '/ad_ctr/data/deepFMmodel-10-27.h5'
    path = '../../data/my_deepFMmodel-10-27.h5'
    model.save(path)
    print('model saved', time.strftime("%H:%M:%S", time.localtime(time.time())))

    modelnew = tf.keras.models.load_model(path)
    y_pre = modelnew.predict(x=val_x, batch_size=256)
    print(type(y_pre))
    print(y_pre.shape)
    y = val_y[0]
    y_score = y_pre.reshape(-1)
    # threshold at 0.5; int(i) would truncate every probability below 1.0 to 0
    y_pre = [1 if i >= 0.5 else 0 for i in y_score]
    print(' ', set(y_pre))
    f1 = f1_score(y, y_pre)
    auc = roc_auc_score(y, y_score)  # AUC is computed on the raw scores, not the thresholded labels
    acc = accuracy_score(y, y_pre)
    print('precision: %.5f' % (precision_score(y, y_pre)))
    print('recall: %.5f' % (recall_score(y, y_pre)))
    print('F1: %.5f' % (f1))
    print('AUC: %.5f' % (auc))
    print('accuracy: %.5f' % (acc))
    accDealWith(y, y_pre)
    print('full evaluation done', time.strftime("%H:%M:%S", time.localtime(time.time())))
    print('==============================')
    print()
    print()
    print()
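The second_order_features method above avoids enumerating all feature pairs by using the standard FM identity: the sum of pairwise element-wise products equals half of (square of the sum minus sum of the squares), which drops the cost from O(n²k) to O(nk). A quick NumPy sanity check of that identity, independent of the model code (toy values):

import numpy as np

# toy setup: n sparse features, each with a k-dimensional embedding
n, k = 4, 8
rng = np.random.default_rng(0)
V = rng.standard_normal((n, k))  # one embedding row per feature

# brute force: element-wise products summed over all pairs i < j -> shape (k,)
pairwise = sum(V[i] * V[j] for i in range(n) for j in range(i + 1, n))

# FM trick: 0.5 * ((sum of rows)^2 - sum of squared rows) -> the same (k,) vector
trick = 0.5 * (V.sum(axis=0) ** 2 - (V ** 2).sum(axis=0))

assert np.allclose(pairwise, trick)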

Implementing DeepFM with deepctr

import os, warnings, time, sys
import pickle
import matplotlib.pyplot as plt
import pandas as pd, numpy as np
from sklearn.utils import shuffle
from sklearn.metrics import f1_score, accuracy_score, roc_curve, precision_score, recall_score, roc_auc_score
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, MinMaxScaler, OneHotEncoder
from deepctr.models import DeepFM, xDeepFM, MLR, DeepFEFM, DIN, AFM
from deepctr.feature_column import SparseFeat, DenseFeat, get_feature_names
from deepctr.layers import custom_objects
from tensorflow.python.keras.models import save_model, load_model
from tensorflow.keras.models import model_from_yaml
import tensorflow as tf
from tensorflow.python.ops import array_ops
import tensorflow.keras.backend as K
from sklearn import datasets
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
from keras.models import model_from_json
from tensorflow.keras.callbacks import *
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras.optimizers import *
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import one_hot
from keras.layers.embeddings import Embedding
from toolsnn import *


def train_deepFM2():
    print('DeepFM training starts ', time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time())))
    start_time_start = time.time()
    # pdtrain:     565485 positives, 1133910 negatives, ratio 1 : 2.0052
    # pdtest:      565485 positives, 1134505 negatives, ratio 1 : 2.0063
    # pdeval_full: 46 positives, 8253 negatives, ratio 1 : 179.413
    pdtrain = pd.read_csv(train_path_ascii)
    pdtest = pd.read_csv(test_path_ascii)
    data = pd.concat([pdtrain, pdtest[pdtest['y'] == 0]], axis=0, ignore_index=True)
    data = data.drop(['WilsonClickRate_all', 'WilsonClickRate_yesterday', 'WilsonAd_clickRate_all',
                      'WilsonAd_clickRate_yesterday'], axis=1)
    # numericize user id, ad id, user device and ad-slot id via ASCII:
    # sum the ASCII codes of the characters to get one numeric value per string, then embed it
    data['suuid'] = data['suuid'].apply(lambda x: sum([ord(i) for i in x]))
    data['advertisement'] = data['advertisement'].apply(lambda x: sum([ord(i) for i in x]))
    # data['position'] = data['position'].apply(lambda x: sum([ord(i) for i in x]))  # the ad-slot id is already a float, embed it directly
    data['user_modelMake'] = data['user_modelMake'].apply(lambda x: sum([ord(i) for i in x]))
    # double -> float
    data = transformDF(data, ['reserve_price', 'reserve_price_cpc', 'clickRate_all',
                              'clickRate_yesterday', 'ad_clickRate_yesterday'], float)

    '''   feature processing   '''
    global sparsecols, densecols
    # sparse: one-hot
    sparsecols = ['hour', 'advert_place', 'province_id', 'port_type', 'user_osID', 'is_holidays', 'is_being',
                  'is_outflow', 'advertiser', 'ad_from', 'payment']
    # ascii embedding
    sparse_ascii = ['suuid', 'advertisement', 'position', 'user_modelMake']
    # dense: min-max normalized
    densecols = ['W', 'H', 'reserve_price', 'reserve_price_cpc', 'is_rest_click', 'clickPerHour_yesterday',
                 'display_nums_all', 'click_nums_all', 'display_nums_yesterday', 'click_nums_yesterday',
                 'ad_display_all', 'ad_click_all', 'ad_display_yesterday', 'ad_click_yesterday']
    # dense: click rates
    ratecols = ['WHrate', 'clickRate_all', 'clickRate_yesterday', 'ad_clickRate_yesterday']

    global namesoh
    namesoh = {}
    for sparse in sparsecols:
        onehot = OneHotEncoder()
        arrays = onehot.fit_transform(np.array(data[sparse]).reshape(-1, 1))
        # splice the one-hot sparse matrix back into the original df
        arrays = arrays.toarray()
        names = [sparse + '_' + str(n) for n in range(len(arrays[0]))]
        namesoh[sparse] = names
        data = pd.concat([data, pd.DataFrame(arrays, columns=names)], axis=1)
        data = data.drop([sparse], axis=1)
        # persist the encoder
        with open(feature_encode_path.format(sparse) + '.pkl', 'wb') as f:
            pickle.dump(onehot, f)
        # print(' {} one-hot done'.format(sparse))
    print(' one-hot done', time.strftime("%H:%M:%S", time.localtime(time.time())))

    for dense in densecols:
        mms = MinMaxScaler(feature_range=(0, 1))
        data[dense] = mms.fit_transform(np.array(data[dense]).reshape(-1, 1))
        with open(feature_encode_path.format(dense) + '.pkl', 'wb') as f:
            pickle.dump(mms, f)
        # print(' {} normalization done'.format(dense))
    print(' normalization done', time.strftime("%H:%M:%S", time.localtime(time.time())))
    print(' columns: ', len(list(data.columns)))

    '''   train / test / validation split   '''
    train_data, test_data = getRata2(data, num=1)
    _, val_data = train_test_split(test_data, test_size=0.2, random_state=1, shuffle=True)
    train_data = shuffle(train_data)
    test_data = shuffle(test_data)
    val_data = shuffle(val_data)
    negBpow(train_data, 'train set')
    negBpow(val_data, 'validation set')
    negBpow(test_data, 'test set')
    print(' train_data shape: ', train_data.shape)
    print(' val_data shape: ', val_data.shape)
    print(' test_data shape: ', test_data.shape)

    '''   first-order features   '''
    sparse_features = []
    for value in namesoh.values():
        for v in value:
            sparse_features.append(v)
    dense_features = densecols + ratecols

    '''   second-order features   '''
    sparse_feature_columns1 = [SparseFeat(feat, vocabulary_size=int(train_data[feat].max() + 1), embedding_dim=4)
                               for i, feat in enumerate(sparse_features)]
    sparse_feature_columns2 = [SparseFeat(feat, vocabulary_size=int(train_data[feat].max() + 1), embedding_dim=4)
                               for i, feat in enumerate(sparse_ascii)]
    sparse_feature_columns = sparse_feature_columns1 + sparse_feature_columns2
    dense_feature_columns = [DenseFeat(feat, 1) for feat in dense_features]
    print(' sparse_features count: ', len(sparse_features))
    print(' dense_features count: ', len(dense_features))

    '''   DNN   '''
    dnn_feature_columns = sparse_feature_columns + dense_feature_columns
    '''   FM   '''
    linear_feature_columns = sparse_feature_columns + dense_feature_columns

    global feature_names
    feature_names = get_feature_names(linear_feature_columns + dnn_feature_columns)
    print(' feature_names: ', feature_names)

    '''   feed input   '''
    train_x = {name: train_data[name].values for name in feature_names}
    test_x = {name: test_data[name].values for name in feature_names}
    val_x = {name: val_data[name].values for name in feature_names}
    train_y = train_data[['y']].values
    test_y = test_data[['y']].values
    val_y = val_data[['y']].values
    print(' data processing done', time.strftime("%H:%M:%S", time.localtime(time.time())))
    # print(' train_model_input: ', train_x)
    # print(' val_model_input: ', val_x)
    # print('train_y: ', train_y, train_y.shape)

    deep = DeepFM(linear_feature_columns, dnn_feature_columns,
                  dnn_hidden_units=(256, 128, 64, 32, 1),
                  l2_reg_linear=0.01, l2_reg_embedding=0.01,
                  dnn_dropout=0.2,
                  dnn_activation='relu', dnn_use_bn=True, task='binary')
    mNadam = Adam(lr=1e-4, beta_1=0.95, beta_2=0.96)
    deep.compile(optimizer=mNadam, loss='binary_crossentropy',
                 metrics=['AUC', 'Precision', 'Recall'])
    print(' network built', time.strftime("%H:%M:%S", time.localtime(time.time())))

    print(' training starts ', time.strftime("%H:%M:%S", time.localtime(time.time())))
    start_time = time.time()
    '''   training   '''
    # early stopping: stop when validation precision improves by less than min_delta
    earlystop_callback = EarlyStopping(monitor='val_precision', min_delta=0.001, mode='max',
                                       verbose=2, patience=3)
    generator_flag = False    # fit
    # generator_flag = True   # fit_generator
    if not generator_flag:
        history = deep.fit(train_x, train_y,
                           validation_data=(val_x, val_y),
                           batch_size=2000,
                           epochs=3000,
                           verbose=2,
                           shuffle=True,
                           # callbacks=[earlystop_callback]
                           )
    else:
        batch_size = 2000
        train_nums = len(train_data)
        history = deep.fit_generator(GeneratorRandomPatchs(train_x, train_y, batch_size, train_nums, feature_names),
                                     validation_data=(val_x, val_y),
                                     steps_per_epoch=train_nums // batch_size,
                                     epochs=3000,
                                     verbose=2,
                                     shuffle=True,
                                     # callbacks=[earlystop_callback]
                                     )
    end_time = time.time()
    print(' training done', time.strftime("%H:%M:%S", time.localtime(time.time())))
    print((' training time: {:.0f}m {:.0f}s'.format((end_time - start_time) // 60, (end_time - start_time) % 60)))

    # save the model
    save_model(deep, save_path)
    print(' model saved', time.strftime("%H:%M:%S", time.localtime(time.time())))

    # training visualization
    visualization(history, saveflag=True, showflag=False,
                  path1=loss_plt_path.format('loss_auc.jpg'),
                  path2=loss_plt_path.format('precision_recall.jpg'))

    # test-set evaluation
    scores = deep.evaluate(test_x, test_y, verbose=0)
    print(' %s: %.4f' % (deep.metrics_names[0], scores[0]))
    print(' %s: %.4f' % (deep.metrics_names[1], scores[1]))
    print(' %s: %.4f' % (deep.metrics_names[2], scores[2]))
    print(' %s: %.4f' % (deep.metrics_names[3], scores[3]))
    print(' %s: %.4f' % ('F1', (2 * scores[2] * scores[3]) / (scores[2] + scores[3])))
    print(' test-set evaluation done', time.strftime("%H:%M:%S", time.localtime(time.time())))

    # full evaluation
    full_evaluate2()
    end_time_end = time.time()
    print(('DeepFM total training time: {:.0f}m {:.0f}s'.format((end_time_end - start_time_start) // 60,
                                                                (end_time_end - start_time_start) % 60)))
    print(('{:.0f} hours'.format((end_time_end - start_time_start) // 60 / 60)))
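One note on the custom_objects import, which the script never uses: a deepctr DeepFM is assembled from the library's custom Keras layers, so reloading the saved model needs those layer classes registered at load time. A minimal sketch of the loading pattern from the deepctr documentation (reusing save_path and test_x from the script above):

from tensorflow.python.keras.models import load_model
from deepctr.layers import custom_objects

# pass deepctr's custom layer registry so Keras can deserialize FM, DNN, etc.
model = load_model(save_path, custom_objects)
y_prob = model.predict(test_x, batch_size=256)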
