1. What is Alink?

Alink is a new-generation machine learning algorithm platform that the PAI team of Alibaba's Computing Platform Division has been building on top of the real-time compute engine Flink since 2017. It provides a rich library of algorithm components and a convenient operating framework, so developers can assemble the full model-development workflow (data processing, feature engineering, model training, and model prediction) in one step.

Thanks to Flink's strength in unifying batch and stream processing, Alink can offer consistent operations for both batch and streaming jobs. In practice, the limitations of Flink's existing machine learning library, FlinkML, became apparent (it supports only a dozen or so algorithms, and its data structures are not general enough), but the excellent performance of the underlying Flink engine was worth keeping, so a new machine learning algorithm library was designed and built on top of Flink. It went live inside Alibaba Group in 2018 and has since been continuously improved and hardened in Alibaba's complex internal business scenarios.
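As a small illustration of that batch/stream symmetry, the same CSV schema can be read either with a batch source or with a stream source using mirrored operators. This is only a sketch: both operators and their setters appear in the cases below, but the schema and file path here are placeholders, not values from this article.

from pyalink.alink import *

resetEnv()
useLocalEnv(1)

# The same schema string drives both the batch and the stream reader (placeholder schema/path).
schema = "id bigint, text string"

batch_source = CsvSourceBatchOp() \
    .setFilePath("your_data.csv") \
    .setSchemaStr(schema)

stream_source = CsvSourceStreamOp() \
    .setFilePath("your_data.csv") \
    .setSchemaStr(schema)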

2. The relationship between FlinkML and Alink

FlinkML is the machine learning library that already exists in the Flink community; it has been around for a long time and is updated rather slowly. Alink was written from scratch on top of the new generation of Flink and shares no code with FlinkML. Alink was developed by the PAI team of Alibaba's Computing Platform Division, was used inside Alibaba after it was built, and has now been officially open-sourced.

3. Alink machine learning examples

Case 1:

Environment setup

from pyalink.alink import *
resetEnv()
useLocalEnv(1, config=None)

Use one of the following commands to start using PyAlink:

  • useLocalEnv(parallelism, flinkHome=None, config=None)
  • useRemoteEnv(host, port, parallelism, flinkHome=None, localIp="localhost", config=None)
    Call resetEnv() to reset environment and switch to another.

JVM listening on 127.0.0.1:50568
MLEnv(benv=JavaObject id=o2, btenv=JavaObject id=o5, senv=JavaObject id=o3, stenv=JavaObject id=o6)
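The startup banner above also lists useRemoteEnv, which attaches PyAlink to an existing Flink cluster instead of a local one. A minimal sketch following that printed signature; the host, port, and parallelism are hypothetical placeholders, not values used in this article.

from pyalink.alink import *

resetEnv()
# Connect to a remote Flink JobManager (hypothetical address and parallelism).
useRemoteEnv("flink-jm.example.com", 8081, 4, localIp="localhost")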

Data preparation

## read data
URL = "https://alink-release.oss-cn-beijing.aliyuncs.com/data-files/review_rating_train.csv"
SCHEMA_STR = "review_id bigint, rating5 bigint, rating3 bigint, review_context string"
LABEL_COL = "rating5"
TEXT_COL = "review_context"
VECTOR_COL = "vec"
PRED_COL = "pred"
PRED_DETAIL_COL = "predDetail"
source = CsvSourceBatchOp() \
    .setFilePath(URL) \
    .setSchemaStr(SCHEMA_STR) \
    .setFieldDelimiter("_alink_") \
    .setQuoteChar(None)

## Split data for train and test
trainData = SplitBatchOp().setFraction(0.9).linkFrom(source)
testData = trainData.getSideOutput(0)
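If you want to eyeball the split before training, a few rows can be pulled into pandas. This is a minimal sketch that reuses the firstN and collectToDataframe calls shown again in Case 3; the row count is arbitrary.

## Optional sanity check: peek at a handful of training rows
trainData.firstN(5).collectToDataframe()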

Feature engineering

pipeline = (
    Pipeline()
    .add(Segment().setSelectedCol(TEXT_COL))
    .add(StopWordsRemover().setSelectedCol(TEXT_COL))
    .add(DocHashCountVectorizer()
        .setFeatureType("WORD_COUNT")
        .setSelectedCol(TEXT_COL)
        .setOutputCol(VECTOR_COL))
)
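Before the classifier is attached, the feature stages can be fitted and applied on their own to inspect the generated vector column. A minimal sketch under the same column names; the fit/transform usage mirrors the feature pipeline in Case 2, and the row count is arbitrary.

## Optional: materialize a few featurized rows to check the vec column
featurized = pipeline.fit(trainData).transform(trainData)
featurized.firstN(3).collectToDataframe()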

Model training

## naiveBayes model
naiveBayes = (
    NaiveBayesTextClassifier()
    .setVectorCol(VECTOR_COL)
    .setLabelCol(LABEL_COL)
    .setPredictionCol(PRED_COL)
    .setPredictionDetailCol(PRED_DETAIL_COL)
)
## Note: names assigned inside %timeit do not persist in the notebook namespace,
## so run `model = pipeline.add(naiveBayes).fit(trainData)` once without %timeit before the prediction step.
%timeit model = pipeline.add(naiveBayes).fit(trainData)

473 ms ± 160 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Prediction and evaluation

## evaluation
predict = model.transform(testData)
metrics = (
    EvalMultiClassBatchOp()
    .setLabelCol(LABEL_COL)
    .setPredictionDetailCol(PRED_DETAIL_COL)
    .linkFrom(predict)
    .collectMetrics()
)

Print the evaluation results

print("ConfusionMatrix:", metrics.getConfusionMatrix())
print("LabelArray:", metrics.getLabelArray())
print("LogLoss:", metrics.getLogLoss())
print("Accuracy:", metrics.getAccuracy())
print("Kappa:", metrics.getKappa())
print("MacroF1:", metrics.getMacroF1())
print("Label 1 Accuracy:", metrics.getAccuracy("1"))
print("Label 1 Kappa:", metrics.getKappa("1"))
print("Label 1 Precision:", metrics.getPrecision("1"))

ConfusionMatrix: [[4987, 327, 229, 204, 292], [28, 1223, 164, 147, 108], [1, 1, 269, 10, 11], [0, 0, 0, 10, 0], [0, 2, 1, 2, 83]]
LabelArray: ['5', '4', '3', '2', '1']
LogLoss: 2.330945631084851
Accuracy: 0.8114582047166317
Kappa: 0.6190950197563011
MacroF1: 0.5123859853163818
Label 1 Accuracy: 0.9486356340288925
Label 1 Kappa: 0.27179135595030096
Label 1 Precision: 0.9431818181818182
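If the trained pipeline is to be reused later, it can be written out and reloaded the same way the feature pipeline is persisted in Case 2 below. A minimal sketch; the file name is a placeholder.

## Persist and reload the fitted pipeline model (file name is hypothetical)
PIPELINE_MODEL_FILE = "review_rating_pipeline_model.csv"
model.save(PIPELINE_MODEL_FILE)
BatchOperator.execute()
reloaded_model = PipelineModel.load(PIPELINE_MODEL_FILE)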

Case 2:

Environment setup

# set env
from pyalink.alink import *
import sys, os
resetEnv()
useLocalEnv(2)

Use one of the following commands to start using PyAlink:

  • useLocalEnv(parallelism, flinkHome=None, config=None)
  • useRemoteEnv(host, port, parallelism, flinkHome=None, localIp="localhost", config=None)
    Call resetEnv() to reset environment and switch to another.

JVM listening on 127.0.0.1:51134
JavaObject id=o6

Data preparation

# schema of train data
schemaStr = "id string, click string, dt string, C1 string, banner_pos int, site_id string, " \
    "site_domain string, site_category string, app_id string, app_domain string, " \
    "app_category string, device_id string, device_ip string, device_model string, " \
    "device_type string, device_conn_type string, C14 int, C15 int, C16 int, C17 int, " \
    "C18 int, C19 int, C20 int, C21 int"

# prepare batch train data
batchTrainDataFn = "http://alink-release.oss-cn-beijing.aliyuncs.com/data-files/avazu-small.csv"
trainBatchData = CsvSourceBatchOp() \
    .setFilePath(batchTrainDataFn) \
    .setSchemaStr(schemaStr) \
    .setIgnoreFirstLine(True)

# feature fit
labelColName = "click"
vecColName = "vec"
numHashFeatures = 30000

selectedColNames = ["C1", "banner_pos", "site_category", "app_domain", "app_category",
                    "device_type", "device_conn_type", "C14", "C15", "C16", "C17",
                    "C18", "C19", "C20", "C21", "site_id", "site_domain",
                    "device_id", "device_model"]
categoryColNames = ["C1", "banner_pos", "site_category", "app_domain", "app_category",
                    "device_type", "device_conn_type", "site_id", "site_domain",
                    "device_id", "device_model"]
numericalColNames = ["C14", "C15", "C16", "C17", "C18", "C19", "C20", "C21"]

# prepare stream train data
wholeDataFile = "http://alink-release.oss-cn-beijing.aliyuncs.com/data-files/avazu-ctr-train-8M.csv"
data = CsvSourceStreamOp() \
    .setFilePath(wholeDataFile) \
    .setSchemaStr(schemaStr) \
    .setIgnoreFirstLine(True)

# split stream to train and eval data
spliter = SplitStreamOp().setFraction(0.5).linkFrom(data)
train_stream_data = spliter
test_stream_data = spliter.getSideOutput(0)
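Before wiring the stream into training, it can help to watch a few incoming records. A minimal sketch reusing the print/execute pattern from the prediction step further below; the key name and limits are arbitrary, and executing it launches a streaming job over the full Avazu file that you would stop before moving on.

# Optional: watch a small sample of the training stream
train_stream_data.print(key="trainSample", refreshInterval=30, maxLimit=5)
StreamOperator.execute()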

Five steps of online learning

  • Step 1: feature engineering
  • Step 2: batch model training
  • Step 3: online model training (FTRL)
  • Step 4: online prediction
  • Step 5: online evaluation

Feature engineering

# set up feature engineering pipeline
feature_pipeline = (
    Pipeline()
    .add(StandardScaler()
        .setSelectedCols(numericalColNames))
    .add(FeatureHasher()
        .setSelectedCols(selectedColNames)
        .setCategoricalCols(categoryColNames)
        .setOutputCol(vecColName)
        .setNumFeatures(numHashFeatures))
)

# fit and save feature pipeline model
FEATURE_PIPELINE_MODEL_FILE = os.path.join(os.getcwd(), "feature_pipe_model.csv")
feature_pipeline.fit(trainBatchData).save(FEATURE_PIPELINE_MODEL_FILE)
BatchOperator.execute()

# load pipeline model
feature_pipelineModel = PipelineModel.load(FEATURE_PIPELINE_MODEL_FILE)

Batch model training

# train initial batch model
lr = LogisticRegressionTrainBatchOp()
%timeit initModel = lr.setVectorCol(vecColName).setLabelCol(labelColName).setWithIntercept(True).setMaxIter(10).linkFrom(feature_pipelineModel.transform(trainBatchData))
59.6 ms ± 14.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Online model training (FTRL)

# ftrl train
model = FtrlTrainStreamOp(initModel) \
    .setVectorCol(vecColName) \
    .setLabelCol(labelColName) \
    .setWithIntercept(True) \
    .setAlpha(0.1) \
    .setBeta(0.1) \
    .setL1(0.01) \
    .setL2(0.01) \
    .setTimeInterval(10) \
    .setVectorSize(numHashFeatures) \
    .linkFrom(feature_pipelineModel.transform(train_stream_data))

19.1 s ± 1.9 s per loop (mean ± std. dev. of 7 runs, 1 loop each)

Online prediction

# ftrl predict
predResult = FtrlPredictStreamOp(initModel) \
    .setVectorCol(vecColName) \
    .setPredictionCol("pred") \
    .setReservedCols([labelColName]) \
    .setPredictionDetailCol("details") \
    .linkFrom(model, feature_pipelineModel.transform(test_stream_data))

predResult.print(key="predResult", refreshInterval=30, maxLimit=20)
'DataStream predResult: (Updated on 2019-12-05 15:03:33)'
click pred details
0 0 0 {"0":"0.9046159047711626","1":"0.0953840952288…
1 1 0 {"0":"0.7301554114492774","1":"0.2698445885507…
2 0 0 {"0":"0.9354702479573089","1":"0.0645297520426…
3 1 0 {"0":"0.7472443769874088","1":"0.2527556230125…
4 0 0 {"0":"0.7313933609276811","1":"0.2686066390723…
5 0 0 {"0":"0.7579078017993002","1":"0.2420921982006…
6 0 0 {"0":"0.9658883764493819","1":"0.0341116235506…
7 0 0 {"0":"0.8916428187684737","1":"0.1083571812315…
8 0 0 {"0":"0.964470362868512","1":"0.03552963713148…
9 0 0 {"0":"0.7879843998010425","1":"0.2120156001989…
10 0 0 {"0":"0.7701207324521978","1":"0.2298792675478…
11 0 0 {"0":"0.8816330561252186","1":"0.1183669438747…
12 0 0 {"0":"0.8671197714269967","1":"0.1328802285730…
13 0 0 {"0":"0.9355228418514457","1":"0.0644771581485…
14 0 0 {"0":"0.9098863130943347","1":"0.0901136869056…
15 0 0 {"0":"0.7917622336863489","1":"0.2082377663136…
16 0 0 {"0":"0.8377318499121809","1":"0.1622681500878…
17 0 0 {"0":"0.9647915025127575","1":"0.0352084974872…
18 0 0 {"0":"0.7313985049080408","1":"0.2686014950919…
19 1 0 {"0":"0.8541619467983884","1":"0.1458380532016…

Online evaluation

# ftrl eval
EvalBinaryClassStreamOp() \
    .setLabelCol(labelColName) \
    .setPredictionCol("pred") \
    .setPredictionDetailCol("details") \
    .setTimeInterval(10) \
    .linkFrom(predResult) \
    .link(JsonValueStreamOp()
        .setSelectedCol("Data")
        .setReservedCols(["Statistics"])
        .setOutputCols(["Accuracy", "AUC", "ConfusionMatrix"])
        .setJsonPath(["$.Accuracy", "$.AUC", "$.ConfusionMatrix"])) \
    .print(key="evaluation", refreshInterval=30, maxLimit=20)

StreamOperator.execute()
'DataStream evaluation: (Updated on 2019-12-05 15:03:31)'
Statistics Accuracy AUC ConfusionMatrix
0 all 0.8286096670786908 0.7182165258211499 [[5535,5007],[112297,561587]]
1 window 0.8464953470502861 0.7283501551891348 [[485,456],[8534,49090]]
2 all 0.830019475336848 0.7191075542108774 [[6020,5463],[120831,610677]]
3 window 0.8455799884444143 0.7227709897015594 [[512,416],[8671,49247]]
4 all 0.8311614455307001 0.719465721678977 [[6532,5879],[129502,659924]]
5 window 0.8444954128440367 0.7259189182276968 [[545,455],[8698,49162]]
6 all 0.8320733080282608 0.7199603254520217 [[7077,6334],[138200,709086]]

Case 3:

Environment setup

from pyalink.alink import *
resetEnv()
useLocalEnv(1, config=None)
Use one of the following commands to start using PyAlink:

  • useLocalEnv(parallelism, flinkHome=None, config=None)
  • useRemoteEnv(host, port, parallelism, flinkHome=None, localIp="localhost", config=None)
    Call resetEnv() to reset environment and switch to another.

JVM listening on 127.0.0.1:57785
JavaObject id=o6

Data preparation

## prepare data
import numpy as np
import pandas as pd

data = np.array([
    [0, 0.0, 0.0, 0.0],
    [1, 0.1, 0.1, 0.1],
    [2, 0.2, 0.2, 0.2],
    [3, 9, 9, 9],
    [4, 9.1, 9.1, 9.1],
    [5, 9.2, 9.2, 9.2]
])
df = pd.DataFrame({"id": data[:, 0], "f0": data[:, 1], "f1": data[:, 2], "f2": data[:, 3]})
inOp = BatchOperator.fromDataframe(df, schemaStr='id double, f0 double, f1 double, f2 double')
FEATURE_COLS = ["f0", "f1", "f2"]
VECTOR_COL = "vec"
PRED_COL = "pred"

Data preprocessing

vectorAssembler = (
    VectorAssembler()
    .setSelectedCols(FEATURE_COLS)
    .setOutputCol(VECTOR_COL)
)

Clustering training

kMeans = (
    KMeans()
    .setVectorCol(VECTOR_COL)
    .setK(2)
    .setPredictionCol(PRED_COL)
)

Prediction

pipeline = Pipeline().add(vectorAssembler).add(kMeans)
%timeit  pipeline.fit(inOp).transform(inOp).firstN(9).collectToDataframe()
   id   f0   f1   f2          vec  pred
0  0.0  0.0  0.0  0.0  0.0 0.0 0.0     1
1  1.0  0.1  0.1  0.1  0.1 0.1 0.1     1
2  2.0  0.2  0.2  0.2  0.2 0.2 0.2     1
3  3.0  9.0  9.0  9.0  9.0 9.0 9.0     0
4  4.0  9.1  9.1  9.1  9.1 9.1 9.1     0
5  5.0  9.2  9.2  9.2  9.2 9.2 9.2     0

301 ms ± 25.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Case 4:

Environment setup

from pyalink.alink import *
resetEnv()
useLocalEnv(1, config=None)

Use one of the following commands to start using PyAlink:

  • useLocalEnv(parallelism, flinkHome=None, config=None)
  • useRemoteEnv(host, port, parallelism, flinkHome=None, localIp="localhost", config=None)
    Call resetEnv() to reset environment and switch to another.

JVM listening on 127.0.0.1:57514
JavaObject id=o6

Handwritten digit recognition (MNIST)

  • Train a model with Softmax
  • Use the model for prediction
  • Evaluate the predictions

Data preparation

URL = "https://alink-release.oss-cn-beijing.aliyuncs.com/data-files/mnist_dense.csv"
SCHEMA_STR = "label bigint,bitmap string"
mnist_data = CsvSourceBatchOp() \
    .setFilePath(URL) \
    .setSchemaStr(SCHEMA_STR) \
    .setFieldDelimiter(";")
spliter = SplitBatchOp().setFraction(0.8)
train = spliter.linkFrom(mnist_data)
test = spliter.getSideOutput(0)

Training + prediction + evaluation

softmax = Softmax() \
    .setVectorCol("bitmap") \
    .setLabelCol("label") \
    .setPredictionCol("pred") \
    .setPredictionDetailCol("detail") \
    .setEpsilon(0.0001) \
    .setMaxIter(200)
%timeit model = softmax.fit(train)
res = model.transform(test)
evaluation = EvalMultiClassBatchOp().setLabelCol("label").setPredictionCol("pred")
metrics = evaluation.linkFrom(res).collectMetrics()

20.7 ms ± 3.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Print the results

print("ConfusionMatrix:", metrics.getConfusionMatrix())
print("LabelArray:", metrics.getLabelArray())
print("LogLoss:", metrics.getLogLoss())
print("TotalSamples:", metrics.getTotalSamples())
print("ActualLabelProportion:", metrics.getActualLabelProportion())
print("ActualLabelFrequency:", metrics.getActualLabelFrequency())
print("Accuracy:", metrics.getAccuracy())
print("Kappa:", metrics.getKappa())

ConfusionMatrix: [[170, 3, 5, 0, 1, 7, 2, 2, 1, 0], [2, 154, 2, 1, 14, 3, 6, 9, 0, 2], [9, 3, 174, 0, 3, 3, 3, 3, 0, 0], [0, 0, 1, 162, 5, 4, 2, 6, 0, 7], [5, 9, 2, 5, 160, 1, 8, 1, 0, 0], [11, 4, 2, 0, 4, 187, 1, 2, 1, 1], [2, 5, 2, 2, 6, 1, 170, 4, 1, 0], [0, 2, 8, 4, 2, 4, 8, 180, 6, 1], [1, 3, 3, 1, 3, 1, 3, 3, 209, 0], [2, 2, 2, 0, 3, 1, 1, 2, 0, 179]]
LabelArray: ['9', '8', '7', '6', '5', '4', '3', '2', '1', '0']
LogLoss: None
TotalSamples: 2000
ActualLabelProportion: [0.101, 0.0925, 0.1005, 0.0875, 0.1005, 0.106, 0.102, 0.106, 0.109, 0.095]
ActualLabelFrequency: [202, 185, 201, 175, 201, 212, 204, 212, 218, 190]
Accuracy: 0.8725
Kappa: 0.858283141946106

Case 5:

Environment setup

# set env
from pyalink.alink import *
resetEnv()
useLocalEnv(1, config=None)
Use one of the following commands to start using PyAlink:

  • useLocalEnv(parallelism, flinkHome=None, config=None)
  • useRemoteEnv(host, port, parallelism, flinkHome=None, localIp="localhost", config=None)
    Call resetEnv() to reset environment and switch to another.

JVM listening on 127.0.0.1:50568
MLEnv(benv=JavaObject id=o2, btenv=JavaObject id=o5, senv=JavaObject id=o3, stenv=JavaObject id=o6)

Data preparation

## read data
URL = "https://alink-release.oss-cn-beijing.aliyuncs.com/data-files/review_rating_train.csv"
SCHEMA_STR = "review_id bigint, rating5 bigint, rating3 bigint, review_context string"
LABEL_COL = "rating5"
TEXT_COL = "review_context"
VECTOR_COL = "vec"
PRED_COL = "pred"
PRED_DETAIL_COL = "predDetail"
source = CsvSourceBatchOp() \
    .setFilePath(URL) \
    .setSchemaStr(SCHEMA_STR) \
    .setFieldDelimiter("_alink_") \
    .setQuoteChar(None)

## Split data for train and test
trainData = SplitBatchOp().setFraction(0.9).linkFrom(source)
testData = trainData.getSideOutput(0)

Feature engineering

pipeline = (
    Pipeline()
    .add(Segment().setSelectedCol(TEXT_COL))
    .add(StopWordsRemover().setSelectedCol(TEXT_COL))
    .add(DocHashCountVectorizer()
        .setFeatureType("WORD_COUNT")
        .setSelectedCol(TEXT_COL)
        .setOutputCol(VECTOR_COL))
)

Model training

## naiveBayes model
naiveBayes = (
    NaiveBayesTextClassifier()
    .setVectorCol(VECTOR_COL)
    .setLabelCol(LABEL_COL)
    .setPredictionCol(PRED_COL)
    .setPredictionDetailCol(PRED_DETAIL_COL)
)
%timeit model = pipeline.add(naiveBayes).fit(trainData)

3.39 s ± 152 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Prediction and evaluation

## evaluation
predict = model.transform(testData)
metrics = (
    EvalMultiClassBatchOp()
    .setLabelCol(LABEL_COL)
    .setPredictionDetailCol(PRED_DETAIL_COL)
    .linkFrom(predict)
    .collectMetrics()
)

Print the evaluation results

print("ConfusionMatrix:", metrics.getConfusionMatrix())
print("LabelArray:", metrics.getLabelArray())
print("LogLoss:", metrics.getLogLoss())
print("Accuracy:", metrics.getAccuracy())
print("Kappa:", metrics.getKappa())
print("MacroF1:", metrics.getMacroF1())
print("Label 1 Accuracy:", metrics.getAccuracy("1"))
print("Label 1 Kappa:", metrics.getKappa("1"))
print("Label 1 Precision:", metrics.getPrecision("1"))

ConfusionMatrix: [[4987, 327, 229, 204, 292], [28, 1223, 164, 147, 108], [1, 1, 269, 10, 11], [0, 0, 0, 10, 0], [0, 2, 1, 2, 83]]
LabelArray: ['5', '4', '3', '2', '1']
LogLoss: 2.330945631084851
Accuracy: 0.8114582047166317
Kappa: 0.6190950197563011
MacroF1: 0.5123859853163818
Label 1 Accuracy: 0.9486356340288925
Label 1 Kappa: 0.27179135595030096
Label 1 Precision: 0.9431818181818182

4. Summary

This document has introduced where Alink comes from, its relationship with Flink, and five basic examples of using it. Running the example code in JupyterLab directly produces the training and prediction results. The examples use Alink's Python API, and the machine learning workflow resembles Spark ML; roughly speaking, Alink can be viewed as a machine learning framework comparable to Spark ML that additionally supports online stream processing. We will continue research in this direction.
