1. Overview

Repost: Flink 1.12.2 source code walkthrough: yarn-per-job mode analysis [Part 3]

Previous chapter: [flink] Flink 1.12.2 source code walkthrough: yarn-per-job mode analysis — the YARN submission process

Overall flow diagram

2. Code Analysis


In the previous chapter we saw yarnClient.submitApplication(appContext); submit the job to the YARN cluster.

The command that launches the program inside the ApplicationMaster looks like this:

$JAVA_HOME/bin/java
  -Xmx1073741824
  -Xms1073741824
  -XX:MaxMetaspaceSize=268435456
  -Dlog.file="<LOG_DIR>/jobmanager.log"
  -Dlog4j.configuration=file:log4j.properties
  -Dlog4j.configurationFile=file:log4j.properties
  org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint
  -D jobmanager.memory.off-heap.size=134217728b
  -D jobmanager.memory.jvm-overhead.min=201326592b
  -D jobmanager.memory.jvm-metaspace.size=268435456b
  -D jobmanager.memory.heap.size=1073741824b
  -D jobmanager.memory.jvm-overhead.max=201326592b
  1> <LOG_DIR>/jobmanager.out 2> <LOG_DIR>/jobmanager.err
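Before moving on, it is worth seeing how the byte values in these flags fit together. The sketch below is not Flink code; it simply adds up the -D values from the command above, which sum to 1600 MiB — this likely corresponds to a configured jobmanager.memory.process.size of 1600m (an assumption; verify against your own flink-conf.yaml).

// Minimal sketch (not Flink code): sum the JobManager memory flags from the command above.
public class JobManagerMemoryCheck {
    public static void main(String[] args) {
        long heap      = 1_073_741_824L; // jobmanager.memory.heap.size          -> 1024 MiB (-Xmx/-Xms)
        long offHeap   =   134_217_728L; // jobmanager.memory.off-heap.size      ->  128 MiB
        long metaspace =   268_435_456L; // jobmanager.memory.jvm-metaspace.size ->  256 MiB (-XX:MaxMetaspaceSize)
        long overhead  =   201_326_592L; // jobmanager.memory.jvm-overhead.{min,max} -> 192 MiB

        long totalBytes = heap + offHeap + metaspace + overhead;
        // prints: total process memory = 1600 MiB
        System.out.println("total process memory = " + totalBytes / (1024 * 1024) + " MiB");
    }
}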

The entry class is org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint.

So we start reading from this class.

2.1. YarnJobClusterEntrypoint#main

The entry point of the ApplicationMaster is the YarnJobClusterEntrypoint#main method. It does three things:

  1. Load the configuration
  2. Build the YarnJobClusterEntrypoint
  3. Start it via ClusterEntrypoint.runClusterEntrypoint(yarnJobClusterEntrypoint);
// ------------------------------------------------------------------------
//  The executable entry point for the Yarn Application Master Process
//  for a single Flink job.
// ------------------------------------------------------------------------
public static void main(String[] args) {
    // startup checks and logging
    EnvironmentInformation.logEnvironmentInfo(
            LOG, YarnJobClusterEntrypoint.class.getSimpleName(), args);
    SignalHandler.register(LOG);
    JvmShutdownSafeguard.installAsShutdownHook(LOG);

    Map<String, String> env = System.getenv();

    final String workingDirectory = env.get(ApplicationConstants.Environment.PWD.key());
    Preconditions.checkArgument(
            workingDirectory != null,
            "Working directory variable (%s) not set",
            ApplicationConstants.Environment.PWD.key());

    try {
        YarnEntrypointUtils.logYarnEnvironmentInformation(env, LOG);
    } catch (IOException e) {
        LOG.warn("Could not log YARN environment information.", e);
    }

    final Configuration dynamicParameters =
            ClusterEntrypointUtils.parseParametersOrExit(
                    args,
                    new DynamicParametersConfigurationParserFactory(),
                    YarnJobClusterEntrypoint.class);

    // 1. Build the configuration
    final Configuration configuration =
            YarnEntrypointUtils.loadConfiguration(workingDirectory, dynamicParameters, env);

    // 2. Build the YarnJobClusterEntrypoint
    YarnJobClusterEntrypoint yarnJobClusterEntrypoint =
            new YarnJobClusterEntrypoint(configuration);

    // 3. Start it
    ClusterEntrypoint.runClusterEntrypoint(yarnJobClusterEntrypoint);
}

2.2. ClusterEntrypoint#runCluster

ClusterEntrypoint.runClusterEntrypoint(yarnJobClusterEntrypoint); goes through several intermediate calls and finally reaches ClusterEntrypoint#runCluster, which starts the cluster.

Inside this method, the factory returned by createDispatcherResourceManagerComponentFactory(configuration) is used: its create(...) call builds the components that live inside the JobManager process — the Dispatcher, the ResourceManager and the JobMaster.

// Execution path
ClusterEntrypoint#runClusterEntrypoint
  --> ClusterEntrypoint#startCluster()
  --> ClusterEntrypoint#runCluster(configuration, pluginManager)
private void runCluster(Configuration configuration, PluginManager pluginManager)
        throws Exception {
    synchronized (lock) {

        // Initialize plugins...
        // Build the various services: heartbeat, RPC, ...
        initializeServices(configuration, pluginManager);

        // write host and port information into the configuration
        configuration.setString(JobManagerOptions.ADDRESS, commonRpcService.getAddress());
        configuration.setInteger(JobManagerOptions.PORT, commonRpcService.getPort());

        // Create the components inside the JobManager: Dispatcher / ResourceManager / JobMaster
        // Build the DispatcherResourceManagerComponentFactory
        final DispatcherResourceManagerComponentFactory
                dispatcherResourceManagerComponentFactory =
                        createDispatcherResourceManagerComponentFactory(configuration);

        // Build the cluster component DispatcherResourceManagerComponent
        clusterComponent =
                dispatcherResourceManagerComponentFactory.create(
                        configuration,
                        ioExecutor,
                        commonRpcService,
                        haServices,
                        blobServer,
                        heartbeatServices,
                        metricRegistry,
                        archivedExecutionGraphStore,
                        new RpcMetricQueryServiceRetriever(
                                metricRegistry.getMetricQueryServiceRpcService()),
                        this);

        // Shutdown handling...
        clusterComponent
                .getShutDownFuture()
                .whenComplete(
                        (ApplicationStatus applicationStatus, Throwable throwable) -> {
                            if (throwable != null) {
                                shutDownAsync(
                                        ApplicationStatus.UNKNOWN,
                                        ExceptionUtils.stringifyException(throwable),
                                        false);
                            } else {
                                // This is the general shutdown path. If a separate more
                                // specific shutdown was already triggered, this will do nothing
                                shutDownAsync(applicationStatus, null, true);
                            }
                        });
    }
}

2.3. DefaultDispatcherResourceManagerComponentFactory#create

The core services are all built and started here, including the ResourceManager, the Dispatcher, the JobManager, as well as the WEB UI and History Server related services.

@Override
public DispatcherResourceManagerComponent create(
        Configuration configuration,
        Executor ioExecutor,
        RpcService rpcService,
        HighAvailabilityServices highAvailabilityServices,
        BlobServer blobServer,
        HeartbeatServices heartbeatServices,
        MetricRegistry metricRegistry,
        ArchivedExecutionGraphStore archivedExecutionGraphStore,
        MetricQueryServiceRetriever metricQueryServiceRetriever,
        FatalErrorHandler fatalErrorHandler)
        throws Exception {

    LeaderRetrievalService dispatcherLeaderRetrievalService = null;
    LeaderRetrievalService resourceManagerRetrievalService = null;
    WebMonitorEndpoint<?> webMonitorEndpoint = null;
    ResourceManager<?> resourceManager = null;
    DispatcherRunner dispatcherRunner = null;

    try {
        // Dispatcher high availability
        dispatcherLeaderRetrievalService =
                highAvailabilityServices.getDispatcherLeaderRetriever();

        // ResourceManager high availability
        resourceManagerRetrievalService =
                highAvailabilityServices.getResourceManagerLeaderRetriever();

        // Dispatcher gateway retriever
        final LeaderGatewayRetriever<DispatcherGateway> dispatcherGatewayRetriever =
                new RpcGatewayRetriever<>(
                        rpcService,
                        DispatcherGateway.class,
                        DispatcherId::fromUuid,
                        new ExponentialBackoffRetryStrategy(
                                12, Duration.ofMillis(10), Duration.ofMillis(50)));

        // ResourceManager gateway retriever
        final LeaderGatewayRetriever<ResourceManagerGateway> resourceManagerGatewayRetriever =
                new RpcGatewayRetriever<>(
                        rpcService,
                        ResourceManagerGateway.class,
                        ResourceManagerId::fromUuid,
                        new ExponentialBackoffRetryStrategy(
                                12, Duration.ofMillis(10), Duration.ofMillis(50)));

        // Build the executor
        final ScheduledExecutorService executor =
                WebMonitorEndpoint.createExecutorService(
                        configuration.getInteger(RestOptions.SERVER_NUM_THREADS),
                        configuration.getInteger(RestOptions.SERVER_THREAD_PRIORITY),
                        "DispatcherRestEndpoint");

        // 10000L
        final long updateInterval =
                configuration.getLong(MetricOptions.METRIC_FETCHER_UPDATE_INTERVAL);
        final MetricFetcher metricFetcher =
                updateInterval == 0
                        ? VoidMetricFetcher.INSTANCE
                        : MetricFetcherImpl.fromConfiguration(
                                configuration,
                                metricQueryServiceRetriever,
                                dispatcherGatewayRetriever,
                                executor);

        // WEB UI related services
        webMonitorEndpoint =
                restEndpointFactory.createRestEndpoint(
                        configuration,
                        dispatcherGatewayRetriever,
                        resourceManagerGatewayRetriever,
                        blobServer,
                        executor,
                        metricFetcher,
                        highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
                        fatalErrorHandler);

        log.debug("Starting Dispatcher REST endpoint.");
        webMonitorEndpoint.start();

        // Get the host name
        final String hostname = RpcUtils.getHostname(rpcService);

        // Build the resourceManager
        resourceManager =
                resourceManagerFactory.createResourceManager(
                        configuration,
                        ResourceID.generate(),
                        rpcService,
                        highAvailabilityServices,
                        heartbeatServices,
                        fatalErrorHandler,
                        new ClusterInformation(hostname, blobServer.getPort()),
                        webMonitorEndpoint.getRestBaseUrl(),
                        metricRegistry,
                        hostname,
                        ioExecutor);

        // History server related
        final HistoryServerArchivist historyServerArchivist =
                HistoryServerArchivist.createHistoryServerArchivist(
                        configuration, webMonitorEndpoint, ioExecutor);

        final PartialDispatcherServices partialDispatcherServices =
                new PartialDispatcherServices(
                        configuration,
                        highAvailabilityServices,
                        resourceManagerGatewayRetriever,
                        blobServer,
                        heartbeatServices,
                        () ->
                                MetricUtils.instantiateJobManagerMetricGroup(
                                        metricRegistry, hostname),
                        archivedExecutionGraphStore,
                        fatalErrorHandler,
                        historyServerArchivist,
                        metricRegistry.getMetricQueryServiceGatewayRpcAddress(),
                        ioExecutor);

        // Create / start the Dispatcher: the dispatcher will create and start the JobManager
        log.debug("Starting Dispatcher.");
        dispatcherRunner =
                dispatcherRunnerFactory.createDispatcherRunner(
                        highAvailabilityServices.getDispatcherLeaderElectionService(),
                        fatalErrorHandler,
                        new HaServicesJobGraphStoreFactory(highAvailabilityServices),
                        ioExecutor,
                        rpcService,
                        partialDispatcherServices);

        // Start the ResourceManager
        log.debug("Starting ResourceManager.");
        resourceManager.start();

        resourceManagerRetrievalService.start(resourceManagerGatewayRetriever);
        dispatcherLeaderRetrievalService.start(dispatcherGatewayRetriever);

        return new DispatcherResourceManagerComponent(
                dispatcherRunner,
                DefaultResourceManagerService.createFor(resourceManager),
                dispatcherLeaderRetrievalService,
                resourceManagerRetrievalService,
                webMonitorEndpoint,
                fatalErrorHandler);

    } catch (Exception exception) {
        // clean up all started components
        if (dispatcherLeaderRetrievalService != null) {
            try {
                dispatcherLeaderRetrievalService.stop();
            } catch (Exception e) {
                exception = ExceptionUtils.firstOrSuppressed(e, exception);
            }
        }

        if (resourceManagerRetrievalService != null) {
            try {
                resourceManagerRetrievalService.stop();
            } catch (Exception e) {
                exception = ExceptionUtils.firstOrSuppressed(e, exception);
            }
        }

        final Collection<CompletableFuture<Void>> terminationFutures = new ArrayList<>(3);

        if (webMonitorEndpoint != null) {
            terminationFutures.add(webMonitorEndpoint.closeAsync());
        }

        if (resourceManager != null) {
            terminationFutures.add(resourceManager.closeAsync());
        }

        if (dispatcherRunner != null) {
            terminationFutures.add(dispatcherRunner.closeAsync());
        }

        final FutureUtils.ConjunctFuture<Void> terminationFuture =
                FutureUtils.completeAll(terminationFutures);

        try {
            terminationFuture.get();
        } catch (Exception e) {
            exception = ExceptionUtils.firstOrSuppressed(e, exception);
        }

        throw new FlinkException(
                "Could not create the DispatcherResourceManagerComponent.", exception);
    }
}
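A small aside on the gateway retrievers above: they poll the leader's RPC gateway with an ExponentialBackoffRetryStrategy configured as 12 attempts, a 10 ms initial delay and a 50 ms cap. The stand-alone sketch below only illustrates what such a doubling-with-cap schedule looks like; that the Flink class doubles the delay up to the cap is an assumption about its semantics, and the class name BackoffScheduleSketch is ours, not Flink's.

import java.time.Duration;

// Hypothetical sketch: print a doubling backoff schedule capped at a maximum,
// using the same parameters as the RpcGatewayRetriever above (12 retries, 10 ms initial, 50 ms cap).
public class BackoffScheduleSketch {
    public static void main(String[] args) {
        int maxAttempts = 12;
        Duration delay = Duration.ofMillis(10);
        Duration cap = Duration.ofMillis(50);

        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            System.out.println("retry #" + attempt + " after " + delay.toMillis() + " ms");
            // double the delay, but never exceed the cap
            delay = delay.multipliedBy(2);
            if (delay.compareTo(cap) > 0) {
                delay = cap;
            }
        }
        // prints delays of 10, 20, 40, 50, 50, ... ms
    }
}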

3. The Dispatcher

The Dispatcher has two main responsibilities:

  1. Accepting the user's job
  2. Creating and starting the JobManager

The concrete implementation is a DispatcherRunner, which is created as follows:
dispatcherRunnerFactory.createDispatcherRunner(
        highAvailabilityServices.getDispatcherLeaderElectionService(),
        fatalErrorHandler,
        new HaServicesJobGraphStoreFactory(highAvailabilityServices),
        ioExecutor,
        rpcService,
        partialDispatcherServices);

3.1. Construction

The call enters through DefaultDispatcherResourceManagerComponentFactory#create. The DispatcherRunner is built via DefaultDispatcherRunnerFactory#createDispatcherRunner, which in turn delegates to DefaultDispatcherRunner.create to produce the actual runner.
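To make this wiring easier to picture, here is a deliberately simplified, hypothetical sketch of the "factory builds a runner that is driven by leader election" pattern used here. The names (LeaderElection, SimpleRunnerFactory, SimpleDispatcherRunner) are illustrative only and are not the Flink types; the Flink classes carry much more state.

import java.util.UUID;

// Hypothetical sketch of the construction pattern (not Flink source): a factory creates a
// runner and registers it with a leader election service, so the runner only starts its
// dispatcher leader process once it is granted leadership.
interface LeaderElection {
    void start(LeaderContender contender);
}

interface LeaderContender {
    void grantLeadership(UUID leaderSessionId);
}

class SimpleDispatcherRunner implements LeaderContender {
    @Override
    public void grantLeadership(UUID leaderSessionId) {
        // In Flink this role is played by DefaultDispatcherRunner#startNewDispatcherLeaderProcess
        System.out.println("became leader, starting dispatcher leader process " + leaderSessionId);
    }
}

class SimpleRunnerFactory {
    SimpleDispatcherRunner createDispatcherRunner(LeaderElection leaderElection) {
        SimpleDispatcherRunner runner = new SimpleDispatcherRunner();
        leaderElection.start(runner); // election callbacks now drive the runner's lifecycle
        return runner;
    }
}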

3.2. Startup

Startup also enters through DefaultDispatcherRunnerFactory#createDispatcherRunner and, after several hops, reaches:

DefaultDispatcherRunner#startNewDispatcherLeaderProcess
  --> JobDispatcherLeaderProcess#onStart

private void startNewDispatcherLeaderProcess(UUID leaderSessionID) {
    // Stop the previous dispatcher leader process
    stopDispatcherLeaderProcess();

    // Build a new dispatcher leader process
    dispatcherLeaderProcess = createNewDispatcherLeaderProcess(leaderSessionID);

    final DispatcherLeaderProcess newDispatcherLeaderProcess = dispatcherLeaderProcess;

    // Start it once the previous process has terminated
    FutureUtils.assertNoException(
            previousDispatcherLeaderProcessTerminationFuture.thenRun(
                    newDispatcherLeaderProcess::start));
}

This in turn calls the JobDispatcherLeaderProcess#onStart method:

@Override
protected void onStart() {
    // create
    final DispatcherGatewayService dispatcherService =
            dispatcherGatewayServiceFactory.create(
                    DispatcherId.fromUuid(getLeaderSessionId()),
                    Collections.singleton(jobGraph),
                    ThrowingJobGraphWriter.INSTANCE);

    completeDispatcherSetup(dispatcherService);
}

Finally, DefaultDispatcherGatewayServiceFactory#create creates and starts the Dispatcher:

public AbstractDispatcherLeaderProcess.DispatcherGatewayService create(
        DispatcherId fencingToken,
        Collection<JobGraph> recoveredJobs,
        JobGraphWriter jobGraphWriter) {

    // Declare the Dispatcher
    final Dispatcher dispatcher;
    try {
        // Create the Dispatcher
        dispatcher =
                dispatcherFactory.createDispatcher(
                        rpcService,
                        fencingToken,
                        recoveredJobs,
                        (dispatcherGateway, scheduledExecutor, errorHandler) ->
                                new NoOpDispatcherBootstrap(),
                        PartialDispatcherServicesWithJobGraphStore.from(
                                partialDispatcherServices, jobGraphWriter));
    } catch (Exception e) {
        throw new FlinkRuntimeException("Could not create the Dispatcher rpc endpoint.", e);
    }

    // Start the Dispatcher: Dispatcher#onStart
    //      1. Accept the user job
    //      2. Start the JobMaster
    dispatcher.start();

    return DefaultDispatcherGatewayService.from(dispatcher);
}

4. The JobManager

4.1. Startup

The JobManager is started from the Dispatcher's onStart method. The method that actually runs is Dispatcher#startRecoveredJobs:

private void startRecoveredJobs() {
    // Iterate over the jobs that need to be recovered
    for (JobGraph recoveredJob : recoveredJobs) {
        // Run the recovered job
        runRecoveredJob(recoveredJob);
    }
    recoveredJobs.clear();
}

4.2. Execution

The call chain for execution is:

Dispatcher#runRecoveredJob
  --> Dispatcher#runJob(recoveredJob, ExecutionType.RECOVERY)
  --> Dispatcher#runJob(JobGraph jobGraph, ExecutionType executionType)
private void runJob(JobGraph jobGraph, ExecutionType executionType) {
    Preconditions.checkState(!runningJobs.containsKey(jobGraph.getJobID()));
    long initializationTimestamp = System.currentTimeMillis();

    // Build the JobManagerRunner
    CompletableFuture<JobManagerRunner> jobManagerRunnerFuture =
            createJobManagerRunner(jobGraph, initializationTimestamp);

    DispatcherJob dispatcherJob =
            DispatcherJob.createFor(
                    jobManagerRunnerFuture,
                    jobGraph.getJobID(),
                    jobGraph.getName(),
                    initializationTimestamp);

    // Add the job to the running-jobs map
    runningJobs.put(jobGraph.getJobID(), dispatcherJob);

    final JobID jobId = jobGraph.getJobID();

    final CompletableFuture<CleanupJobState> cleanupJobStateFuture =
            dispatcherJob
                    .getResultFuture()
                    .handleAsync(
                            (dispatcherJobResult, throwable) -> {
                                Preconditions.checkState(
                                        runningJobs.get(jobId) == dispatcherJob,
                                        "The job entry in runningJobs must be bound to the lifetime of the DispatcherJob.");

                                if (dispatcherJobResult != null) {
                                    return handleDispatcherJobResult(
                                            jobId, dispatcherJobResult, executionType);
                                } else {
                                    return dispatcherJobFailed(jobId, throwable);
                                }
                            },
                            getMainThreadExecutor());

    final CompletableFuture<Void> jobTerminationFuture =
            cleanupJobStateFuture
                    .thenApply(cleanupJobState -> removeJob(jobId, cleanupJobState))
                    .thenCompose(Function.identity());

    FutureUtils.assertNoException(jobTerminationFuture);
    registerDispatcherJobTerminationFuture(jobId, jobTerminationFuture);
}

Then the JobManagerRunner is created and started:

CompletableFuture<JobManagerRunner> createJobManagerRunner(
        JobGraph jobGraph, long initializationTimestamp) {
    final RpcService rpcService = getRpcService();
    return CompletableFuture.supplyAsync(
            () -> {
                try {
                    // Create the JobManagerRunner
                    JobManagerRunner runner =
                            jobManagerRunnerFactory.createJobManagerRunner(
                                    jobGraph,
                                    configuration,
                                    rpcService,
                                    highAvailabilityServices,
                                    heartbeatServices,
                                    jobManagerSharedServices,
                                    new DefaultJobManagerJobMetricGroupFactory(
                                            jobManagerMetricGroup),
                                    fatalErrorHandler,
                                    initializationTimestamp);
                    // Start it
                    runner.start();
                    return runner;
                } catch (Exception e) {
                    throw new CompletionException(
                            new JobInitializationException(
                                    jobGraph.getJobID(),
                                    "Could not instantiate JobManager.",
                                    e));
                }
            },
            ioExecutor); // do not use main thread executor. Otherwise, Dispatcher is blocked on
                         // JobManager creation
}
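The comment at the end is the interesting design choice: the potentially slow JobManager construction is pushed onto the I/O executor so the Dispatcher's main RPC thread keeps serving other calls. Below is a small, self-contained sketch of the same pattern in plain Java; the names (OffloadCreationSketch, "runner-ready") are ours and the sleep stands in for the expensive creation step.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of the pattern above: run a slow construction step on a dedicated
// I/O pool so the single "main thread" stays free to handle other requests.
public class OffloadCreationSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService ioExecutor = Executors.newFixedThreadPool(4);

        CompletableFuture<String> runnerFuture =
                CompletableFuture.supplyAsync(
                        () -> {
                            sleep(500); // stands in for the expensive JobManagerRunner creation
                            return "runner-ready";
                        },
                        ioExecutor); // never the main thread executor

        System.out.println("main thread keeps serving requests while creation runs...");
        System.out.println(runnerFuture.get()); // prints "runner-ready"
        ioExecutor.shutdown();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}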

5. The ResourceManager

The ResourceManager is responsible for requesting resources from YARN.
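Under the hood this boils down to the standard Hadoop ApplicationMaster/ResourceManager protocol. As a rough orientation only, the sketch below is plain Hadoop YARN client code (not Flink's implementation, which wraps these clients behind yarnResourceManagerClientFactory); the class name AmRmSketch and the concrete numbers are assumptions for illustration.

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Rough sketch of the plain Hadoop AM/RM protocol that Flink's YarnResourceManagerDriver builds on.
// The callbackHandler is assumed to be supplied by the caller and receives allocated containers.
public class AmRmSketch {
    public static void sketch(AMRMClientAsync.CallbackHandler callbackHandler) throws Exception {
        // heartbeat interval in ms; must stay below YARN's AM expiry interval
        AMRMClientAsync<ContainerRequest> rmClient =
                AMRMClientAsync.createAMRMClientAsync(5000, callbackHandler);
        rmClient.init(new YarnConfiguration());
        rmClient.start();

        // register this ApplicationMaster with YARN's ResourceManager
        rmClient.registerApplicationMaster("localhost", 0, "");

        // ask for one container (e.g. for a TaskManager): 1024 MB, 1 vcore
        Resource capability = Resource.newInstance(1024, 1);
        rmClient.addContainerRequest(
                new ContainerRequest(capability, null, null, Priority.newInstance(1)));

        // allocated containers are delivered asynchronously to the callback handler
    }
}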

5.1. Construction

The entry point of the call is again DefaultDispatcherResourceManagerComponentFactory#create:

// Build the resourceManager
resourceManager =
        resourceManagerFactory.createResourceManager(
                configuration,
                ResourceID.generate(),
                rpcService,
                highAvailabilityServices,
                heartbeatServices,
                fatalErrorHandler,
                new ClusterInformation(hostname, blobServer.getPort()),
                webMonitorEndpoint.getRestBaseUrl(),
                metricRegistry,
                hostname,
                ioExecutor);
ActiveResourceManagerFactory#createResourceManager
  --> YarnResourceManagerFactory#createResourceManagerDriver
public YarnResourceManagerDriver(
        Configuration flinkConfig,
        YarnResourceManagerDriverConfiguration configuration,
        YarnResourceManagerClientFactory yarnResourceManagerClientFactory,
        YarnNodeManagerClientFactory yarnNodeManagerClientFactory) {
    super(flinkConfig, GlobalConfiguration.loadConfiguration(configuration.getCurrentDir()));

    this.yarnConfig = new YarnConfiguration();
    this.requestResourceFutures = new HashMap<>();
    this.configuration = configuration;

    final int yarnHeartbeatIntervalMS =
            flinkConfig.getInteger(YarnConfigOptions.HEARTBEAT_DELAY_SECONDS) * 1000;

    final long yarnExpiryIntervalMS =
            yarnConfig.getLong(
                    YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS,
                    YarnConfiguration.DEFAULT_RM_AM_EXPIRY_INTERVAL_MS);

    if (yarnHeartbeatIntervalMS >= yarnExpiryIntervalMS) {
        log.warn(
                "The heartbeat interval of the Flink Application master ({}) is greater "
                        + "than YARN's expiry interval ({}). The application is likely to be killed by YARN.",
                yarnHeartbeatIntervalMS,
                yarnExpiryIntervalMS);
    }
    yarnHeartbeatIntervalMillis = yarnHeartbeatIntervalMS;
    containerRequestHeartbeatIntervalMillis =
            flinkConfig.getInteger(
                    YarnConfigOptions.CONTAINER_REQUEST_HEARTBEAT_INTERVAL_MILLISECONDS);
    this.registerApplicationMasterResponseReflector =
            new RegisterApplicationMasterResponseReflector(log);

    this.yarnResourceManagerClientFactory = yarnResourceManagerClientFactory;
    this.yarnNodeManagerClientFactory = yarnNodeManagerClientFactory;
}
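The sanity check in this constructor is just a unit conversion plus a comparison. The worked example below assumes a 5-second Flink heartbeat and a 600000 ms YARN AM expiry interval (both are assumed defaults — check your own configuration); it is not Flink code, only the same arithmetic spelled out.

// Hypothetical worked example of the constructor's sanity check, using assumed defaults.
public class HeartbeatExpiryCheck {
    public static void main(String[] args) {
        int heartbeatDelaySeconds = 5;          // assumed Flink yarn heartbeat interval
        long yarnExpiryIntervalMs = 600_000L;   // assumed YARN AM liveness expiry interval

        int yarnHeartbeatIntervalMs = heartbeatDelaySeconds * 1000; // 5000 ms

        if (yarnHeartbeatIntervalMs >= yarnExpiryIntervalMs) {
            System.out.println("WARN: heartbeat interval >= YARN expiry interval, AM may be killed");
        } else {
            System.out.println("OK: 5000 ms heartbeat is well below the 600000 ms expiry window");
        }
    }
}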

5.2. Startup

The entry point of the call is DefaultDispatcherResourceManagerComponentFactory#create:

 resourceManager.start();

This triggers the ResourceManager#onStart method:

// ------------------------------------------------------------------------
//  RPC lifecycle methods
// ------------------------------------------------------------------------
@Override
public final void onStart() throws Exception {
    try {
        // Start the ResourceManager services
        startResourceManagerServices();
    } catch (Throwable t) {
        final ResourceManagerException exception =
                new ResourceManagerException(
                        String.format("Could not start the ResourceManager %s", getAddress()),
                        t);
        onFatalError(exception);
        throw exception;
    }
}

// Start the ResourceManager services
private void startResourceManagerServices() throws Exception {
    try {
        leaderElectionService =
                highAvailabilityServices.getResourceManagerLeaderElectionService();

        // Perform the initialization:
        // build, initialize and start the YARN RM and NM clients
        initialize();

        // Start the ResourceManager through the leader election service
        leaderElectionService.start(this);
        jobLeaderIdService.start(new JobLeaderIdActionsImpl());

        // Register TaskExecutor metrics
        registerTaskExecutorMetrics();
    } catch (Exception e) {
        handleStartResourceManagerServicesException(e);
    }
}
