A few days ago, while working on some development, I happened upon an execution tree that Spark printed in IDEA while running a Hive statement:

Confusingly, though, it never showed up again after that.

If any expert passing by sees this, please tell me which display setting turns this execution-tree output on.

The execution tree is really detailed, and I feel it is very helpful for understanding how Spark SQL parses and optimizes queries.
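
I still don't know which setting made IDEA show it that one time, but you don't have to wait for it: the plan can be printed on demand. A minimal sketch, assuming Spark 1.x with Hive support (the TungstenExchange and HiveTableScan nodes in the tree below point to Spark 1.5/1.6; the app name, master, and sample query here are just placeholders):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("plan-demo").setMaster("local[*]"))
val hiveContext = new HiveContext(sc)

val df = hiveContext.sql("select waybill_no, operation_time from ecs.t_dlv_trajectory_record limit 10")

// Physical plan only -- the same kind of tree pasted below.
df.explain()

// Parsed, analyzed, and optimized logical plans plus the physical plan.
df.explain(extended = true)

// The plan is also available as a value, e.g. for logging it yourself.
println(df.queryExecution.executedPlan)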


The code, the execution tree, and my own understanding follow:

Spark HiveQL execution tree (partial)
Executed SQL:
create table if not exists tmp_t_rdvs_exp_waybill_detail
as
select t.waybill_no,
       t.operation_time,
       substr(VEHICLE_NO, 2) DELIVERY_CODE,
       t.DELIVERY_NAME,
       t.operate_org_name,
       t1.CUSTOMER_PICKUP_ORG_CODE,
       t1.customer_pickup_org_name,
       t4.CAREER_DEPT,
       t4.CAREER_DEPT_NAME,
       t4.BIG_AREA,
       t4.BIG_AREA_NAME,
       t4.SMALL_AREA,
       t4.SMALL_AREA_NAME,
       t5.STOCK_DAY,
       $nowdate INSERT_TIME
  from (select waybill_no,
               VEHICLE_NO,
               DELIVERY_NAME,
               operate_org_name,
               operation_time,
               row_number() over(partition by waybill_no order by operation_time desc) rn
          from ecs.t_dlv_trajectory_record
         where operate_type_name in ('TO_DELIVERING', 'DELIVERING')
           and operation_time >= to_date('${start_date}')
           and operation_time < to_date('${end_date}')) t
  join ECS.T_TAK_WAYBILL_MGMTINFO t1
    on t.waybill_no = t1.waybill_no
   and t.operate_org_name = t1.customer_pickup_org_name
   and t1.ACTIVE = 'Y'
  left join ECS.T_DLV_WAYBILL_SIGNOFF_RESULT d
    on t.waybill_no = d.waybill_no
   and d.data_status = 'Y'
  join ECS.T_BSE_ORG_SALES_DEPARTMENT t3
    on t1.CUSTOMER_PICKUP_ORG_CODE = t3.CODE
   and t3.active = 'Y'
   and t3.IS_LEAGUE_SALEDEPT = 'N'
  left join RDVS.T_RDVS_COURIER_INFO t4
    on t4.BUSINESS_EXPRESS = t1.CUSTOMER_PICKUP_ORG_CODE
--left join rdvs.t_rdvs_exp_stock_detail t5
--       on t5.waybill_no = t.waybill_no
 where t.rn = 1
   and d.waybill_no is null
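
Two details of the statement are worth flagging. First, t5.STOCK_DAY is still in the select list even though the join to rdvs.t_rdvs_exp_stock_detail is commented out, so the statement as pasted would fail analysis; the run that produced the plan below presumably dropped that column as well. Second, $nowdate, ${start_date}, and ${end_date} are not HiveQL; they look like template variables that a scheduler substitutes before submission. A minimal sketch of that substitution, assuming the statement is stored in a template file (the file name is hypothetical, and the dates match the literals visible in the plan):

// Hypothetical: the statement above, saved as a template file next to the job.
val sqlTemplate = scala.io.Source.fromFile("exp_waybill_detail.sql").mkString

// Values a scheduler would normally inject; 2017-05-12/13 are the dates in the plan below.
val params = Map(
  "${start_date}" -> "2017-05-12",
  "${end_date}"   -> "2017-05-13",
  "$nowdate"      -> "'2017-05-13'")

val sqlText = params.foldLeft(sqlTemplate) { case (sql, (k, v)) => sql.replace(k, v) }
hiveContext.sql(sqlText)   // hiveContext as created in the first snippet
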
/*
My understanding:
1. Table scans start from the FROM clause and proceed down through each join / left join.
2. The WHERE clause is logically evaluated last, but some of its predicates (e.g. rn = 1) are pushed down and executed earlier.
3. Each joined table, after being scanned and filtered, is hash-partitioned on its join keys (the TungstenExchange hashpartitioning steps) so the subsequent join runs more efficiently.
4. Table t4 is not hash-partitioned on its join key. I was initially unsure why, but the plan itself shows the reason: t4 enters a BroadcastHashOuterJoin, i.e. Spark judged rdvs.t_rdvs_courier_info small enough to broadcast to every executor, so no shuffle is needed for it.
*/
Execution tree (physical plan):

+- Project [operate_org_name#9,customer_pickup_org_name#52,BIG_AREA_NAME#253,CAREER_DEPT_NAME#255,waybill_no#6,CUSTOMER_PICKUP_ORG_CODE#51,CAREER_DEPT#254,DELIVERY_NAME#17,BIG_AREA#252,operation_time#12,SMALL_AREA_NAME#251,VEHICLE_NO#16,SMALL_AREA#250]
+- BroadcastHashOuterJoin [CUSTOMER_PICKUP_ORG_CODE#51], [BUSINESS_EXPRESS#248], LeftOuter, None
:- Project [operate_org_name#9,customer_pickup_org_name#52,waybill_no#6,CUSTOMER_PICKUP_ORG_CODE#51,DELIVERY_NAME#17,operation_time#12,VEHICLE_NO#16]
:  +- SortMergeJoin [CUSTOMER_PICKUP_ORG_CODE#51], [CODE#175]
:     :- Sort [CUSTOMER_PICKUP_ORG_CODE#51 ASC], false, 0
:     :  +- TungstenExchange hashpartitioning(CUSTOMER_PICKUP_ORG_CODE#51,200), None
:     :     +- Project [operate_org_name#9,customer_pickup_org_name#52,waybill_no#6,CUSTOMER_PICKUP_ORG_CODE#51,DELIVERY_NAME#17,operation_time#12,VEHICLE_NO#16]
:     :        +- Filter isnull(waybill_no#139)
:     :           +- SortMergeOuterJoin [waybill_no#6], [waybill_no#139], LeftOuter, None
:     :              :- Sort [waybill_no#6 ASC], false, 0
:     :              :  +- TungstenExchange hashpartitioning(waybill_no#6,200), None
:     :              :     +- SortMergeJoin [waybill_no#6,operate_org_name#9], [waybill_no#22,customer_pickup_org_name#52]
:     :              :        :- Sort [waybill_no#6 ASC,operate_org_name#9 ASC], false, 0
:     :              :        :  +- TungstenExchange hashpartitioning(waybill_no#6,operate_org_name#9,200), None
:     :              :        :     +- ConvertToUnsafe
:     :              :        :        +- Filter (rn#0 = 1)
:     :              :        :           +- Window [waybill_no#6,VEHICLE_NO#16,DELIVERY_NAME#17,operate_org_name#9,operation_time#12], [HiveWindowFunction#org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRowNumber() windowspecdefinition(waybill_no#6,operation_time#12 DESC,ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS rn#0], [waybill_no#6], [operation_time#12 DESC]
:     :              :        :              +- Sort [waybill_no#6 ASC,operation_time#12 DESC], false, 0
:     :              :        :                 +- TungstenExchange hashpartitioning(waybill_no#6,200), None
:     :              :        :                    +- Project [waybill_no#6,VEHICLE_NO#16,DELIVERY_NAME#17,operate_org_name#9,operation_time#12]
:     :              :        :                       +- Filter ((operate_type_name#8 IN (TO_DELIVERING,DELIVERING) && (cast(operation_time#12 as string) >= 2017-05-12)) && (cast(operation_time#12 as string) < 2017-05-13))
:     :              :        :                          +- HiveTableScan [operate_org_name#9,waybill_no#6,DELIVERY_NAME#17,operate_type_name#8,VEHICLE_NO#16,operation_time#12], MetastoreRelation ecs, t_dlv_trajectory_record, None
:     :              :        +- Sort [waybill_no#22 ASC,customer_pickup_org_name#52 ASC], false, 0
:     :              :           +- TungstenExchange hashpartitioning(waybill_no#22,customer_pickup_org_name#52,200), None
:     :              :              +- ConvertToUnsafe
:     :              :                 +- Filter (ACTIVE#106 = Y)
:     :              :                    +- HiveTableScan [id#21,waybill_no#22,shipper_customer_code#23,shipper_customer_name#24,shipper_customer_mobilephone#25,shipper_customer_phone#26,consignee_customer_code#27,consignee_customer_name#28,consignee_customer_mobilephone#29,consignee_customer_phone#30,bill_type#31,waybill_goodsstate#32,waybill_state#33,order_channel#34,create_order_type#35,waybill_type#36,order_no#37,fatherchild#38,goods_qty#39,goods_volume#40,goods_weight#41,shipper_contract_dept_code#42,shipper_contract_dept_name#43,consignee_contract_dept_code#44,consignee_contract_dept_name#45,receive_method#46,dest_transferstation_code#47,dest_transferstation_name#48,departure_dept_no#49,departure_dept_name#50,customer_pickup_org_code#51,customer_pickup_org_name#52,receive_org_code#53,receive_org_name#54,billing_org_code#55,billing_org_name#56,fee_charge_dept_code#57,fee_charge_dept_name#58,flight_type#59,return_bill_type#60,merge_mode#61,flight_shift#62,outer_notes#63,inner_notes#64,preconfig_route_code#65,preconfig_route_name#66,transportation_remark#67,prepaid_confidential#68,sendoff_unified_settlement#69,receice_unified_settlement#70,account_name#71,account_bank#72,account_code#73,internal_delivery_type#74,internal_staff_code#75,bill_time#76,handle_type#77,start_reminder_org_code#78,start_reminder_org_name#79,start_contract_org_code#80,arrive_contract_org_code#81,supply_code_status#82,last_load_org_code#83,pickup_emp_no#84,pickup_emp_name#85,delivery_emp_no#86,delivery_emp_name#87,reasons_return#88,invoice_mark#89,create_no#90,update_no#91,create_name#92,update_name#93,create_org_code#94,update_org_code#95,create_org_name#96,update_org_name#97,create_time#98,update_time#99,data_status#100,operation_device#101,operation_device_code#102,version#103,active_begin_time#104,active_end_time#105,active#106,load_org_name#107,pre_departure_time#108,pre_arrive_time#109,pre_customer_pickup_time#110,simple_name#111,simpl_address#112,telep_hone#113,channel_decimal#114,isebig_ewaybill#115,cust_group#116,product_code#117,deliver_country_code#118,receive_country_code#119,deliver_province_code#120,receive_province_code#121,receive_city_code#122,deliver_city_code#123,deliver_area_code#124,receive_area_code#125,deliver_county_code#126,receive_county_code#127,load_org_code#128,post_recorded_time#129,goods_type#130,cn_wd#131,service_flag#132,schedule_type#133,valuation_time#134,ptp_update_time#135,main_customer_code#136,market_customer_code#137,opt_mon#19,gopt_day#20], MetastoreRelation ecs, t_tak_waybill_mgmtinfo, Some(t1)
:     :              +- Sort [waybill_no#139 ASC], false, 0
:     :                 +- TungstenExchange hashpartitioning(waybill_no#139,200), None
:     :                    +- ConvertToUnsafe
:     :                       +- Filter (data_status#163 = Y)
:     :                          +- HiveTableScan [id#138,waybill_no#139,sign_state#140,settle_status#141,signer_name#142,signer_code#143,signer_type#144,signoff_qty#145,signoff_time#146,signoff_situation#147,memo#148,package_situation#149,goods_qty_situation#150,dropoff_emp_no#151,dropoff_emp_name#152,create_no#153,update_no#154,create_name#155,update_name#156,create_org_code#157,update_org_code#158,create_org_name#159,update_org_name#160,create_time#161,update_time#162,data_status#163,operation_device#164,operation_device_code#165,operation_code#166,operation_assist_code#167,control_status#168,operator_no#169,operator_name#170,operation_org_code#171,operation_org_name#172,operation_time#173], MetastoreRelation ecs, t_dlv_waybill_signoff_result, Some(d)
:     +- Sort [CODE#175 ASC], false, 0
:        +- TungstenExchange hashpartitioning(CODE#175,200), None
:           +- Project [CODE#175]
:              +- Filter ((active#199 = Y) && (IS_LEAGUE_SALEDEPT#235 = N))
:                 +- HiveTableScan [CODE#175,active#199,IS_LEAGUE_SALEDEPT#235], MetastoreRelation ecs, t_bse_org_sales_department, Some(t3)
+- ConvertToUnsafe
   +- HiveTableScan [BIG_AREA_NAME#253,CAREER_DEPT_NAME#255,CAREER_DEPT#254,BUSINESS_EXPRESS#248,BIG_AREA#252,SMALL_AREA_NAME#251,SMALL_AREA#250], MetastoreRelation rdvs, t_rdvs_courier_info, Some(t4)
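
Two configuration knobs explain details of this plan. The ,200 in every hashpartitioning(...,200) is spark.sql.shuffle.partitions at its default of 200, and t4 lands in a BroadcastHashOuterJoin because the estimated size of rdvs.t_rdvs_courier_info falls below spark.sql.autoBroadcastJoinThreshold (10 MB by default). A minimal sketch of tuning both, reusing the hiveContext from the first snippet (the values are illustrative, not recommendations):

// Number of partitions every shuffle (TungstenExchange) uses; the plan shows the default, 200.
hiveContext.setConf("spark.sql.shuffle.partitions", "400")

// Tables estimated below this many bytes are broadcast instead of shuffled;
// default 10485760 (10 MB). Set it to -1 to disable broadcast joins entirely.
hiveContext.setConf("spark.sql.autoBroadcastJoinThreshold", (50 * 1024 * 1024).toString)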
