Microservice Governance with Distributed Tracing, Part 3: Zipkin in Practice

This installment is a hands-on build of a Zipkin-based distributed tracing setup. Because I am not familiar with the Scala Play Framework 2 internals, I did not use the OpenTelemetry SDK to instrument the Play side. Scala does have the zio-telemetry library for an OpenTelemetry-based approach, but I don't know how to wire zio-telemetry into the Play framework; if anyone is familiar with this, please leave a comment.


Contents

  • Microservice Governance with Distributed Tracing, Part 3: Zipkin in Practice
  • Preface
  • I. Environment Setup
    • 1. Setting up Jaeger (single-node podman)
  • II. Code Walkthrough
    • 1. goframe implementation
    • 2. Play framework implementation
    • 3. Jaeger results
  • Summary

Preface

In this experiment the tracing backend is Jaeger, and the wire format is the Zipkin protocol. On the Play side the third-party library play-zipkin-tracing-play is used.

The gtrace module in goframe uses go.opentelemetry.io/otel/propagation.TraceContext for propagation, i.e. the W3C Trace Context standard (https://www.w3.org/TR/trace-context/), whereas play-zipkin-tracing-play uses the Zipkin B3 format; the two are not compatible. For that reason this experiment does not use goframe's official gtrace module and instead pulls in Zipkin's official Go library, zipkin-go, directly.
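The incompatibility is easiest to see in the propagation headers themselves. The Go sketch below is purely illustrative (example IDs; in real traffic these headers are injected by the tracing libraries, not by hand) and shows the same trace identifiers carried in the two formats:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, _ := http.NewRequest("GET", "http://localhost:9000/hello", nil)

	// W3C Trace Context (what otel/propagation.TraceContext, and therefore gtrace, emits):
	// a single traceparent header laid out as version-traceid-spanid-flags.
	req.Header.Set("traceparent", "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")

	// Zipkin B3 multi-header format (what play-zipkin-tracing-play understands):
	// trace id, span id and the sampling decision travel in separate X-B3-* headers.
	req.Header.Set("X-B3-TraceId", "4bf92f3577b34da6a3ce929d0e0e4736")
	req.Header.Set("X-B3-SpanId", "00f067aa0ba902b7")
	req.Header.Set("X-B3-Sampled", "1")

	fmt.Println(req.Header)
}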

Experiment environment:

  • goframe: 1.16.6
  • playframework: 2.8.8
  • scala: 2.13.5
  • golang: 1.17

I. Environment Setup

Note: Jaeger (see the official site) can be deployed in several ways. During development, a single-node deployment of the all-in-one image with podman is enough, and that is what this experiment uses.

1. Setting up Jaeger (single-node podman)

The official docs also describe an operator-based deployment in which Jaeger's backend storage is Elasticsearch; for this experiment the all-in-one image is sufficient.

Startup command:

podman run -d --name jaeger \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 5775:5775/udp \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 14268:14268 \
  -p 14250:14250 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.27
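Once the container is up, the Jaeger UI is served on http://localhost:16686 and the Zipkin-compatible collector listens on port 9411. As a quick smoke test (a minimal sketch, not part of the original setup; the service name smoke-test is arbitrary), you can report a single span with zipkin-go and confirm it appears in the UI:

package main

import (
	"log"

	"github.com/openzipkin/zipkin-go"
	httpreporter "github.com/openzipkin/zipkin-go/reporter/http"
)

func main() {
	// report spans to the Zipkin-compatible endpoint exposed by the all-in-one container
	reporter := httpreporter.NewReporter("http://localhost:9411/api/v2/spans")
	defer reporter.Close() // Close flushes any buffered spans before exit

	endpoint, err := zipkin.NewEndpoint("smoke-test", "127.0.0.1")
	if err != nil {
		log.Fatalf("unable to create endpoint: %+v", err)
	}
	tracer, err := zipkin.NewTracer(reporter, zipkin.WithLocalEndpoint(endpoint))
	if err != nil {
		log.Fatalf("unable to create tracer: %+v", err)
	}

	// a single root span; after it is finished and the reporter is closed,
	// it should be visible in the Jaeger UI under the "smoke-test" service
	span := tracer.StartSpan("smoke_test_span")
	span.Finish()
}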

II. Code Walkthrough

1. goframe implementation

Without further ado, here is the key code (example):

go.mod:

require (
	github.com/gogf/gf v1.16.6
	github.com/opentracing/opentracing-go v1.2.0 // indirect
	github.com/openzipkin/zipkin-go v0.3.0
)

hello.go (controller):

package api

import (
	"log"
	"net/http"

	"github.com/gogf/gf/frame/g"
	"github.com/gogf/gf/net/ghttp"
	"github.com/openzipkin/zipkin-go"
	zipkinhttp "github.com/openzipkin/zipkin-go/middleware/http"
	httpreporter "github.com/openzipkin/zipkin-go/reporter/http"
)

var Index = indexApi{}

type indexApi struct{}

// Index is a demonstration route handler for output "Hello World!".
func (*indexApi) Index(r *ghttp.Request) {
	tracer := getZipkinTracer("trace_request_zipkin", "127.0.0.1")
	// create a root span
	span := tracer.StartSpan("trace_zipkin_start")
	trace_span_a(tracer, span)
	defer span.Finish()
	g.Log().Line().Skip(1).Infof("trace-service-a msg: %s", "index")
	r.Response.Writeln("Hello World!")
}

func trace_span_a(tracer *zipkin.Tracer, span zipkin.Span) {
	// create a child span
	childSpan := tracer.StartSpan("trace_span_a", zipkin.Parent(span.Context()))
	defer childSpan.Finish()
	trace_span_b(tracer, childSpan)
}

func trace_span_b(tracer *zipkin.Tracer, span zipkin.Span) {
	// create a child span
	childSpan := tracer.StartSpan("trace_span_b", zipkin.Parent(span.Context()))
	defer childSpan.Finish()
	// create global zipkin traced http client
	client, err := zipkinhttp.NewClient(tracer, zipkinhttp.ClientTrace(true))
	if err != nil {
		log.Printf("unable to create client: %+v\n", err)
	}
	// initiate a call to some_func
	req, err := http.NewRequest("GET", "http://localhost:9000/hello", nil)
	if err != nil {
		log.Printf("unable to create http request: %+v\n", err)
	}
	// create a zipkin context with span to send downstream service
	ctx := zipkin.NewContext(req.Context(), childSpan)
	req = req.WithContext(ctx)
	res, err := client.DoWithAppSpan(req, "trace_play_framework")
	if err != nil {
		log.Printf("unable to do http request: %+v\n", err)
	}
	res.Body.Close()
}

// create a zipkin tracer
func getZipkinTracer(serviceName string, ip string) *zipkin.Tracer {
	// create a reporter to be used by the tracer
	reporter := httpreporter.NewReporter("http://localhost:9411/api/v2/spans")
	// set-up the local endpoint for our service
	endpoint, _ := zipkin.NewEndpoint(serviceName, ip)
	// set-up our sampling strategy
	sampler := zipkin.NewModuloSampler(1)
	// initialize the tracer
	tracer, _ := zipkin.NewTracer(
		reporter,
		zipkin.WithLocalEndpoint(endpoint),
		zipkin.WithSampler(sampler),
	)
	return tracer
}

Note: 9411 is the Zipkin collector port.
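Only the controller is shown above; a minimal sketch of how it might be wired into a goframe HTTP server is given below. The module path trace-test/app/api and port 8199 are assumptions, adjust them to your own project:

package main

import (
	"github.com/gogf/gf/frame/g"

	"trace-test/app/api" // hypothetical module path for the hello.go controller above
)

func main() {
	s := g.Server()
	// route "/" to the traced Index handler
	s.BindHandler("/", api.Index.Index)
	s.SetPort(8199) // assumed port for the Go service
	s.Run()
}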

2. Play framework implementation

Code layout:

Again, straight to the key code.

build.sbt:

name := """trace-test-scala"""
organization := "com.example"version := "1.0-SNAPSHOT"lazy val root = (project in file(".")).enablePlugins(PlayScala)scalaVersion := "2.13.6"libraryDependencies ++= Seq(ws,guice,// import play-zipkin-tracing-play library"io.zipkin.brave.play" %% "play-zipkin-tracing-play" % "3.0.2", "net.logstash.logback" % "logstash-logback-encoder" % "5.3","org.scalatestplus.play" %% "scalatestplus-play" % "5.0.0" % Test
)

application.conf:

play.http.filters = Filters

trace {
  service-name = "zipkin-api-sample"

  zipkin {
    base-url = "http://localhost:9411" // set zipkin port
    sample-rate = 1 // set-up our sampling strategy
  }
}

zipkin-trace-context {
  fork-join-executor {
    parallelism-factor = 20.0
    parallelism-max = 200
  }
}

play.modules.enabled += "brave.play.module.ZipkinModule"

logback.xml:

<!-- https://www.playframework.com/documentation/latest/SettingsLogger -->
<configuration>
  <conversionRule conversionWord="coloredLevel" converterClass="play.api.libs.logback.ColoredLevel" />

  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>${application.home:-.}/logs/application.log</file>
    <encoder>
      <pattern>%date [%level] from %logger in %thread %marker - %.-512message %n%xException</pattern>
    </encoder>
  </appender>

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%coloredLevel %date from %logger in %thread %marker - %.-512message %n%xException</pattern>
    </encoder>
  </appender>

  <appender name="ASYNCFILE" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE" />
  </appender>

  <appender name="ASYNCSTDOUT" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="STDOUT" />
  </appender>

  <logger name="play" level="INFO" />
  <logger name="controllers" level="DEBUG" />
  <logger name="clients" level="DEBUG" />
  <logger name="services" level="DEBUG" />

  <!-- Off these ones as they are annoying, and anyway we manage configuration ourselves -->
  <logger name="com.avaje.ebean.config.PropertyMapLoader" level="OFF" />
  <logger name="com.avaje.ebeaninternal.server.core.XmlConfigLoader" level="OFF" />
  <logger name="com.avaje.ebeaninternal.server.lib.BackgroundThread" level="OFF" />
  <logger name="com.gargoylesoftware.htmlunit.javascript" level="OFF" />

  <root level="WARN">
    <appender-ref ref="ASYNCFILE" />
    <appender-ref ref="ASYNCSTDOUT" />
  </root>
</configuration>

HelloController.scala:

package controllers

import logging.RequestMarkerContext

import javax.inject._
import play.api._
import play.api.mvc._
import play.api.{Logger, MarkerContext}
import play.api.libs.json.Json

/** This controller creates an `Action` to handle HTTP requests to the
  * application's home page.
  */
@Singleton
class HelloController @Inject() (
    components: ControllerComponents
    // service: Service
) extends AbstractController(components)
    with RequestMarkerContext {

  private lazy val logger = Logger(this.getClass)

  /** Create an Action to render an HTML page.
    *
    * The configuration in the `routes` file means that this method will be
    * called when the application receives a `GET` request with a path of `/`.
    */
  def hello() = Action { implicit request: Request[AnyContent] =>
    logger.info(s"header is ${request.headers}")
    Ok(Json.obj("result" -> "ok"))
  }
}
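For the Go client's request to http://localhost:9000/hello to reach this action, conf/routes needs a matching entry; the routes file is not shown here, but a minimal sketch would be:

GET     /hello      controllers.HelloController.hello()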

RequestMarkerContext.scala:

package logging

import play.api.MarkerContext
import play.api.mvc.RequestHeader

import scala.collection.JavaConverters._
import java.security.MessageDigest
import java.util.UUID

trait RequestMarkerContext {

  private def getMarkersMap(requestHeader: RequestHeader) = Map(
    "method" -> requestHeader.method,
    "uri" -> requestHeader.uri,
    "x-b3-spanid" -> requestHeader.headers
      .get("x-b3-spanid")
      .getOrElse(hashMD5(UUID.randomUUID().toString).substring(8, 24)),
    "x-b3-traceid" -> requestHeader.headers
      .get("x-b3-traceid")
      .getOrElse(hashMD5(UUID.randomUUID().toString).substring(8, 24))
  )

  implicit def requestHeaderToMarkerContext(implicit
      requestHeader: RequestHeader): MarkerContext = {
    import net.logstash.logback.marker.Markers._
    MarkerContext(appendEntries(getMarkersMap(requestHeader).asJava))
  }

  implicit def requestHeaderToMarkerContextMap(implicit
      requestHeader: RequestHeader): Map[String, String] =
    getMarkersMap(requestHeader)

  def hashMD5(content: String): String = {
    val md5 = MessageDigest.getInstance("MD5")
    val encoded = md5.digest(content.getBytes)
    encoded.map("%02x".format(_)).mkString
  }
}

LoggingFilter.scala:

package filter

import akka.stream.Materializer
import play.api.mvc.{Filter, RequestHeader, Result}

import javax.inject.Inject
import scala.concurrent.{ExecutionContext, Future}

class LoggingFilter @Inject() (implicit
    val mat: Materializer,
    ec: ExecutionContext
) extends Filter {

  val headerNamesToBePropagated = Set(
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "X-B3-TraceId",
    "X-B3-SpanId",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "trace-id",
    "span-id",
    "Trace-Id",
    "Span-Id"
  )

  def apply(nextFilter: RequestHeader => Future[Result])(
      requestHeader: RequestHeader): Future[Result] = {
    val headersToBePropagated = requestHeader.headers.headers.filter(h =>
      headerNamesToBePropagated.contains(h._1))
    nextFilter(requestHeader).map { result =>
      result.withHeaders(headersToBePropagated: _*)
    }
  }
}

Filters.scala:

import filter.LoggingFilter
import play.api.http.{DefaultHttpFilters, EnabledFilters}
import play.filters.gzip.GzipFilter
import brave.play.filter.ZipkinTraceFilter

import javax.inject.Inject

class Filters @Inject() (
    defaultFilters: EnabledFilters,
    gzip: GzipFilter,
    log: LoggingFilter,
    trace: ZipkinTraceFilter
) extends DefaultHttpFilters(defaultFilters.filters :+ gzip :+ trace :+ log: _*)

3. Jaeger results

trace timeline:

trace graph:


Summary

This experiment built a basic microservice setup out of a Play framework service and a goframe service, and then traced the calls between them with the Zipkin-based distributed tracing stack. It gives a rough picture of how a distributed tracing system is put together and operated. A few questions remain open:

  1. How to integrate OpenTelemetry-standard tracing into the Play framework.
  2. goframe's own gtrace module, being built on the OTel standard, can easily bring middleware such as Kafka and Redis into the tracing picture; how can the Zipkin approach achieve the same?
  3. Our production services run on Kubernetes with the Istio service mesh; how should a tracing system deliver its full value within the mesh's microservice governance stack?
  4. Microservice observability is a large topic in its own right; how logging, tracing, and metrics are built, operated, and coordinated deserves a dedicated write-up.
