I’ve been dabbling a bit more with OpenStack as of late. If you know me, you can likely guess my goal: figuring out how to ingest logs, monitor resources, etc.

I’ve been trying to see how well Ceilometer, one of the core components of OpenStack that actually provides some of this stuff, would work. Initially, I was a bit bummed, but after fumbling around for a while, I am starting to see the light.

You see, the reason I almost abandoned the idea of using Ceilometer was due to the fact that some of the “meters” it provides are, well, a bit nonsensical (IMHO).

For example, there’s network.outgoing.bytes, which is what you would expect… sort of. Turns out, this is a “cumulative” meter. In other words, this meter tells me the total number of bytes sent out a given Instance’s virtual NIC. Ceilometer has the following meter types:

  • Cumulative: Increasing over time (e.g.: instance hours)
  • Gauge: Discrete items (e.g.: floating IPs, image uploads) and fluctuating values (e.g.: disk I/O)
  • Delta: Changing over time (bandwidth)
Maybe I am naive, but it seems quite a bit more helpful to track this as a value over a given period… you know, so I can get a hint of how much bandwidth a given instance is using. In Ceilometer parlance, this would be a delta metric.
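To make that concrete, here is a trivial Python sketch (my own illustration, not anything out of Ceilometer) of the difference. Given cumulative samples, the thing I actually want is the per-interval delta; the numbers below are made up:

    # Hypothetical cumulative samples of network.outgoing.bytes for one
    # instance NIC: (timestamp_in_seconds, total_bytes_sent_so_far)
    samples = [
        (0, 1000000),
        (600, 1450000),
        (1200, 1500000),
        (1800, 2900000),
    ]

    # The delta view: bytes sent during each interval, which is what I
    # actually want when eyeballing per-instance bandwidth usage.
    deltas = [
        (t2, total2 - total1)
        for (t1, total1), (t2, total2) in zip(samples, samples[1:])
    ]

    for ts, sent in deltas:
        print(f"t={ts}s: {sent} bytes in the last interval")

As I understand it, the rate_of_change transformer we will get to below does essentially this (and also divides by the elapsed time between samples), which is close enough for my purposes.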

I’ll take an aside here and defend the fine folks working on Ceilometer on this one. Ceilometer was built initially to generate non-repudiable billing info. Technically, AFAIK, that is the project’s only goal – though it has morphed to gain things like an alarm framework.

So, now you can see why network.outgoing.bytes would be cumulative: so you can bill a customer for serving up torrents, etc.

Anyway, I can’t imagine that I’m the only person looking to get a Delta metric out of a Cumulative one, so I thought I’d document my way of getting there. Ultimately there might be a better way, YMMV, caveat emptor, covering my backside, yadda yadda.

Transformers to the Rescue!

… no, not that kind of Transformer. Lemme ’splain. No, there is too much. Let me sum up.

Actually, before we start diving in, let’s take a quick tour of Ceilometer’s workflow.

The general steps to the Ceilometer workflow are:

Collect -> (Optionally) Transform -> Publish -> Store -> Read

Collect

There are two methods of collection:

  1. Services (Nova, Neutron, Cinder, Glance, Swift) push data into AMQP and Ceilometer slurps them down
  2. Agents/Notification Handlers poll APIs of the services

This is where our meter data comes from.

Transform/Publish

This is the focus of this post. Transforms are done via the “Pipeline.”

The flow for the Pipeline is:

Meter -> Transformer -> Publisher -> Receiver

  • Meter: Data/Event being collected
  • Transformer: (Optional) Take meters and output new data based on those meters
  • Publisher: How you want to push the data out of Ceilometer
    • To my knowledge, there are only two options:

      • RPC
      • UDP (using msgpack)
  • Receiver: This is the system outside Ceilometer that will receive what the Publisher sends (Logstash for me, at least at the present – will likely move to StatsD + Graphite later on)

Store

While tangential to this post, I won’t leave you wondering about the “Store” part of the pipeline. Here are the storage options:

  • Default: Embedded MongoDB
  • Optional:
    • SQL
    • HBase
    • DB2

Honorable Mention: Ceilometer APIs

Like pretty much everything else in OpenStack, Ceilometer has a suite of open APIs that can also be used to fetch metering data. I initially considered this route, but in the interest of efficiency (read: laziness), I opted to use the Publisher vs rolling my own code to call the APIs.
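For the curious, the API route would have looked roughly like the following. Treat this as a sketch only: the endpoint host, port, token, and resource ID are placeholders for whatever your deployment uses (you would normally fetch the token from Keystone first), and the field names are just my recollection of the v2 API.

    # Rough sketch of pulling samples straight from the Ceilometer v2 API
    # instead of using a Publisher. Endpoint, token, and resource_id are
    # placeholders -- adjust for your own deployment.
    import requests

    CEILOMETER_URL = "http://controller:8777/v2"   # assumed API endpoint/port
    TOKEN = "<keystone-token-goes-here>"

    resp = requests.get(
        CEILOMETER_URL + "/meters/network.outgoing.bytes",
        headers={"X-Auth-Token": TOKEN},
        params={
            # Simple query syntax: limit results to one instance's virtual NIC
            "q.field": "resource_id",
            "q.op": "eq",
            "q.value": "<resource-id-of-the-instance-nic>",
        },
    )
    resp.raise_for_status()

    for sample in resp.json():
        print(sample["timestamp"], sample["counter_volume"], sample["counter_unit"])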

Working the Pipeline

There are two Transformers (at least that I see in the source):

  • Scaling
  • Rate of Change

In our case, we are interested in the latter, as it will give us the delta between two samples.

To change/use a given Transformer, we need to create a new pipeline via /etc/ceilometer/pipeline.yaml.

Here is the default pipeline.yaml:

    ---
    - name: meter_pipeline
      interval: 600
      meters:
        - "*"
      transformers:
      publishers:
        - rpc://
    - name: cpu_pipeline
      interval: 600
      meters:
        - "cpu"
      transformers:
        - name: "rate_of_change"
          parameters:
            target:
              name: "cpu_util"
              unit: "%"
              type: "gauge"
              scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
      publishers:
        - rpc://

The “cpu_pipeline” pipeline gives us a good example of what we will need:

  • A name for the pipeline
  • The interval (in seconds) at which we want the pipeline triggered
  • Which meters we are interested in (“*” is a wildcard for everything, but you can also have an explicit list for when you want the same transformer to act on multiple meters)
  • The name of the transformation we want to use (scaling|rate_of_change)
  • Some parameters to do our transformation:
    • Name: Optionally used if you want to override the metric’s original name
    • Unit: Like Name, can be used to override the original unit (useful for things like converting network.*.bytes from B(ytes) to MB or GB)
    • Type: If you want to override the default type (remember, the options are cumulative|gauge|delta)
    • Scale: A snippet of Python for when you want to scale the result in some way (would typically be used along with Unit)
      • Side note: This one seems to be required, as when I omitted it, I got the value of the cumulative metric. Please feel free to comment if I goobered something up there.

Looking at all of this, we can see that the cpu_pipeline, er, pipeline:

  1. Multiplies the number of vCPUs in the instance (resource_metadata.cpu_number) times 1 billion (10^9, or 10**9 in Python)

    • Note the “or 1”, which is a catch for when resource_metadata.cpu_number doesn’t exist
  2. Divides 100 by the result

The end result is a value that tells us how taxed the Instance is from a CPU standpoint, expressed as a percentage.
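To convince myself the math holds up, here is a quick back-of-the-napkin Python version of that calculation. The sample numbers are invented, and this is just my reading of what the transformer does (rate of change of the cumulative value, multiplied by the scale expression) – not Ceilometer’s actual code:

    # Two hypothetical cumulative "cpu" samples: nanoseconds of CPU time the
    # instance has consumed, taken 600 seconds apart (the pipeline interval).
    prev_ns, prev_t = 1200 * 10**9, 0.0
    curr_ns, curr_t = 1680 * 10**9, 600.0

    cpu_number = 2  # i.e. resource_metadata.cpu_number

    # Rate of change of the cumulative meter: CPU-nanoseconds per wall-clock second
    rate = (curr_ns - prev_ns) / (curr_t - prev_t)

    # Apply the pipeline's scale: 100.0 / (10**9 * (cpu_number or 1))
    cpu_util = rate * (100.0 / (10**9 * (cpu_number or 1)))

    # 480 CPU-seconds burned over 600 wall-clock seconds on 2 vCPUs -> 40.0%
    print(f"cpu_util = {cpu_util:.1f}%")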

Bringing it all Home

Armed with this knowledge, here is what I came up with to get a delta metric out of the network.*.bytes metrics:

    - name: network_stats
      interval: 10
      meters:
        - "network.incoming.bytes"
        - "network.outgoing.bytes"
      transformers:
        - name: "rate_of_change"
          parameters:
            target:
              type: "gauge"
              scale: "1"
      publishers:
        - udp://192.168.255.149:31337

In this case, I’m taking the network.incoming.bytes and network.outgoing.bytes meters and passing them through the “Rate of Change” transformer to spit out a gauge from what was previously a cumulative metric.

I could take it a step further (and likely will) by using the scale parameter to change it from bytes to KB. For now, I am playing with OpenStack in a VM on my laptop, so the amount of traffic is small. After all, the difference between 1.1 and 1.4 in a histogram panel in Kibana isn’t very interesting looking :)

Oh, I forgot… the Publisher. Remember how I said the UDP Publisher uses msgpack to stuff its data in? It just so happens that Logstash has both a UDP input and a msgpack codec. As a result, my Receiver is Logstash – at least for now. Again, it would make a lot more sense to ship this through StatsD and use Graphite to visualize the data. But, even then, I can still use Logstash’s StatsD output for that. Decisions, decisions :)
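If you are curious what the UDP Publisher actually puts on the wire, a tiny Python listener will show you. This is just a debugging stand-in for what the Logstash UDP input + msgpack codec are doing for me; it assumes the msgpack Python library is installed and uses the same port I configured in the pipeline above:

    # Minimal stand-in for Logstash's udp input + msgpack codec: listen for
    # Ceilometer's UDP publisher and print each decoded sample.
    import socket

    import msgpack

    LISTEN = ("0.0.0.0", 31337)  # same port as the udp:// publisher above

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN)

    while True:
        data, addr = sock.recvfrom(64 * 1024)
        # Each datagram should be one msgpack-encoded sample dict
        # (name, volume, unit, resource_id, timestamp, ...)
        sample = msgpack.unpackb(data, raw=False)
        print(f"{sample.get('name')} = {sample.get('volume')} {sample.get('unit')} "
              f"(from {addr[0]})")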

Since the data is in Logstash, that means I can use Kibana to make pretty charts with the data.

Here are the bits I added to my Logstash config to make this happen:

    input {
      udp {
        port  => 31337
        codec => msgpack
        type  => ceilometer
      }
    }

At that point, I get lovely input in ElasticSearch like the following:

{"_index": "logstash-2014.01.16","_type": "logs","_id": "CDPI8-ADSDCoPiqY9YqlEw","_score": null,"_source": {"user_id": "21c98bfa03f14d56bb7a3a44285acf12","name": "network.incoming.bytes","resource_id": "instance-00000009-41a3ff24-f47e-4e29-86ce-4f50a61f78bd-tap30829cd9-5e","timestamp": "2014-01-16T21:54:56Z","resource_metadata": {"name": "tap30829cd9-5e","parameters": {},"fref": null,"instance_id": "41a3ff24-f47e-4e29-86ce-4f50a61f78bd","instance_type": "bfeabe24-08dc-4ea9-9321-1f7cf74b858b","mac": "fa:16:3e:95:84:b8"},"volume": 1281.7777777777778,"source": "openstack","project_id": "81ad9bf97d5f47da9d85569a50bdf4c2","type": "gauge","id": "d66f268c-7ef8-11e3-98cb-000c29785579","unit": "B","@timestamp": "2014-01-16T21:54:56.573Z","@version": "1","tags": [],"host": "192.168.255.147"},

I’ve been dabbling a bit more with OpenStack as of late. If you know me, you can likely guess my goal is how to ingest logs, monitor resources, etc.

 "sort": [1389909296573]
}
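As an aside: since these end up as plain Elasticsearch documents, nothing stops me from querying them directly instead of (or in addition to) charting them in Kibana. Here is a quick sketch, assuming Elasticsearch is listening on localhost and using the index name from the document above:

    # Pull the transformed network samples back out of Elasticsearch directly.
    # Host and index name are assumptions based on the example document above.
    import requests

    query = {
        "query": {"match": {"name": "network.incoming.bytes"}},
        "sort": [{"@timestamp": {"order": "asc"}}],
        "size": 100,
    }

    resp = requests.post(
        "http://localhost:9200/logstash-2014.01.16/_search",
        json=query,
    )
    resp.raise_for_status()

    for hit in resp.json()["hits"]["hits"]:
        src = hit["_source"]
        print(src["@timestamp"], src["name"], src["volume"], src["unit"])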

Finally, I can then point a Histogram panel in Kibana at the “volume” field for these documents and graph the result, like so:

[Screenshot: Kibana histogram panel charting the volume field]

OK, so maybe not as pretty as I sold it to be, but that’s the data’s fault – not the toolchain :) It will look much more interesting once I mirror this on a live system.

Hopefully someone out there on the Intertubes will find this useful and let me know if there’s a better way to get at this!

Original post: https://cjchand.wordpress.com/2014/01/16/transforming-cumulative-ceilometer-stats-to-gauges/
