
kafka.log.partition.size metric #267

Open
mthoretton opened this issue Aug 28, 2019 · 5 comments

Comments

@mthoretton

Hello,

I have looked through the docs, the issues, and the internet, and could not find how to push the kafka.log.partition.size metric. It is not part of the default metrics listed at https://docs.datadoghq.com/integrations/kafka/.

Do I need a custom conf or a custom check? Thanks!

@mthoretton
Author

So, in order to collect it, you need a custom conf (which is actually based on JMX). See https://github.com/DataDog/integrations-core/blob/master/kafka/datadog_checks/kafka/data/conf.yaml.example

This is the conf to add. The order of the parameters in bean_regex matters! I was able to debug it thanks to https://docs.datadoghq.com/integrations/faq/troubleshooting-jmx-integrations/?tab=agentv6.

init_config:
  is_jmx: true
  collect_default_metrics: true
  conf:
    ...
    - include:
        domain: 'kafka.log'
        bean_regex: 'kafka\.log:type=Log,name=Size,topic=(.*?),partition=(.*?)(?:,|$)'
        tags:
          topic: $1
          partition: $2
        attribute:
          Value:
            alias: kafka.log.partition.size
            metric_type: gauge

Note that by adding this we may hit the default limit of 350 JMX metrics, though.
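For completeness, here is a minimal sketch of the instances section that would sit alongside the init_config block above (the host and port values are illustrative placeholders, not from my setup); max_returned_metrics is the JMX instance option that controls that 350-metric cap:

instances:
  - host: localhost              # JMX host of the broker (illustrative)
    port: 9999                   # JMX port exposed by the broker (illustrative)
    max_returned_metrics: 1000   # raise the default 350 JMX metrics limit if needed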

Feel free to add that to the doc (I could also create a PR) or close this issue.

@seanlutner

It would be super awesome if this were part of the documentation. I just spent about an hour feeling like I was crazy because I didn't have this metric and there was no indication that I wouldn't have it by default.

@jamiealquiza
Collaborator

Sorry about that - I forgot that it's non-default because our monitoring config was set up so long ago, but I'll make a note to document this.

@xkrt

xkrt commented Mar 18, 2021

Is it possible to gather just topic sizes without a breakdown by partition? It would be very helpful to monitor just topic sizes and not hit the 350-metric limit.

@jamiealquiza
Collaborator

Is it possible to gather just topic sizes without a breakdown by partition? It would be very helpful to monitor just topic sizes and not hit the 350-metric limit.

Unfortunately topicmappr will need the per-partition sizes since it's making placement decisions at the partition level.

FWIW, the metrics input it uses for this is formatted in a standardized way and can be sourced from anywhere as long as it follows that format; see the third-party implementations in https://github.com/DataDog/kafka-kit/tree/master/cmd/metricsfetcher. For reasons other than per-host metric limits, I've thought about having an agent/script gather the partition sizes for me in some way other than summing kafka.log.partition.size by {topic,partition}.
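As a side note (just a sketch, not part of kafka-kit): if the goal is purely dashboards or monitors rather than topicmappr input, the per-partition metric collected above can be rolled up per topic at query time in Datadog, for example:

sum:kafka.log.partition.size{*} by {topic}

That doesn't reduce what the agent collects, so it won't help with the 350-metric limit, but it does give a per-topic view.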
