Confidence score for KPI findings #40

Open
janniclas opened this issue Nov 26, 2024 · 0 comments
Labels: breaking, enhancement (New feature or request)


janniclas (Contributor) commented Nov 26, 2024

Each `RawValueKPI` must contain a confidence score (0 <= confidence <= 100, where 100 means we are as confident as it gets; see the sketch after this list). The confidence score reflects:

  1. how confident we are in the correctness of the finding (how likely a detected secret is a real secret; see trufflehog's verified vs. unverified secrets)
  2. how to prioritize the findings (the confidence in a CVE score could, for instance, be based on the existence of a known exploit; see https://github.com/BenjiTrapp/cve-prio-marble)
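
A minimal sketch of what the extended type could look like, assuming `RawValueKPI` is a Kotlin data class; the fields besides `confidence` are placeholders, not SPHA's actual shape:

```kotlin
// Hypothetical sketch; the actual RawValueKPI in SPHA will differ in detail.
data class RawValueKPI(
    val kind: String,          // placeholder identifier for the KPI type
    val score: Int,            // the raw KPI value
    val confidence: Int = 100  // new field: 0..100, 100 = as confident as it gets
) {
    init {
        // Enforce the 0 <= confidence <= 100 invariant at construction time.
        require(confidence in 0..100) { "confidence must be in 0..100, was $confidence" }
    }
}
```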

We must provide a default way to calculate the confidence for every given tool in its tool adapter.
We must provide an optional input to the adapter that overrides the default calculation. This can be used from the SPHA-CLI as follows: `transform --tool osv --confidence 50` sets the confidence for this specific transformation.
It must be decided how to use `--confidence` for dynamic calculations. Setting a fixed confidence score for all transformed results is easy, i.e., simply saying that results of this tool always have this confidence. However, it might also make sense to derive the confidence from parts of the tool results. Customizing this is more difficult and requires further thought. The naive solution of overriding our calculation with whatever value `--confidence` provides is sufficient for the start; a sketch of this follows below.
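
A rough sketch of the adapter side under these assumptions; the interface name, the `kindOf`/`scoreOf` helpers, and the parameter names are hypothetical, not SPHA's actual API:

```kotlin
// Hypothetical adapter shape; names are illustrative only.
interface ToolAdapter<T> {
    // Tool-specific default confidence per finding.
    fun defaultConfidence(finding: T): Int

    fun kindOf(finding: T): String
    fun scoreOf(finding: T): Int

    // confidenceOverride models the --confidence CLI flag; when set, it
    // naively replaces the default calculation, as proposed above.
    fun transform(findings: List<T>, confidenceOverride: Int? = null): List<RawValueKPI> =
        findings.map { finding ->
            RawValueKPI(
                kind = kindOf(finding),
                score = scoreOf(finding),
                confidence = confidenceOverride ?: defaultConfidence(finding)
            )
        }
}
```

A trufflehog adapter, for example, could implement `defaultConfidence` to return 100 for verified secrets and a lower value for unverified ones.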

`calculate` must be extended to take an `--ignore-confidence` flag that disables all confidence ratings.
`calculate` must be extended to take confidence into account. We must decide where to integrate confidence in the calculation, most likely in every KPI calculation. We also want to make the confidence configurable through the KPI hierarchy. How should this work?
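
One candidate integration point, sketched under the assumption that a parent KPI aggregates its children as a weighted average; the function shape and the weighting scheme are assumptions, not a decided design:

```kotlin
import kotlin.math.roundToInt

// Hypothetical aggregation step that treats confidence as a weight.
// ignoreConfidence models the proposed --ignore-confidence flag; when true,
// every child counts fully and the result is a plain average again.
fun aggregate(children: List<RawValueKPI>, ignoreConfidence: Boolean = false): Int {
    if (children.isEmpty()) return 0
    val weights = children.map { if (ignoreConfidence) 1.0 else it.confidence / 100.0 }
    val totalWeight = weights.sum()
    // If every child has confidence 0, avoid dividing by zero.
    if (totalWeight == 0.0) return 0
    val weightedSum = children.zip(weights).sumOf { (kpi, w) -> kpi.score * w }
    return (weightedSum / totalWeight).roundToInt()
}
```

Treating confidence as a weight means low-confidence findings still contribute, just less, and `--ignore-confidence` cleanly reduces the calculation to a plain average. Making the weighting configurable per node would be one way to expose confidence through the KPI hierarchy.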
