Adding Feature ECR Image Scan Report (#49)
New ECR Image Scan Event Support added.

---------

Co-authored-by: Rafał Sumisławski <[email protected]>
Co-authored-by: royfur <[email protected]>
3 people authored Feb 2, 2024
1 parent 162c5e5 commit b79aa2a
Showing 11 changed files with 367 additions and 28 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/sync.yml

@@ -48,7 +48,7 @@ jobs:
           set +xv
       - name: store artifacts
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: store
           path: |
@@ -75,7 +75,7 @@ jobs:
       - run: mkdir .tmp

       - name: download template
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@v4
         with:
           name: store
           path: .tmp
4 changes: 3 additions & 1 deletion CHANGELOG.md

@@ -1,5 +1,7 @@
 # Changelog
-
+## v0.0.14 Beta /
+### 🚀 New components 🚀
+- Added support for ECR Image Scan
 ## v0.0.13 Beta / 2024-02-01

 ### 🧰 Bug fixes 🧰
57 changes: 44 additions & 13 deletions Cargo.lock

Some generated files are not rendered by default.

3 changes: 2 additions & 1 deletion Cargo.toml

@@ -4,9 +4,10 @@ version = "0.0.13"
 edition = "2021"

 [dependencies]
-aws_lambda_events = { version = "0.13.1", default-features = false, features = ["s3", "sns", "cloudwatch_logs", "sqs", "kinesis", "kafka"] }
+aws_lambda_events = { version = "0.13.1", default-features = false, features = ["s3", "sns", "cloudwatch_logs", "sqs", "kinesis", "kafka", "ecr_scan"] }
 aws-config = "1.0.3"
 aws-sdk-s3 = "1.5.0"
+aws-sdk-ecr = "1.10.0"
 aws-sdk-secretsmanager = "1.4.0"
 cx_sdk_rest_logs = { git = "ssh://[email protected]/coralogix/coralogix-sdk-rust", default-features = false, features=["rustls"] }
 cx_sdk_core = { git = "ssh://[email protected]/coralogix/coralogix-sdk-rust", default-features = false}
8 changes: 7 additions & 1 deletion README.md

@@ -36,6 +36,10 @@ Coralogix can be configured to receive data directly from your [Kinesis Stream](

 Coralogix can be configured to receive data directly from your [MSK](https://docs.aws.amazon.com/msk/) or [Kafka](https://docs.aws.amazon.com/lambda/latest/dg/with-kafka.html) cluster.

+### Amazon ECR Image Security Scan
+
+Coralogix can be configured to receive ECR [Image Scanning](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html) results.
+
 ## Deployment Options

 > **Important:** Before you get started, ensure that your AWS user has the permissions to create Lambda functions and IAM roles.
@@ -75,14 +79,16 @@ Use an existing Coralogix [Send-Your-Data API key](https://coralogix.com/docs/send-your-data-api-key/)

 | Parameter | Description | Default Value | Required |
 |-----------------------------|-------------|---------------|--------------------|
 | Application name | This will also be the name of the CloudFormation stack that creates your integration. It can include letters (A–Z and a–z), numbers (0–9) and dashes (-). | | :heavy_check_mark: |
-| IntegrationType | Choose the AWS service that you wish to integrate with Coralogix. Can be one of: S3, CloudTrail, VpcFlow, CloudWatch, S3Csv, SNS, SQS, Kinesis, CloudFront. | S3 | :heavy_check_mark: |
+| IntegrationType | Choose the AWS service that you wish to integrate with Coralogix. Can be one of: S3, CloudTrail, VpcFlow, CloudWatch, S3Csv, SNS, SQS, CloudFront, Kinesis, Kafka, MSK, EcrScan. | S3 | :heavy_check_mark: |
 | CoralogixRegion | Your data source should be in the same region as the integration stack. You may choose from one of [the default Coralogix regions](https://coralogix.com/docs/coralogix-domain/): [Custom, EU1, EU2, AP1, AP2, US1, US2]. If this value is set to Custom you must specify the Custom Domain to use via the CustomDomain parameter. | Custom | :heavy_check_mark: |
 | CustomDomain | If you choose a custom domain name for your private cluster, Coralogix will send telemetry from the specified address (e.g. custom.coralogix.com). | | |
 | ApplicationName | The name of the application for which the integration is configured. [Advanced Configuration](#advanced-configuration) specifies dynamic value retrieval options. | | :heavy_check_mark: |
 | SubsystemName | Specify the [name of your subsystem](https://coralogix.com/docs/application-and-subsystem-names/). For a dynamic value, refer to the Advanced Configuration section. For CloudWatch, leave this field empty to use the log group name. | | :heavy_check_mark: |
 | ApiKey | The Send-Your-Data [API Key](https://coralogix.com/docs/send-your-data-api-key/) validates your authenticity. This value can be a direct Coralogix API Key or an AWS Secret Manager ARN containing the API Key. | | :heavy_check_mark: |
 | StoreAPIKeyInSecretsManager | Enable this to store your API Key securely. Otherwise, it will remain exposed in plain text as an environment variable in the Lambda function console. | True | :heavy_check_mark: |
+
+> **Note:** EcrScan doesn't need any extra configuration.
 ### S3/CloudTrail/VpcFlow/S3Csv Configuration

 This is the most flexible type of integration, as it is based on receiving log files to Amazon S3. First, your bucket can receive log files from all kinds of other services, such as CloudTrail, VPC Flow Logs, Redshift, Network Firewall or different types of load balancers (ALB/NLB/ELB). Once the data is in the bucket, a pre-made Lambda function will then transmit it to your Coralogix account.
23 changes: 16 additions & 7 deletions src/combined_event.rs

@@ -4,16 +4,20 @@ use aws_lambda_events::event::sns::SnsEvent;
 use aws_lambda_events::event::sqs::SqsEvent;
 use aws_lambda_events::event::kinesis::KinesisEvent;
 use aws_lambda_events::event::kafka::KafkaEvent;
+use aws_lambda_events::ecr_scan::EcrScanEvent;
 use serde::de::{self, Deserialize, Deserializer};
 use serde_json::Value;
+use tracing::debug;

 #[derive(Debug)]
 pub enum CombinedEvent {
     S3(S3Event),
     Sns(SnsEvent),
     CloudWatchLogs(LogsEvent),
     Sqs(SqsEvent),
     Kinesis(KinesisEvent),
     Kafka(KafkaEvent),
+    EcrScan(EcrScanEvent),
 }

 impl<'de> Deserialize<'de> for CombinedEvent {
@@ -22,37 +26,42 @@ impl<'de> Deserialize<'de> for CombinedEvent {
         D: Deserializer<'de>,
     {
         let raw_value: Value = Deserialize::deserialize(deserializer)?;

+        debug!("raw_value: {:?}", raw_value);
         if let Ok(event) = S3Event::deserialize(&raw_value) {
-            tracing::debug!("s3 event detected");
+            tracing::info!("s3 event detected");
             return Ok(CombinedEvent::S3(event));
         }

         if let Ok(event) = SnsEvent::deserialize(&raw_value) {
-            tracing::debug!("sns event detected");
+            tracing::info!("sns event detected");
             return Ok(CombinedEvent::Sns(event));
         }
+        if let Ok(event) = EcrScanEvent::deserialize(&raw_value) {
+            tracing::info!("ecr scan event detected");
+            return Ok(CombinedEvent::EcrScan(event));
+        }

         if let Ok(event) = LogsEvent::deserialize(&raw_value) {
-            tracing::debug!("cloudwatch event detected");
+            tracing::info!("cloudwatch event detected");
             return Ok(CombinedEvent::CloudWatchLogs(event));
         }

         if let Ok(event) = KinesisEvent::deserialize(&raw_value) {
-            tracing::debug!("kinesis event detected");
+            tracing::info!("kinesis event detected");
             return Ok(CombinedEvent::Kinesis(event));
         }

         if let Ok(event) = SqsEvent::deserialize(&raw_value) {
-            tracing::debug!("sqs event detected");
+            tracing::info!("sqs event detected");
             return Ok(CombinedEvent::Sqs(event));
         }

         // IMPORTANT: kafka must be evaluated last as it uses an arbitrary map to evaluate records.
         // Since all other fields are optional, this map could potentially match any arbitrary JSON
         // and result in empty values.
         if let Ok(event) = KafkaEvent::deserialize(&raw_value) {
-            tracing::debug!("kafka event detected");
+            tracing::info!("kafka event detected");

             // kafka events triggering a lambda function should always have at least one record
             // if not, it is likely an unsupported or bad event
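The ordering constraint called out in the diff (Kafka must be tried last, because its permissive record map can match nearly any JSON) is the crux of this first-match-wins deserializer. Below is a std-only sketch of the same pattern, using naive key probing in place of serde trial deserialization; the probe strings and the trimmed-down enum are illustrative, not the crate's real detection logic (ECR scan events delivered via EventBridge do carry the detail-type "ECR Image Scan").

```rust
// Sketch: ordered trial detection, most specific shapes first.
// The real code attempts a full serde deserialization per event type instead.

#[derive(Debug, PartialEq)]
enum CombinedEvent {
    S3,
    EcrScan,
    Kafka,
}

fn detect(raw: &str) -> Result<CombinedEvent, String> {
    // S3 notifications nest their payload under an "s3" key.
    if raw.contains("\"s3\"") {
        return Ok(CombinedEvent::S3);
    }
    // ECR scan events carry an EventBridge detail-type of "ECR Image Scan".
    if raw.contains("ECR Image Scan") {
        return Ok(CombinedEvent::EcrScan);
    }
    // Kafka last: its record map is permissive enough to match almost anything.
    if raw.contains("\"records\"") {
        return Ok(CombinedEvent::Kafka);
    }
    Err("unrecognized event shape".to_string())
}

fn main() {
    let ecr = r#"{"detail-type":"ECR Image Scan","detail":{"scan-status":"COMPLETE"}}"#;
    assert_eq!(detect(ecr), Ok(CombinedEvent::EcrScan));
    assert_eq!(detect(r#"{"records":{"topic-0":[]}}"#), Ok(CombinedEvent::Kafka));
    assert!(detect("{}").is_err());
}
```

Swapping the Kafka probe to the front would shadow the more specific shapes, which is exactly the failure mode the `IMPORTANT` comment in the real deserializer guards against.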
2 changes: 2 additions & 0 deletions src/config.rs

@@ -37,6 +37,7 @@ pub enum IntegrationType {
     Kinesis,
     CloudFront,
     Kafka,
+    EcrScan,
 }

 impl FromStr for IntegrationType {
@@ -55,6 +56,7 @@ impl FromStr for IntegrationType {
             "CloudFront" => Ok(IntegrationType::CloudFront),
             "MSK" => Ok(IntegrationType::Kafka),
             "Kafka" => Ok(IntegrationType::Kafka),
+            "EcrScan" => Ok(IntegrationType::EcrScan),
             other => Err(format!("Invalid or Unsupported integration type {}", other)),
         }
     }
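The `FromStr` mapping shown in this diff is easy to exercise in isolation. A minimal self-contained sketch (the enum is trimmed to three variants for brevity) showing the MSK/Kafka aliasing and the error branch for unknown names:

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum IntegrationType {
    S3,
    Kafka,
    EcrScan,
}

impl FromStr for IntegrationType {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "S3" => Ok(IntegrationType::S3),
            // Both "MSK" and "Kafka" resolve to the same Kafka integration.
            "MSK" | "Kafka" => Ok(IntegrationType::Kafka),
            "EcrScan" => Ok(IntegrationType::EcrScan),
            other => Err(format!("Invalid or Unsupported integration type {}", other)),
        }
    }
}

fn main() {
    // `parse` dispatches through the FromStr impl above.
    assert_eq!("EcrScan".parse::<IntegrationType>(), Ok(IntegrationType::EcrScan));
    assert_eq!("MSK".parse::<IntegrationType>(), Ok(IntegrationType::Kafka));
    assert!("Nope".parse::<IntegrationType>().is_err());
}
```

Because EcrScan needs no extra configuration (per the README note), adding the variant plus this one match arm is the whole of the config-side change.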
