cds-756 Add DLQ Support to Shipper Lambda (#86)
* cargo

* Feature/ecr tests (#63)

ECR Tests added

* add env variable to control architecture selection in Makefile

* added x86_64 build path to publish workflows

* added CpuArch Parameter and template values to synchronised template job

* fix bucket name

* Feature/cw metadata (#65)

* add loggroup to metadata

* added support for X86

* v1.0.1

* align cargo.toml version with template version

* fix bucket name for dev sync (#70)

* allow sync to run manually (#71)

* Custom Lambda Addition (#74)

* add support for multiple msk topic (#75)

* Cloudwatch subscription update [CDS-1120] (#79)

* update the cloudwatch custom lambda to delete subscription after lambda deletion

* update Policies

* update changelog

* csv custom header added (#76)


---------

Co-authored-by: Concourse <[email protected]>

* added tests for custom metadata and csv header

---------

Co-authored-by: guyrenny <[email protected]>
Co-authored-by: Concourse <[email protected]>

* Cloudwatch integration update custom lambda [CDS-1136] (#80)

* Update cloudwatch custom lambda so the log group is visible as a trigger

* update permission

* remove depend on

* v1.0.3

* align cargo.toml version with template version

* fixed typo

* Cds 756 Add DLQ Support (#85)

* added functionality to handle nested sqs events using recursion

added s3 upload function for events that exhaust the sqs retry limit

* add sqs config values

* modify parameters to handler function to use new clients type

* updated tests to use the new clients type

added tests for dlq flow

* added new dependencies

* removed unused imports

* remove comments

* commented out unused variables

* cargo fmt

* cargo fmt

* added test for CloudWatch failure dlq event and modified s3 dlq event

* added anyhow

* adjusted how s3 object keys are presented for S3 and CloudWatch objects on dlq

* update tests

* added debug prints

updated dlq s3 event workflow to store original event if object is unavailable

* added DLQ support for the DLQ flow itself. Merged all custom resource code into one
parameter-based function and added functionality to the custom resource to allow
it to configure the lambda for DLQ

* cargo.lock

* changelog

* update readme

* added default retry delay

* add location for static files

* updated readme

* removed cds from changelog

---------

Co-authored-by: juan-coralogix <[email protected]>
Co-authored-by: Concourse <[email protected]>
Co-authored-by: juan-coralogix <[email protected]>
Co-authored-by: guyrenny <[email protected]>
5 people authored Apr 23, 2024
1 parent 6f28022 commit a40cdc1
Showing 16 changed files with 3,586 additions and 573 deletions.
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -1,4 +1,8 @@
# Changelog
## v1.0.4 / 2024-04-25
### 💡 Enhancements 💡
- Added support for DLQ

## v1.0.3 / 2024-04-09
### 💡 Enhancements 💡
- Support multiple topics for msk integration
106 changes: 79 additions & 27 deletions Cargo.lock

Some generated files are not rendered by default.

5 changes: 5 additions & 0 deletions Cargo.toml
@@ -31,6 +31,11 @@ temp-env = { version = "0.3.6", features = ["async_closure"] }
aws-smithy-runtime = { version = "1.1.7", features = ["test-util"] }
aws-smithy-types = "1.1.7"
base64 = "0.22.0"
aws-sdk-sqs = "1.18.0"
chrono = "0.4.37"
async-recursion = "1.1.0"
md5 = "0.7.0"
anyhow = "1.0.81"

[dev-dependencies]
pretty_assertions_sorted = "1.2.1"
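
The new `async-recursion` dependency supports the nested-SQS handling mentioned in the commit notes above. Below is a rough, hypothetical sketch of that pattern — the event shape and the `try_parse_nested` helper are invented for illustration, and `tokio` is assumed for the async runtime:

```rust
use async_recursion::async_recursion;

/// Toy stand-in for an SQS event whose record bodies may themselves be
/// serialized SQS events.
struct SqsEvent {
    records: Vec<String>,
}

/// Hypothetical parser: Some(inner) if the body is a nested event,
/// None if it is a plain log payload.
fn try_parse_nested(body: &str) -> Option<SqsEvent> {
    body.strip_prefix("sqs:").map(|rest| SqsEvent {
        records: rest.split('|').map(str::to_owned).collect(),
    })
}

#[async_recursion]
async fn handle_event(event: SqsEvent) {
    for body in event.records {
        match try_parse_nested(&body) {
            // Recurse into the nested event; `#[async_recursion]` boxes the
            // future so a directly recursive `async fn` compiles.
            Some(inner) => handle_event(inner).await,
            None => println!("processing payload: {body}"),
        }
    }
}

#[tokio::main]
async fn main() {
    let event = SqsEvent {
        records: vec!["sqs:hello|world".into(), "plain log line".into()],
    };
    handle_event(event).await;
}
```
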
30 changes: 23 additions & 7 deletions README.md
@@ -111,7 +111,6 @@ If you don’t want to send data directly as it enters S3, you can also use SNS/
| SQSTopicArn | The ARN for the SQS queue that contains the SQS subscription responsible for retrieving logs from Amazon S3. | | |
| CSVDelimiter | Specify a single character to be used as a delimiter when ingesting a CSV file with a header line. This value is applicable when the S3Csv integration type is selected, for example, “,” or ” “. | , | |


### CloudWatch Configuration

Coralogix can be configured to receive data directly from your CloudWatch log group. CloudWatch logs are streamed directly to Coralogix via Lambda. This option does not use S3. You must provide the log group name as a parameter during setup.
@@ -158,9 +157,9 @@ We can receive direct [Kinesis](https://aws.amazon.com/kinesis/) stream data fro

Your Lambda function must be in a VPC that has access to the MSK cluster. You can configure your VPC via the provided [VPC configuration parameters](#vpc-configuration-optional).

| Parameter  | Description                                            | Default Value | Required           |
|------------|--------------------------------------------------------|---------------|--------------------|
| MSKBrokers | Comma-delimited list of MSK brokers to connect to.     |               | :heavy_check_mark: |
| KafkaTopic | Comma-separated list of Kafka topics to subscribe to.  |               | :heavy_check_mark: |

### Generic Configuration (Optional)
@@ -173,7 +172,8 @@ These are optional parameters if you wish to receive notification emails, exclud
| BlockingPattern | Enter a regular expression to identify lines excluded from being sent to Coralogix. For example, use `MainActivity.java:\d{3}` to match log lines with `MainActivity` followed by exactly three digits. | | |
| SamplingRate | Send messages at a specific rate, such as 1 out of every N logs. For example, if your value is 10, a message will be sent for every 10th log (see the sketch after this table). | 1 | :heavy_check_mark: |
| AddMetadata | Add metadata to the log message. Expects comma-separated values. Options for S3 are `bucket_name`,`key_name`. For CloudWatch, use `stream_name`, `loggroup_name`. | | |
| CustomMetadata | Add custom metadata to the log message. Expects comma-separated values in the format key1=value1,key2=value2. | | |

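As a rough illustration of the `SamplingRate` semantics above, here is a minimal sketch of the 1-in-N behaviour — it shows the semantics only and is not the shipper's actual implementation:

```rust
/// Keep every `rate`-th log line; a rate of 1 keeps everything.
fn sample(logs: &[String], rate: usize) -> Vec<&String> {
    assert!(rate >= 1, "SamplingRate must be at least 1");
    logs.iter()
        .enumerate()
        .filter(|(i, _)| (i + 1) % rate == 0)
        .map(|(_, line)| line)
        .collect()
}

fn main() {
    let logs: Vec<String> = (1..=20).map(|i| format!("log {i}")).collect();
    let kept = sample(&logs, 10);
    assert_eq!(kept.len(), 2); // the 10th and 20th lines
    println!("{kept:?}");
}
```
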
### Lambda Configuration (Optional)

These are the default presets for Lambda. Read [Troubleshooting](#troubleshooting) for more information on changing these defaults.
@@ -209,8 +209,24 @@ If you wish to use dynamic values for the Application and Subsystem Name paramet

**S3 folder:** Use the following tag: `{{s3_key.value}}` where the value is the folder level. For example, if the file path that triggers the event is `AWSLogs/112322232/ELB1/elb.log` or `AWSLogs/112322232/ELB2/elb.log` and you want ELB1 and ELB2 to be the subsystem, your `subsystemName` should be `{{s3_key.3}}`

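A minimal sketch of the `{{s3_key.N}}` selection described above, assuming the template simply picks the N-th slash-separated segment of the object key (1-indexed); the shipper's real template handling may differ:

```rust
/// Return the N-th slash-separated segment of an S3 object key (1-indexed).
fn s3_key_segment(key: &str, level: usize) -> Option<&str> {
    key.split('/').nth(level.checked_sub(1)?)
}

fn main() {
    let key = "AWSLogs/112322232/ELB1/elb.log";
    // `{{s3_key.3}}` would resolve to "ELB1", matching the README example.
    assert_eq!(s3_key_segment(key, 3), Some("ELB1"));
}
```
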
**S3Csv Custom Headers:** Add the environment variable `CUSTOM_CSV_HEADER` with the key names. It must use the same delimiter as the CSV file; for example, if the CSV delimiter is ";", your environment variable should be: CUSTOM_CSV_HEADER = name;country;age
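
For illustration, a hedged sketch of how such a header could be split — the function name is hypothetical, and in the Lambda the raw value would come from `std::env::var("CUSTOM_CSV_HEADER")`:

```rust
/// Split a raw CUSTOM_CSV_HEADER value using the same delimiter as the CSV file.
fn parse_custom_header(raw: &str, delimiter: char) -> Vec<String> {
    raw.split(delimiter).map(str::to_owned).collect()
}

fn main() {
    // In the Lambda this raw value would come from the environment variable.
    let raw = "name;country;age";
    assert_eq!(parse_custom_header(raw, ';'), vec!["name", "country", "age"]);
}
```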

### DLQ

A Dead Letter Queue (DLQ) is a queue where messages are sent if they cannot be processed by the Lambda function. This is useful for debugging and monitoring.

The DLQ workflow for the Coralogix AWS Shipper is as follows:

![DLQ Workflow](./static/dlq-workflow.png)

To enable the DLQ, you must provide the required parameters outlined below.

| Parameter | Description | Default Value | Required |
|---------------|-------------------------------------------------------------------------------|---------------|--------------------|
| EnableDLQ | Enable the Dead Letter Queue for the Lambda function. | false | :heavy_check_mark: |
| DLQS3Bucket | An S3 bucket used to store all failure events that have exhausted retries. | | :heavy_check_mark: |
| DLQRetryLimit | The number of times a failed event should be retried before being saved in S3. | 3 | :heavy_check_mark: |
| DLQRetryDelay | The delay in seconds between retries of failed events. | 900 | :heavy_check_mark: |
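
Below is a simplified sketch of the retry/store decision implied by these parameters — the types and names are hypothetical, and the real Lambda drives this flow through the `aws-sdk-sqs` and S3 clients added in this change:

```rust
/// DLQ settings corresponding to the template parameters above.
struct DlqConfig {
    retry_limit: u32, // DLQRetryLimit, default 3
    retry_delay: u32, // DLQRetryDelay in seconds, default 900
}

enum DlqAction {
    /// Re-queue the failed event with a delivery delay.
    Requeue { delay_seconds: u32 },
    /// Retries exhausted: persist the event to the DLQS3Bucket.
    StoreInS3,
}

fn next_action(retry_count: u32, cfg: &DlqConfig) -> DlqAction {
    if retry_count < cfg.retry_limit {
        DlqAction::Requeue { delay_seconds: cfg.retry_delay }
    } else {
        DlqAction::StoreInS3
    }
}

fn main() {
    let cfg = DlqConfig { retry_limit: 3, retry_delay: 900 };
    for attempt in 0..=3 {
        match next_action(attempt, &cfg) {
            DlqAction::Requeue { delay_seconds } => {
                println!("attempt {attempt}: requeue with a {delay_seconds}s delay");
            }
            DlqAction::StoreInS3 => println!("attempt {attempt}: upload to the DLQ S3 bucket"),
        }
    }
}
```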

## Troubleshooting

