diff --git a/docs/source/api_gateway.rst b/docs/source/api_gateway.rst
index 6c6d9cf3..6887c152 100644
--- a/docs/source/api_gateway.rst
+++ b/docs/source/api_gateway.rst
@@ -4,7 +4,7 @@ API Gateway Integration
 Define an HTTP endpoint
 -----------------------
 
-SCAR allows to transparently integrate an HTTP endpoint with a Lambda function. To enable this functionality you only need to define an api name and SCAR will take care of the integration process (before using this feature make sure you have to correct rights set in your aws account).
+SCAR allows you to transparently integrate an HTTP endpoint with a Lambda function via API Gateway. To enable this functionality you only need to define an API name and SCAR will take care of the integration process (before using this feature make sure you have the correct rights set in your AWS account).
 
 The following configuration file creates a generic api endpoint that redirects the http petitions to your lambda function::
 
@@ -37,12 +37,12 @@ SCAR also allows you to make an HTTP request, for that you can use the command `
   scar invoke -f api-cow.yaml
 
 This command automatically creates a `GET` request and passes the petition to the API endpoint defined previously.
-Bear in mind that the timeout for the api gateway requests is 29s, so if the function takes more time to respond, the api will return an error message.
-To launch asynchronous functions you only need to add the `asynch` parameter to the call::
+Bear in mind that the timeout for API Gateway requests is 29s. Therefore, if the function takes more time to respond, the API will return an error message.
+To launch asynchronous functions you only need to add the `-a` parameter to the call::
 
   scar invoke -f api-cow.yaml -a
 
-However, remember that when you launch an asynchronous function throught the API Gateway there is no way to know if the function finishes successfully until you check the function invocation logs.
+When you invoke an asynchronous function through API Gateway, there is no way to know if the function finished successfully until you check the function invocation logs.
 
 POST Request
 ------------
@@ -65,9 +65,9 @@ or::
 
   scar invoke -n scar-cowsay -db /tmp/img.jpg
 
 The file specified after the parameter ``-db`` is codified and passed as the POST body.
-Take into account that the file limitations for request response and asynchronous requests are 6MB and 128KB respectively, as specified in the `AWS lambda documentation `_.
+Take into account that the file limitations for request-response and asynchronous requests are 6MB and 128KB respectively, as specified in the `AWS Lambda documentation `_.
 
-Lastly, You can also submit JSON as the body to the HTTP endpoint with no other configuration, as long Content-Type is application/json. If SCAR sees a JSON body, it will write this body to /tmp/{REQUEST_ID}/api_event.json. Otherwise, it will default the post body to it being a file.
+Lastly, you can also submit JSON as the body of the request to the HTTP endpoint with no other configuration, as long as `Content-Type` is `application/json`. If SCAR detects a JSON body, it will write this body to the file `/tmp/{REQUEST_ID}/api_event.json`. Otherwise, the body will be considered to be a file.
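+
+For reference, the HTTP endpoint can also be called directly with any HTTP client. The following is a minimal sketch using `curl`; the URL is only a placeholder with the generic API Gateway form (use the endpoint created for your own function) and the JSON content is arbitrary::
+
+  curl -X POST \
+       -H "Content-Type: application/json" \
+       -d '{"key1": "value1"}' \
+       https://{api-id}.execute-api.{region}.amazonaws.com/{stage}/{resource}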
 
 This can invoked via the cli::
@@ -77,7 +77,7 @@ This can invoked via the cli::
 Passing parameters in the requests
 ----------------------------------
 
-You can add parameters to the invocations adding the parameters to the configuration file like this::
+You can add parameters to the invocations by adding the `parameters` section to the configuration file, as follows::
 
   cat >> api-cow.yaml << EOF
   functions:
@@ -95,4 +95,3 @@ You can add parameters to the invocations adding the parameters to the configura
 or::
 
   scar invoke -n scar-cowsay -p '{"key1": "value1", "key2": "value3"}'
-
diff --git a/docs/source/batch.rst b/docs/source/batch.rst
index 6e7bcb3f..347a28a9 100644
--- a/docs/source/batch.rst
+++ b/docs/source/batch.rst
@@ -6,9 +6,9 @@ AWS Batch Integration
 Define a job to be executed in batch
 ------------------------------------
 
-SCAR allows to transparently integrate a Batch job execution. To enable this functionality you only need to set the execution mode of the lambda function to one of the two available used to create batch jobs ('lambda-batch' or 'batch') and SCAR will take care of the integration process (before using this feature make sure you have the correct rights set in your aws account).
+SCAR allows you to transparently integrate the execution of jobs through `AWS Batch `_. To enable this functionality you only need to set the execution mode of the Lambda function to one of the two modes used to create Batch jobs ('lambda-batch' or 'batch') and SCAR will take care of the integration process (before using this feature make sure you have the correct rights set in your AWS account).
 
-The following configuration file defines a lambda function that creates a batch job (the required script can be found in `mrbayes-sample-run.sh `_)::
+The following configuration file defines a Lambda function that creates an AWS Batch job (the required script can be found in `mrbayes-sample-run.sh `_)::
 
   cat >> scar-mrbayes-batch.yaml << EOF
   functions:
@@ -24,20 +24,20 @@ The following configuration file defines a lambda function that creates a batch
 
   scar init -f scar-mrbayes-batch.yaml
 
-Combine lambda and batch executions
------------------------------------
+Combine AWS Lambda and AWS Batch executions
+-------------------------------------------
 
-As explained in section :doc:`/prog_model`, if you define an output bucket as the input bucket of another function, a workflow can be created.
-By doing this, batch and lambda executions can be combined through S3 events.
+As explained in the section :doc:`/prog_model`, if you define an output bucket as the input bucket of another function, a workflow can be created.
+By doing this, AWS Batch and AWS Lambda executions can be combined through S3 events.
 
-An example of this execution can be found in the `video process example `_
+An example of this execution can be found in the `video process example `_.
 
 Limits
 ------
 
-When defining a Batch job have in mind that the `Batch service `_ has some limits that are lower than the `Lambda service `_.
+When defining an AWS Batch job bear in mind that the `AWS Batch service `_ has some limits that are lower than those of the `Lambda service `_.
 For example, the Batch Job definition size is limited to 24KB and the invocation payload in Lambda is limited to 6MB in synchronous calls and 128KB in asynchronous calls.
-To create the Batch job, the Lambda function defines a Job with the payload content included, and sometimes (i.e. when the script passed as payload is greater than 24KB) the Batch Job definition can fail.
+To create the AWS Batch job, the Lambda function defines a Job with the payload content included, and sometimes (i.e. when the script passed as payload is greater than 24KB) the Batch Job definition can fail.
 The payload limit can be avoided by redefining the script used and passing the large payload files using other service (e.g S3 or some bash command like 'wget' or 'curl' to download the information in execution time).
diff --git a/docs/source/scar_container.rst b/docs/source/scar_container.rst
index 7e4eabac..13d9773c 100644
--- a/docs/source/scar_container.rst
+++ b/docs/source/scar_container.rst
@@ -1,7 +1,7 @@
-Scar container
+SCAR container
 ==============
 
-Other option to use SCAR is to create the container with the binaries included or to use the already available image with the packaged binaries installed from `grycap/scar `_. Either you want to build the images from scratch or you want to use the already available image you will need `docker `_ installed in your machine.
+Another option to use SCAR is to create the container with the binaries included or to use the already available image with the packaged binaries installed from `grycap/scar `_. Whether you want to build the image from scratch or use the already available image, you will need `Docker `_ installed on your machine.
 
 Building the SCAR image
 ^^^^^^^^^^^^^^^^^^^^^^^
@@ -14,7 +14,7 @@ This command creates a scar image in your docker repository that can be launched
 
   docker run -it -v $AWS_CREDENTIALS_FOLDER:/home/scar/.aws -v $SCAR_CONFIG_FOLDER:/home/scar/.scar scar
 
-With the previous command we tell docker to mount the SCAR required folders (`~/.aws` and `~/.scar`) in the paths expected by the binary.
+With the previous command we tell Docker to mount the folders required by SCAR (`~/.aws` and `~/.scar`) in the paths expected by the binary.
 Launching the container with the command described above also allow us to have different configuration folders wherever we want in our host machine.
 Once we are inside the container you can execute SCAR like another system binary::
 
diff --git a/examples/video-process/README.md b/examples/video-process/README.md
index e49e47e8..85996e95 100644
--- a/examples/video-process/README.md
+++ b/examples/video-process/README.md
@@ -2,15 +2,15 @@
 
 In this example we are going to process an input video. The video is going to be split in different images and then those images are going to be analyzed by a neural network.
 
-Two different functions are defined to do this work: first, a function that creates a batch job that splits the video and stores it in S3; second, a lambda function that process each image and stores the result also in S3.
+Two different Lambda functions are defined to do this work: first, a function that creates an AWS Batch job that splits the video and stores it in S3; second, a Lambda function that processes each image and stores the result also in Amazon S3.
 
-The two different configuration files can be found in this folder. the file 'scar-batch-ffmpeg-split.yaml' defines a function that creates a batch job and the file 'scar-lambda-darknet.yaml' defines a functions that analyzes the images created.
+The two different configuration files can be found in this folder. The file 'scar-batch-ffmpeg-split.yaml' defines a function that creates a Batch job and the file 'scar-lambda-darknet.yaml' defines a function that analyzes the images created.
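+
+The workflow between the two functions is created by making the output S3 path of the first function the input S3 path of the second one. The snippet below is only a conceptual sketch of that chaining; the keys and paths are illustrative, not the exact SCAR configuration syntax (see the two YAML files in this folder for the real definitions):
+
+```yaml
+# scar-batch-ffmpeg-split.yaml (sketch): the Batch job writes the extracted images here
+output: scar-ffmpeg/scar-ffmpeg-split/images
+
+# scar-lambda-darknet.yaml (sketch): the Lambda function reads its input from the same path
+input: scar-ffmpeg/scar-ffmpeg-split/images
+```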
-More information about the batch integration can be found in the [official documentation](https://scar.readthedocs.io/en/latest/batch.html).
+More information about the AWS Batch integration can be found in the [official documentation](https://scar.readthedocs.io/en/latest/batch.html).
 
 ## Create the infrastructure
 
-To create the infrastructure you only need to execute two commands:
+To create the functions you only need to execute two commands:
 
 ```sh
 scar init -f scar-batch-ffmpeg-split.yaml
@@ -21,7 +21,7 @@ scar init -f scar-lambda-darknet.yaml
 
 ## Launch the execution
 
-The to launch an execution you have to upload a file to the defined input bucket of the batch function, in this case the following command will start the execution:
+In order to launch an execution you have to upload a file to the defined input bucket of the Lambda function that creates the AWS Batch job. In this case, the following command will start the execution:
 
 ```sh
 scar put -b scar-ffmpeg -bf scar-ffmpeg-split/input -p ../ffmpeg/seq1.avi
@@ -29,7 +29,7 @@ scar put -b scar-ffmpeg -bf scar-ffmpeg-split/input -p ../ffmpeg/seq1.avi
 
 ## Process the output
 
-When the execution of the function finishes, the script used produces two output files for each lambda invocation. SCAR copies them to the S3 bucket specified as output. To check if the files are created and copied correctly you can use the command:
+When the execution of the function finishes, the script used produces two output files for each Lambda invocation. SCAR copies them to the S3 bucket specified as output. To check if the files are created and copied correctly you can use the command:
 
 ```sh
 scar ls -b scar-ffmpeg -bf scar-ffmpeg-split/image-output
@@ -58,7 +58,7 @@ scar get -b scar-ffmpeg -bf scar-batch-ffmpeg-split/image-output -p /tmp/lambda/
 
 This command creates and `image-output` folder and all the subfolders in the `/tmp/lambda/` folder
 
-## Delete the infrastructure
+## Delete the Lambda functions
 
 Don't forget to delete the functions when you finish your testing: