How to Configure AWS Lambda?
AWS Lambda is a responsive cloud service that watches for activity within an application and responds by running code, known as functions, that users have defined. The service automatically provisions and manages compute resources across multiple Availability Zones and scales whenever new actions are triggered. AWS Lambda supports languages such as Java, Python, and Node.js for writing functions, and it can also launch processes in other languages supported by Amazon Linux (such as Bash, Go, and Ruby).
The Anatomy of the Lambda Console:
When you open a function in the AWS Lambda console, you are presented with a dashboard that serves as your central hub for configuration. We will explore the most critical sections: Core Function Configuration, Triggers, and Destinations.
Core Function Configuration (The "General configuration" Tab)
These are the most fundamental settings that directly impact your function's performance, cost, and security. You can find them by navigating to the "Configuration" tab and then "General configuration".
1. Memory (RAM)
- What it is: The amount of RAM, in megabytes, allocated to your function's execution environment. You can set this from 128 MB to 10,240 MB.
- Why it matters: This is the primary lever for performance tuning. Lambda allocates CPU power proportionally to the amount of memory you configure. A function with 512 MB of memory will have roughly twice the CPU power of one with 256 MB. For CPU-bound tasks, increasing memory is the best way to decrease execution time.
- Best Practice: The default of 128 MB is fine for simple tasks, but for any real workload, you should test different memory settings. Use tools like the AWS Lambda Power Tuning tool to automatically find the most cost-effective memory allocation for your specific function.
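Because Lambda bills by memory-size multiplied by duration, doubling memory while halving a CPU-bound function's runtime can leave cost roughly flat. A minimal sketch of that arithmetic, using an assumed illustrative per-GB-second price (not a current AWS rate):

```python
# Sketch: estimating Lambda compute cost per invocation at different
# memory settings. The price below is an illustrative assumption.
PRICE_PER_GB_SECOND = 0.0000166667  # assumed rate, check current pricing

def cost_per_invocation(memory_mb, duration_ms):
    """Compute cost = allocated GB x billed seconds x price."""
    gigabytes = memory_mb / 1024
    seconds = duration_ms / 1000
    return gigabytes * seconds * PRICE_PER_GB_SECOND

# For CPU-bound work, doubling memory often roughly halves duration,
# so cost stays about the same while latency improves:
slow = cost_per_invocation(256, 1000)  # 256 MB, 1 second
fast = cost_per_invocation(512, 500)   # 512 MB, 0.5 seconds
print(slow, fast)
```

This is why tools like AWS Lambda Power Tuning are worth running: the cheapest configuration is often not the smallest one.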
2. Timeout
- What it is: The maximum amount of time your function is allowed to run per execution, up to a maximum of 900 seconds (15 minutes).
- Why it matters: This is a critical safety mechanism. It prevents a function with a bug (e.g., an infinite loop) from running indefinitely and incurring huge costs. The timeout should be set just long enough to accommodate your function's longest expected execution time, including any potential "cold start" delays.
- Best Practice: Set a realistic timeout. For a function that typically completes in about 500 ms, the 3-second default is a good starting point. Avoid setting it to the 15-minute maximum unless absolutely necessary.
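Inside a handler, the context object's real `get_remaining_time_in_millis()` method lets you stop gracefully before the timeout kills the invocation. A sketch, using a hypothetical `FakeContext` stand-in so it runs locally:

```python
# Sketch: a handler that checks remaining time and stops early instead of
# being killed by the timeout. FakeContext is a local stand-in for the
# real Lambda context object, which exposes the same method.
class FakeContext:
    def __init__(self, remaining_ms):
        self._remaining_ms = remaining_ms

    def get_remaining_time_in_millis(self):
        return self._remaining_ms

def lambda_handler(event, context):
    items = event.get("items", [])
    processed = []
    for item in items:
        # Keep a 1-second safety margin so we can return cleanly.
        if context.get_remaining_time_in_millis() < 1000:
            break
        processed.append(item * 2)
    return {"processed": processed, "done": len(processed) == len(items)}

# Local invocation with plenty of simulated time remaining:
print(lambda_handler({"items": [1, 2, 3]}, FakeContext(5000)))
```

A handler that returns partial progress like this pairs well with a retry mechanism or an SQS queue, so unprocessed items are not silently lost.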
3. Execution Role
- What it is: The IAM Role that grants your function the permissions it needs to interact with other AWS services.
- Why it matters: This is the core of Lambda security. A function with a basic role can only write logs to Amazon CloudWatch. To read a file from S3 or write an item to DynamoDB, you must explicitly add those permissions to its execution role.
- Best Practice: Always follow the Principle of Least Privilege. Create a unique, specific role for each Lambda function with only the permissions it absolutely needs to perform its task. Avoid using overly permissive, general-purpose roles.
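A least-privilege policy for the S3-and-DynamoDB example above might look like the following sketch; the bucket name, table name, and account ID are placeholders:

```python
import json

# Sketch of a least-privilege IAM policy: this function may only read
# objects from one bucket and write items to one table. All resource
# names below are illustrative placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-input-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-table",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

Note that the actions are scoped to specific resources rather than `"Resource": "*"`; that narrowing is the heart of least privilege.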
Connecting Your Function: Triggers, Destinations, and Layers
Triggers
- What it is: The AWS service or event source that invokes your function. This is what makes your function "event-driven."
- How it works: You can configure a trigger from the console by clicking "Add trigger". You then select the source, such as an API Gateway endpoint, an S3 bucket, or an SQS queue. When an event occurs in that source (e.g., an HTTP request arrives, a file is uploaded), it will automatically invoke your function and pass the event data to it.
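For an S3 trigger, the event passed to your function contains a list of records describing what happened. A simplified sketch of a handler and a minimal sample event (the real S3 notification carries many more fields):

```python
# Sketch: a handler for an S3 "object created" trigger. The sample event
# below is a trimmed-down version of the notification format S3 sends.
def lambda_handler(event, context):
    uploaded = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        uploaded.append(f"{bucket}/{key}")
    return {"uploaded": uploaded}

# Local invocation with a minimal sample event:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"}, "object": {"key": "photo.jpg"}}}
    ]
}
print(lambda_handler(sample_event, None))
```

Invoking the handler locally with a hand-built event like this is a quick way to test trigger-handling logic before wiring up the real event source.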
Destinations
- What it is: An advanced feature that allows you to route the results of a function's execution (on success or on failure) to a downstream AWS service.
- Why it matters: This is incredibly useful for building resilient, asynchronous workflows. A common pattern is to set up an SQS queue or SNS topic as a destination for failed invocations. This creates a dead-letter queue (DLQ) where you can store and later analyze events that your function failed to process.
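When a failed asynchronous invocation is routed to an on-failure destination, Lambda wraps the original event in an envelope that records why it failed. A simplified sketch of inspecting such a record (the envelope shape here is trimmed for illustration):

```python
# Sketch: summarizing a failed-invocation record delivered to an
# on-failure destination (e.g., an SQS queue acting as a DLQ).
# The envelope below is a simplified illustration of the real format.
def summarize_failure(record):
    ctx = record.get("requestContext", {})
    return {
        "reason": ctx.get("condition"),
        "original_event": record.get("requestPayload"),
    }

failed = {
    "requestContext": {"condition": "RetriesExhausted"},
    "requestPayload": {"order_id": 42},
}
print(summarize_failure(failed))
```

Keeping the original payload in the envelope is what makes later replay or analysis of failed events possible.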
Layers
- What it is: A way to package and share libraries, dependencies (e.g., the pandas or requests libraries in Python), or even custom runtimes across multiple Lambda functions.
- Why it matters: Layers help keep your function's deployment package small and make dependency management much cleaner. Instead of bundling the same large libraries with every function, you can place them in a Layer and attach it to any function that needs it.
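A Python layer zip must place its code under a top-level python/ directory, since Lambda extracts layers to /opt and /opt/python is on the import path. A minimal sketch of building such a zip in memory:

```python
import io
import zipfile

# Sketch: building a Lambda layer zip for Python. Packages must live
# under "python/" inside the archive so they end up on the import path.
def build_layer_zip(files):
    """files maps module filenames to their source code strings."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, source in files.items():
            zf.writestr(f"python/{name}", source)
    return buf.getvalue()

layer = build_layer_zip({"helpers.py": "def greet():\n    return 'hi'\n"})
with zipfile.ZipFile(io.BytesIO(layer)) as zf:
    print(zf.namelist())
```

In practice you would zip an installed package directory (e.g., the output of `pip install -t python/ requests`) rather than individual source strings, but the layout requirement is the same.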
Configuring AWS Lambda
To configure AWS Lambda for the first time, follow the steps below:
Step 1: Sign In to your AWS Account.
Step 2: Select Lambda from the services section of the homepage.

Step 3: Click on "Create a function" and, at the basic level, choose to use a blueprint. A blueprint is simply sample code for the most common use cases that can be used directly.

Step 4: For the basic level, just search for a blueprint named hello-world-python. Select it and fill in the necessary details, which is generally just the function name. Hit the "Create function" button and your Lambda function is ready to use.
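The hello-world-python blueprint's handler is roughly the following: it logs the values of three keys from the incoming event and returns one of them. This sketch can be invoked locally with a plain dictionary standing in for the event:

```python
# Sketch of what the hello-world-python blueprint's handler roughly
# looks like: it echoes keys from the incoming event.
def lambda_handler(event, context):
    print("value1 = " + event["key1"])
    print("value2 = " + event["key2"])
    print("value3 = " + event["key3"])
    return event["key1"]

# Local invocation with a sample event like the console's test template:
result = lambda_handler({"key1": "v1", "key2": "v2", "key3": "v3"}, None)
print(result)
```

The `event` parameter carries whatever the trigger passes in, and `context` exposes runtime metadata; here `None` suffices because this handler never touches it.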


Step 5: Open the created function's page.

Now add a trigger that will invoke the function. The trigger can be any supported AWS service, such as the Alexa Skills Kit or DynamoDB, or a partner event source. For example, add a DynamoDB table to it. The function will then run whenever new entries are added to the table or existing ones are modified or deleted.
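With a DynamoDB trigger, each record in the event carries an eventName of INSERT, MODIFY, or REMOVE. A simplified sketch of a handler that tallies them, invoked locally with a trimmed-down sample event:

```python
# Sketch: a handler for a DynamoDB Streams trigger. Each record's
# eventName tells you whether an item was inserted, modified, or removed.
# The sample event is simplified; real records also carry the item data.
def lambda_handler(event, context):
    counts = {"INSERT": 0, "MODIFY": 0, "REMOVE": 0}
    for record in event.get("Records", []):
        name = record.get("eventName")
        if name in counts:
            counts[name] += 1
    return counts

sample_event = {
    "Records": [
        {"eventName": "INSERT"},
        {"eventName": "MODIFY"},
        {"eventName": "INSERT"},
    ]
}
print(lambda_handler(sample_event, None))
```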
Step 6: To view the function's logs, select the Monitoring tab on the Lambda page, then click "View logs in CloudWatch".

Benefits of AWS Lambda:
- There is no need to register Lambda tasks the way Amazon SWF activities must be registered.
- Existing Lambda functions can be reused in workflows.
- Lambda functions are called directly by Amazon SWF, so there is no need to design a program to implement and execute them.
- Lambda provides the metrics and logs for tracking function executions.
Limits in AWS Lambda: There are three types of Lambda limits:
1. Resources Limit:
| Resource | Default Limit |
|---|---|
| Ephemeral disk capacity ("/tmp" space) | 512 MB |
| Number of file descriptors | 1024 |
| Number of processes and threads (combined total) | 1024 |
| Maximum execution duration per request | 900 sec (15 minutes) |
| Invoke request body payload size | 6 MB |
| Invoke response body payload size | 6 MB |
2. Service Limits:
| Item | Default Limit |
|---|---|
| Lambda function deployment package size (.zip/.jar file) | 50 MB |
| Size of code/dependencies that you can zip into a deployment package (uncompressed zip/jar size) | 250 MB |
| Total size of all the deployment packages that can be uploaded per region | 1.5 GB |
| Number of unique event sources of the Scheduled Event source type per account | 50 |
| Number of unique Lambda functions you can connect to each Scheduled Event | 5 |
3. Throttle Limit: The default throttle limit is 100 concurrent Lambda function executions per account, applied to the total concurrent executions across all functions within the same region. The number of concurrent executions for a function can be estimated with the formula:
concurrent executions = (average duration of the function execution, in seconds)
X (number of requests or events processed per second)
When the throttle limit is reached, Lambda returns a throttling error with HTTP status code 429. After about 15 to 30 minutes, the user can start working again. The throttle limit can be increased by contacting the AWS Support Center.
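The concurrency estimate above can be sketched numerically; the function name here is illustrative:

```python
# Sketch of the concurrency estimate: concurrent executions roughly
# equal average duration (seconds) multiplied by request rate (per second).
def concurrent_executions(avg_duration_s, requests_per_s):
    return avg_duration_s * requests_per_s

# e.g., a function averaging 0.5 s at 100 requests per second:
print(concurrent_executions(0.5, 100))
```

At 0.5 s average duration and 100 requests per second, about 50 executions are in flight at once, which is why shaving duration is as effective as raising the limit for staying under the throttle.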