Introduction: Why Native S3 Notifications Fall Short
If you’ve built event-driven architectures on AWS, you’ve probably used S3 event notifications to trigger Lambda functions when objects land in a bucket. It works — until it doesn’t. The native S3 notification system has hard limitations: you can only route to SNS, SQS, or Lambda; you can’t filter on anything beyond prefix and suffix; and you can’t send the same event type to multiple destinations without awkward workarounds involving SNS fan-out.
Amazon EventBridge integration with S3 changes this entirely. Instead of the limited native notification system, you send S3 events to EventBridge, where you get access to content-based filtering on any field in the event payload, routing to over 20 target types, archive and replay capabilities, and schema discovery. This is how modern event-driven architectures on AWS should work.
Who should read this: You’re an intermediate AWS developer or DevOps engineer who understands S3 basics and has used Lambda or SQS before. You want to build more sophisticated event routing — maybe you need to process images differently than CSVs, route events to multiple services simultaneously, or filter on object size or metadata. This article will get you there with working code and real-world patterns.
Prerequisites
- An AWS account with permissions to manage S3, EventBridge, Lambda, SQS, and IAM
- AWS CLI v2 installed and configured (aws --version should return 2.x)
- Python 3.9+ for Lambda examples
- Basic familiarity with JSON event patterns and IAM policies
How S3 Event Notifications via EventBridge Actually Work
Before diving into code, let’s understand the architecture. When you enable EventBridge notifications on an S3 bucket, every supported S3 event is published to the default event bus in Amazon EventBridge for that AWS account and region. This is a fundamentally different mechanism than native S3 notifications.
Native S3 Notifications vs. EventBridge
| Feature | Native S3 Notifications | S3 via EventBridge |
|---|---|---|
| Targets | Lambda, SQS, SNS only | 20+ targets (Lambda, SQS, SNS, Step Functions, ECS, CodePipeline, API Gateway, etc.) |
| Filtering | Prefix and suffix only | Any field in the event JSON (key, size, metadata, etc.) |
| Multiple rules per event type | No — one destination per event type/prefix/suffix combo | Yes — unlimited rules can match the same event |
| Replay | No | Yes — archive and replay events |
| Schema discovery | No | Yes |
| Delivery latency | Typically seconds | Typically seconds (comparable) |
| Cost | Free (S3 side) | Free (AWS service events to the default bus aren't billed; custom events cost $1.00 per million) |
Key detail: Enabling EventBridge notifications on a bucket does not disable existing native S3 notifications. Both systems can run simultaneously. The EventBridge events are published in CloudEvents-like format under the source aws.s3 with detail-type values like Object Created, Object Deleted, Object Restore Completed, etc.
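These two identifiers — source and detail-type — are what every rule in this article keys off. As a reference point, a minimal pattern that matches every object-level S3 event looks like this (shown as a Python dict, since we'll use boto3 for sketches throughout; the prefix matcher on detail-type is one way to catch all the Object * event types at once):

```python
import json

# Broadest useful S3 pattern: all object-level events from any bucket.
# "prefix" matching works on any string field, including detail-type.
pattern = {
    "source": ["aws.s3"],
    "detail-type": [{"prefix": "Object"}],  # Object Created, Object Deleted, ...
}
print(json.dumps(pattern, indent=2))
```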
The Event Structure
Here’s what an actual S3 event looks like when it arrives in EventBridge:
{
"version": "0",
"id": "a]1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
"detail-type": "Object Created",
"source": "aws.s3",
"account": "123456789012",
"time": "2024-01-15T12:00:00Z",
"region": "us-east-1",
"resources": [
"arn:aws:s3:::my-bucket"
],
"detail": {
"version": "0",
"bucket": {
"name": "my-bucket"
},
"object": {
"key": "uploads/images/photo.jpg",
"size": 1048576,
"etag": "d41d8cd98f00b204e9800998ecf8427e",
"sequencer": "0055AED6DCD90281E5"
},
"request-id": "C3D13FE58DE4C810",
"requester": "123456789012",
"source-ip-address": "203.0.113.15",
"reason": "PutObject"
}
}
Notice the richness here: you get the object key, size, etag, the reason for creation (PutObject, CopyObject, CompleteMultipartUpload), the requester, and even the source IP. All of these fields are available for filtering in EventBridge rules.
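For instance, you could match only uploads made by a particular account from a particular network range. A sketch using boto3 — the rule name and CIDR block below are hypothetical examples, not values from this article's setup:

```python
import boto3
import json

events = boto3.client("events")

# Hypothetical rule: only objects uploaded by this account, from this network.
# The "cidr" matcher is EventBridge's built-in IP-range comparison.
events.put_rule(
    Name="uploads-from-office-network",  # assumed name
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {
            "requester": ["123456789012"],
            "source-ip-address": [{"cidr": "203.0.113.0/24"}],  # example range
        },
    }),
    State="ENABLED",
)
```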
Step 1: Enable EventBridge Notifications on Your S3 Bucket
This is a one-time configuration per bucket. You can do it via the CLI, console, or IaC.
Using AWS CLI
# Enable EventBridge notifications on an existing bucket
aws s3api put-bucket-notification-configuration \
--bucket my-data-bucket \
--notification-configuration '{
"EventBridgeConfiguration": {}
}'
⚠️ Critical warning: The put-bucket-notification-configuration command replaces the entire notification configuration. If you have existing native notifications (Lambda triggers, SQS destinations, etc.), you must include them in the same command or they will be deleted. Always read the current configuration first:
# Get current notification configuration FIRST
aws s3api get-bucket-notification-configuration \
--bucket my-data-bucket
# Then merge EventBridgeConfiguration into the existing JSON
Here’s the safe way to add EventBridge without destroying existing config:
#!/bin/bash
BUCKET="my-data-bucket"
# Get existing config (an unconfigured bucket returns empty output, so default to {})
EXISTING=$(aws s3api get-bucket-notification-configuration --bucket "$BUCKET")
EXISTING=${EXISTING:-'{}'}
# Merge EventBridgeConfiguration into whatever is already there
UPDATED=$(echo "$EXISTING" | jq '. + {"EventBridgeConfiguration": {}}')
# Apply the merged configuration
aws s3api put-bucket-notification-configuration \
  --bucket "$BUCKET" \
  --notification-configuration "$UPDATED"
echo "EventBridge notifications enabled on $BUCKET"
Using CloudFormation / SAM
Resources:
MyDataBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: my-data-bucket
NotificationConfiguration:
EventBridgeConfiguration:
EventBridgeEnabled: true
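Whichever route you choose, it's worth confirming the setting actually took effect. A quick check with boto3, using the same bucket name as above:

```python
import boto3

s3 = boto3.client("s3")

# When EventBridge notifications are enabled, the response contains an
# (empty) EventBridgeConfiguration key alongside any native notifications.
config = s3.get_bucket_notification_configuration(Bucket="my-data-bucket")
print("EventBridge enabled:", "EventBridgeConfiguration" in config)
```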
Step 2: Create EventBridge Rules with Advanced Filtering
This is where the power comes in. Let’s build several real-world filtering scenarios.
Scenario 1: Route Only Large Image Uploads to a Processing Lambda
You want to trigger an image processing Lambda only when files larger than 1 MB land under the uploads/images/ prefix. (Extension filtering with suffix matching comes in Scenario 2, and Scenario 3's wildcard matching lets you combine prefix and extension in a single expression.)
# Create the EventBridge rule with content-based filtering
aws events put-rule \
--name "large-image-uploads" \
--event-pattern '{
"source": ["aws.s3"],
"detail-type": ["Object Created"],
"detail": {
"bucket": {
"name": ["my-data-bucket"]
},
"object": {
"key": [{
"prefix": "uploads/images/"
}],
"size": [{
"numeric": [">", 1048576]
}]
}
}
}' \
--state ENABLED \
--description "Trigger image processing for large image uploads"
Look at what we’re doing here: we’re filtering on detail.object.key using a prefix match, AND on detail.object.size using a numeric comparison. You cannot do either of these with native S3 notifications. The numeric filtering alone — routing based on object size — is a game changer for many workflows.
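You can also dry-run a pattern against a sample event before creating any rule; the TestEventPattern API evaluates the match without touching your account's rules. A sketch using boto3, with a trimmed-down version of the sample event from earlier:

```python
import boto3
import json

events = boto3.client("events")

# TestEventPattern requires the standard envelope fields to be present.
sample_event = {
    "id": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "detail-type": "Object Created",
    "source": "aws.s3",
    "account": "123456789012",
    "time": "2024-01-15T12:00:00Z",
    "region": "us-east-1",
    "resources": ["arn:aws:s3:::my-data-bucket"],
    "detail": {
        "bucket": {"name": "my-data-bucket"},
        "object": {"key": "uploads/images/photo.jpg", "size": 2097152},
    },
}

# The same pattern as the large-image-uploads rule above.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {"name": ["my-data-bucket"]},
        "object": {
            "key": [{"prefix": "uploads/images/"}],
            "size": [{"numeric": [">", 1048576]}],
        },
    },
}

result = events.test_event_pattern(
    EventPattern=json.dumps(pattern),
    Event=json.dumps(sample_event),
)
print("Matches:", result["Result"])  # expect True for this sample
```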
Scenario 2: Filter by File Extension Using Suffix Matching
aws events put-rule \
--name "csv-file-processing" \
--event-pattern '{
"source": ["aws.s3"],
"detail-type": ["Object Created"],
"detail": {
"bucket": {
"name": ["my-data-bucket"]
},
"object": {
"key": [{
"suffix": ".csv"
}]
},
"reason": ["PutObject", "CompleteMultipartUpload"]
}
}' \
--state ENABLED
Notice we also filter on reason — we only want events from direct uploads and multipart uploads, not from CopyObject operations. This prevents duplicate processing when objects are copied within the bucket.
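If you would rather exclude specific operations than enumerate the ones you want, EventBridge also supports anything-but matching. A sketch of the same intent; note the trade-off that this variant automatically admits any other reason values, present or future:

```python
import json

# Alternative to listing PutObject and CompleteMultipartUpload explicitly:
# accept any creation reason except CopyObject.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {"name": ["my-data-bucket"]},
        "object": {"key": [{"suffix": ".csv"}]},
        "reason": [{"anything-but": ["CopyObject"]}],
    },
}
print(json.dumps(pattern, indent=2))
```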
Scenario 3: Wildcard Matching with Dynamic Paths
EventBridge supports wildcard matching. Say your objects follow the pattern data/{customer-id}/reports/*.parquet:
aws events put-rule \
--name "parquet-report-ingestion" \
--event-pattern '{
"source": ["aws.s3"],
"detail-type": ["Object Created"],
"detail": {
"bucket": {
"name": ["my-data-bucket"]
},
"object": {
"key": [{
"wildcard": "data/*/reports/*.parquet"
}]
}
}
}' \
--state ENABLED
Note: Wildcard matching with * in event patterns was introduced in 2023. The * matches any sequence of characters, including none. This is incredibly useful for S3 key patterns that have variable path segments.
Step 3: Add Targets — Multi-Target Routing
A single EventBridge rule can have up to 5 targets. Multiple rules can match the same event, each with their own targets. This is the fan-out capability that makes EventBridge superior to native S3 notifications.
Adding a Lambda Target
First, create a minimal Lambda function for image processing:
import json
import boto3
s3_client = boto3.client('s3')
def handler(event, context):
"""
Process S3 events routed through EventBridge.
Note: The event structure is EventBridge envelope, NOT the native S3 event format.
"""
detail = event['detail']
bucket_name = detail['bucket']['name']
# EventBridge S3 events have the key already decoded (no URL encoding)
object_key = detail['object']['key']
object_size = detail['object']['size']
print(f"Processing: s3://{bucket_name}/{object_key} ({object_size} bytes)")
# Get the object
response = s3_client.get_object(Bucket=bucket_name, Key=object_key)
# Your processing logic here
# e.g., generate thumbnail, extract metadata, etc.
return {
'statusCode': 200,
'body': json.dumps({
'processed': f"s3://{bucket_name}/{object_key}"
})
}
Important: The event structure coming from EventBridge is different from the native S3 event notification format. Don’t try to use existing Lambda handlers that expect event['Records'][0]['s3'] — it won’t work. You need event['detail']['bucket']['name'] and event['detail']['object']['key'].
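If you're migrating and need one handler to serve both delivery paths for a while, here is a minimal sketch that detects the envelope shape and normalizes the difference. The helper name is ours, not an AWS convention:

```python
import urllib.parse

def extract_s3_object(event):
    """Return (bucket, key) from either an EventBridge or native S3 event."""
    if "detail" in event:
        # EventBridge envelope: the key arrives already decoded
        detail = event["detail"]
        return detail["bucket"]["name"], detail["object"]["key"]
    # Native S3 notification: the key is URL-encoded (spaces become '+')
    record = event["Records"][0]["s3"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return record["bucket"]["name"], key
```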
Now add the Lambda function as a target:
# Add Lambda target to the rule
aws events put-targets \
--rule "large-image-uploads" \
--targets '[{
"Id": "image-processor-lambda",
"Arn": "arn:aws:lambda:us-east-1:123456789012:function:image-processor"
}]'
# Grant EventBridge permission to invoke the Lambda function
aws lambda add-permission \
--function-name image-processor \
--statement-id eventbridge-invoke \
--action lambda:InvokeFunction \
--principal events.amazonaws.com \
--source-arn "arn:aws:events:us-east-1:123456789012:rule/large-image-uploads"
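One more option worth knowing at this point: if the function only needs a couple of fields, an input transformer on the target can reshape the event before delivery, so the handler never sees the full envelope. A sketch reusing the rule and function names from above; the InputTemplate placeholder syntax follows the EventBridge docs, where string values are substituted into JSON templates without surrounding quotes:

```python
import boto3

events = boto3.client("events")

# Deliver only the bucket and key to the target instead of the full envelope.
events.put_targets(
    Rule="large-image-uploads",
    Targets=[{
        "Id": "image-processor-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:image-processor",
        "InputTransformer": {
            "InputPathsMap": {
                "bucket": "$.detail.bucket.name",
                "key": "$.detail.object.key",
            },
            # <bucket> and <key> are substituted from InputPathsMap
            "InputTemplate": '{"bucket": <bucket>, "key": <key>}',
        },
    }],
)
```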
Adding Multiple Targets: Lambda + SQS + Step Functions
Here’s where multi-target routing shines. Let’s say for every CSV upload, you want to: (1) trigger a Lambda for quick validation, (2) send to SQS for batch processing, and (3) kick off a Step Functions workflow for full ETL:
# First, create the SQS queue and get its ARN
aws sqs create-queue --queue-name csv-processing-queue
SQS_ARN="arn:aws:sqs:us-east-1:123456789012:csv-processing-queue"
# Add all three targets to the csv-file-processing rule
aws events put-targets \
--rule "csv-file-processing" \
--targets '[
{
"Id": "csv-validator-lambda",
"Arn": "arn:aws:lambda:us-east-1:123456789012:function:csv-validator"
},
{
"Id": "csv-batch-queue",
"Arn": "arn:aws:sqs:us-east-1:123456789012:csv-processing-queue"
},
{
"Id": "csv-etl-workflow",
"Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:csv-etl-pipeline",
"RoleArn": "arn:aws:iam::123456789012:role/EventBridgeStepFunctionsRole"
}
]'
For the SQS target, you need a resource-based policy on the queue allowing EventBridge to send messages; without it, deliveries fail and surface only in the rule's FailedInvocations CloudWatch metric. (The Lambda permission above serves the same purpose for the Lambda target, and the Step Functions target uses RoleArn instead.)
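A minimal sketch of that queue policy using boto3, reusing the queue and rule names from above. Note that setting the Policy attribute replaces any existing queue policy wholesale, so merge if you already have one:

```python
import boto3
import json

sqs = boto3.client("sqs")

queue_url = sqs.get_queue_url(QueueName="csv-processing-queue")["QueueUrl"]

# Allow this specific rule (and only this rule) to send messages to the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowEventBridgeSendMessage",
        "Effect": "Allow",
        "Principal": {"Service": "events.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:csv-processing-queue",
        "Condition": {
            "ArnEquals": {
                "aws:SourceArn": "arn:aws:events:us-east-1:123456789012:rule/csv-file-processing"
            }
        },
    }],
}

sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"Policy": json.dumps(policy)},
)
```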