Introduction: Why S3 Native Notifications Aren’t Enough Anymore
If you’ve worked with S3 event notifications for any meaningful amount of time, you’ve hit the wall. The native S3 notification system — which can send events to SNS, SQS, or Lambda — works fine for simple use cases. But the moment you need to route events based on object metadata, send the same event to multiple targets conditionally, or filter on anything beyond a prefix/suffix, you’re stuck writing workaround logic inside Lambda functions.
Amazon EventBridge integration with S3 changes this entirely. Instead of being limited to prefix and suffix filters, you get EventBridge’s full content-based filtering engine — matching on object size, metadata keys, request parameters, and more. Instead of a single destination per notification configuration, you get the ability to create multiple independent rules, each with up to five targets.
Who should read this: You’re a cloud engineer, backend developer, or DevOps practitioner who already understands S3 basics and has some experience with event-driven architectures on AWS. You want to move beyond the limitations of native S3 notifications without building custom routing logic.
Prerequisites
- An AWS account with permissions to manage S3, EventBridge, Lambda, SQS, and IAM
- AWS CLI v2 installed and configured (aws --version should return 2.x)
- Basic familiarity with JSON event patterns and IAM policies
- Python 3.9+ if you want to run the Lambda examples
Native S3 Notifications vs. EventBridge: Understanding the Architectural Difference
Before we dive into implementation, let’s be clear about what changes architecturally when you enable EventBridge on an S3 bucket.
Native S3 Notifications (Traditional)
With native notifications, S3 directly delivers events to SNS topics, SQS queues, or Lambda functions. You configure these in the bucket’s notification configuration. The filtering is limited to:
- Event type (e.g., s3:ObjectCreated:Put, s3:ObjectRemoved:*)
- Key prefix (e.g., uploads/)
- Key suffix (e.g., .jpg)
That’s it. You can’t filter on object size, requester, source IP, or any other attribute. And critically, you cannot have overlapping prefix/suffix filters that route to different destinations — S3 will reject the configuration.
S3 to EventBridge
When you enable EventBridge notifications on a bucket, S3 sends all supported events to the default event bus in the same region. You then create EventBridge rules with event patterns that filter on any field in the event payload. Each rule can have its own targets. Rules are completely independent — overlapping filters are not only allowed, they’re the whole point.
Important: enabling EventBridge does not disable your existing native S3 notification configurations. Both systems work in parallel.
| Feature | Native S3 Notifications | S3 + EventBridge |
|---|---|---|
| Filter on event type | Yes | Yes |
| Filter on key prefix/suffix | Yes | Yes |
| Filter on object size | No | Yes |
| Filter on requester | No | Yes |
| Multiple targets per event | No (1 destination per config) | Yes (up to 5 targets per rule, many independent rules) |
| Overlapping filters | Not allowed | Allowed |
| Delivery latency | Seconds (typically) | Slightly higher (extra hop through EventBridge) |
| Retry/DLQ support | Limited | Built-in retry policy + DLQ per target |
Step 1: Enable EventBridge Notifications on Your S3 Bucket
This is the foundational step. You need to explicitly opt in to sending events to EventBridge for each bucket.
Using the AWS CLI
# Enable EventBridge notifications on an existing bucket
aws s3api put-bucket-notification-configuration \
--bucket my-data-bucket \
--notification-configuration '{
"EventBridgeConfiguration": {}
}'
Critical warning: The put-bucket-notification-configuration API replaces the entire notification configuration. If you already have SNS/SQS/Lambda notification configurations on the bucket and you run the command above, you will delete them. You must include your existing configuration in the request.
Here’s how to do it safely:
# First, get the existing notification configuration
aws s3api get-bucket-notification-configuration \
--bucket my-data-bucket > current-config.json
# Inspect it, then add the EventBridgeConfiguration key
# For example, if current-config.json has existing LambdaFunctionConfigurations,
# you'd add "EventBridgeConfiguration": {} alongside them.
cat current-config.json
# Edit the file to add "EventBridgeConfiguration": {}
# Then apply the merged configuration
aws s3api put-bucket-notification-configuration \
--bucket my-data-bucket \
--notification-configuration file://current-config.json
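If you'd rather script this merge than edit the file by hand, here's a minimal boto3 sketch. The bucket name is a placeholder, and merge_eventbridge is a helper introduced here; it assumes the standard get_bucket_notification_configuration response shape (which includes ResponseMetadata that must be stripped before writing back):

```python
def merge_eventbridge(config: dict) -> dict:
    """Return a copy of a get_bucket_notification_configuration response
    with EventBridgeConfiguration enabled and response metadata stripped."""
    merged = {k: v for k, v in config.items() if k != "ResponseMetadata"}
    merged["EventBridgeConfiguration"] = {}  # an empty object means "enabled"
    return merged

def enable_eventbridge(bucket: str) -> None:
    """Read the bucket's current notification config, merge, and write back."""
    import boto3  # deferred import; requires credentials with s3:Get/PutBucketNotification
    s3 = boto3.client("s3")
    current = s3.get_bucket_notification_configuration(Bucket=bucket)
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=merge_eventbridge(current),
    )

# Example usage (placeholder bucket name):
# enable_eventbridge("my-data-bucket")
```

Because the merge is a pure function, you can unit-test it without touching AWS, which is harder to do with the shell workflow above.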
Using CloudFormation / SAM
# In your CloudFormation template (YAML):
Resources:
MyDataBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: my-data-bucket
NotificationConfiguration:
EventBridgeConfiguration:
EventBridgeEnabled: true
Once enabled, S3 will start sending events to the default EventBridge event bus in your account and region. You won’t see any events being consumed yet — you need rules for that.
Step 2: Understanding the S3 Event Structure in EventBridge
To write effective filter rules, you need to know exactly what the event looks like. Here’s a real event structure for an Object Created event as delivered by S3 to EventBridge:
{
"version": "0",
"id": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
"detail-type": "Object Created",
"source": "aws.s3",
"account": "123456789012",
"time": "2024-01-15T12:34:56Z",
"region": "us-east-1",
"resources": [
"arn:aws:s3:::my-data-bucket"
],
"detail": {
"version": "0",
"bucket": {
"name": "my-data-bucket"
},
"object": {
"key": "uploads/images/photo.jpg",
"size": 1048576,
"etag": "d41d8cd98f00b204e9800998ecf8427e",
"sequencer": "0055AED6DCD90281E5"
},
"request-id": "C3D13FE58DE4C810",
"requester": "123456789012",
"source-ip-address": "203.0.113.15",
"reason": "PutObject"
}
}
Key fields you can filter on:
- detail-type — “Object Created”, “Object Deleted”, “Object Restore Initiated”, “Object Restore Completed”, “Object Restore Expired”, “Object Tags Added”, “Object Tags Deleted”, “Object ACL Updated”, “Object Storage Class Changed”, etc.
- detail.bucket.name — The bucket name
- detail.object.key — The full object key
- detail.object.size — Object size in bytes (numeric, so you can use numeric matching!)
- detail.reason — “PutObject”, “CopyObject”, “CompleteMultipartUpload”, “DeleteObject”, “Lifecycle”, etc.
- detail.source-ip-address — The requester’s IP address
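It helps to internalize the matching semantics before writing patterns: matchers inside an array OR together, separate fields AND together, and nested objects recurse. The sketch below is a local approximation of that logic for experimenting with patterns offline; it is not AWS's implementation, and it only covers exact values plus the prefix, suffix, wildcard, and numeric matchers used in this article:

```python
import fnmatch
import operator

# Comparison operators allowed in EventBridge "numeric" matchers.
_NUM_OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
            "<=": operator.le, "=": operator.eq}

def _match_value(matcher, value):
    """Test one matcher from a pattern array against one event value."""
    if isinstance(matcher, dict):
        if "prefix" in matcher:
            return isinstance(value, str) and value.startswith(matcher["prefix"])
        if "suffix" in matcher:
            return isinstance(value, str) and value.endswith(matcher["suffix"])
        if "wildcard" in matcher:
            return isinstance(value, str) and fnmatch.fnmatchcase(value, matcher["wildcard"])
        if "numeric" in matcher:
            # e.g. [">", 5242880] or [">", 0, "<=", 100] (pairs of op, bound)
            ops = matcher["numeric"]
            return isinstance(value, (int, float)) and all(
                _NUM_OPS[op](value, bound)
                for op, bound in zip(ops[::2], ops[1::2]))
        return False
    return matcher == value  # bare value: exact match

def matches(pattern: dict, event: dict) -> bool:
    """Fields AND together; matchers inside an array OR together."""
    for field, matchers in pattern.items():
        if isinstance(matchers, dict):  # nested object: recurse into the event
            if not isinstance(event.get(field), dict) or not matches(matchers, event[field]):
                return False
        else:                           # array of matchers: any one may match
            value = event.get(field)
            if not any(_match_value(m, value) for m in matchers):
                return False
    return True
```

Feeding the sample event from above through a candidate pattern with this helper is a quick sanity check before you run aws events put-rule.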
Step 3: Creating EventBridge Rules with Advanced Filtering
Now we get to the powerful part. Let’s build several real-world rules.
Rule 1: Route large image uploads to a processing queue
This rule matches only JPEG/PNG files larger than 5 MB uploaded to the uploads/ prefix.
# Create the event pattern file
cat > large-image-pattern.json << 'EOF'
{
"source": ["aws.s3"],
"detail-type": ["Object Created"],
"detail": {
"bucket": {
"name": ["my-data-bucket"]
},
"object": {
"key": [{
"prefix": "uploads/"
}, {
"wildcard": "*.jpg"
}],
"size": [{
"numeric": [">", 5242880]
}]
}
}
}
EOF
# Create the rule
aws events put-rule \
--name "large-image-uploads" \
--event-pattern file://large-image-pattern.json \
--state ENABLED \
--description "Matches JPEG uploads larger than 5MB in the uploads prefix"
Note on key filtering: When you specify multiple matchers in an array for the same field, they combine with OR logic, while conditions on different fields combine with AND logic. That means the pattern above matches keys that start with uploads/ OR end in .jpg, which is broader than intended. To require both a prefix AND a suffix on the key, use a single wildcard matcher, e.g., "wildcard": "uploads/*.jpg".
Here is the corrected pattern with proper AND behavior on the key:
cat > large-image-pattern-v2.json << 'EOF'
{
"source": ["aws.s3"],
"detail-type": ["Object Created"],
"detail": {
"bucket": {
"name": ["my-data-bucket"]
},
"object": {
"key": [{
"wildcard": "uploads/*.jpg"
}],
"size": [{
"numeric": [">", 5242880]
}]
}
}
}
EOF
aws events put-rule \
--name "large-image-uploads" \
--event-pattern file://large-image-pattern-v2.json \
--state ENABLED
Rule 2: Audit all deletions from any source
cat > deletion-audit-pattern.json << 'EOF'
{
"source": ["aws.s3"],
"detail-type": ["Object Deleted"],
"detail": {
"bucket": {
"name": ["my-data-bucket"]
}
}
}
EOF
aws events put-rule \
--name "s3-deletion-audit" \
--event-pattern file://deletion-audit-pattern.json \
--state ENABLED
Adding Targets to Rules
Now, let's attach targets. We'll start with an SQS queue as the target for the large-image rule:
# Create the SQS queue
aws sqs create-queue --queue-name large-image-processing
# Get the queue ARN
QUEUE_ARN=$(aws sqs get-queue-attributes \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/large-image-processing \
--attribute-names QueueArn \
--query 'Attributes.QueueArn' \
--output text)
# Add the target to the EventBridge rule
aws events put-targets \
--rule "large-image-uploads" \
--targets "Id"="sqs-target","Arn"="$QUEUE_ARN"
Don't forget the resource-based policy on SQS! EventBridge needs permission to send messages to the queue:
cat > sqs-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowEventBridgeSendMessage",
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sqs:SendMessage",
"Resource": "$QUEUE_ARN",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "arn:aws:events:us-east-1:123456789012:rule/large-image-uploads"
}
}
}
]
}
EOF
# Pass the policy as an escaped JSON string inside the attribute map.
# (The Key=Value shorthand breaks on the commas inside the policy JSON,
# so build the --attributes argument as JSON instead.)
aws sqs set-queue-attributes \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/large-image-processing \
--attributes "{\"Policy\": $(jq -c . sqs-policy.json | jq -R .)}"
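On the consumer side, EventBridge delivers the full matched event as JSON in the SQS message body (assuming no input transformer on the target). Here's a minimal polling-worker sketch; the queue URL is a placeholder and parse_s3_event is a helper introduced here:

```python
import json

def parse_s3_event(body: str) -> dict:
    """Extract bucket/key/size from an EventBridge S3 event in an SQS message body."""
    event = json.loads(body)
    detail = event["detail"]
    return {"bucket": detail["bucket"]["name"],
            "key": detail["object"]["key"],
            "size": detail["object"].get("size")}

def drain_queue(queue_url: str) -> None:
    """Poll once with long polling, process each message, then delete it."""
    import boto3  # deferred import; requires AWS credentials
    sqs = boto3.client("sqs")
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        info = parse_s3_event(msg["Body"])
        print(f"Got s3://{info['bucket']}/{info['key']} ({info['size']} bytes)")
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

# Example usage (placeholder URL):
# drain_queue("https://sqs.us-east-1.amazonaws.com/123456789012/large-image-processing")
```

Keeping the body-parsing step as a pure function makes it easy to unit-test against the sample event from Step 2 without a live queue.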
Step 4: Multi-Target Routing — Sending One Event to Multiple Destinations
This is where EventBridge truly shines over native S3 notifications. Let's say every object creation in a specific bucket should simultaneously:
- Trigger a Lambda function for metadata indexing
- Send a message to SQS for async processing
- Log to CloudWatch Logs for auditing
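For the Lambda leg of that fan-out, EventBridge invokes the function with the Step 2 event as the handler's event argument. A minimal handler sketch (the indexing logic is illustrative, not a prescribed implementation):

```python
def handler(event, context):
    """Lambda target for an EventBridge S3 rule: `event` is the raw S3 event."""
    detail = event["detail"]
    bucket = detail["bucket"]["name"]
    key = detail["object"]["key"]
    size = detail["object"].get("size", 0)
    # Illustrative side effect; replace with real metadata-indexing logic.
    print(f"Indexing s3://{bucket}/{key} ({size} bytes)")
    return {"bucket": bucket, "key": key, "size": size}
```

Note there is no Records array to unwrap here, unlike native S3-to-Lambda notifications; the EventBridge envelope is the event itself.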
# Add multiple targets to a single rule (up to 5 targets per rule)
aws events put-targets \
--rule "s3-object-created-all" \
--targets \
"Id"="lambda-indexer","Arn"="arn:aws:lambda:us-east-1:123456789012:function:metadata-indexer" \
"Id"="sqs-processor","Arn"="arn:aws:sqs:us-east-1:123456789012: