Local S3 for integration tests

Run S3 locally for integration tests with fakecloud: 107 S3 operations, including versioning, lifecycle, notifications, multipart uploads, and replication. Works with any AWS SDK, requires no Docker, and is free.

Need a local S3 for integration tests? Use fakecloud.

curl -fsSL https://raw.githubusercontent.com/faiscadev/fakecloud/main/install.sh | bash
fakecloud

Point your AWS SDK at http://localhost:4566.

Quick examples

Python (boto3)

import boto3
s3 = boto3.client('s3',
    endpoint_url='http://localhost:4566',
    aws_access_key_id='test',
    aws_secret_access_key='test',
    region_name='us-east-1')

s3.create_bucket(Bucket='uploads')
s3.put_object(Bucket='uploads', Key='hello.txt', Body=b'hello world')
obj = s3.get_object(Bucket='uploads', Key='hello.txt')
print(obj['Body'].read())

Node.js (AWS SDK v3)

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: 'http://localhost:4566',
  region: 'us-east-1',
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' },
  forcePathStyle: true,
});

await s3.send(new PutObjectCommand({
  Bucket: 'uploads',
  Key: 'hello.txt',
  Body: 'hello world',
}));

Go

cfg, err := config.LoadDefaultConfig(ctx,
    config.WithRegion("us-east-1"),
    config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider("test", "test", "")),
)
if err != nil {
    log.Fatal(err)
}
client := s3.NewFromConfig(cfg, func(o *s3.Options) {
    o.BaseEndpoint = aws.String("http://localhost:4566")
    o.UsePathStyle = true
})

AWS CLI

aws --endpoint-url http://localhost:4566 s3 mb s3://uploads
echo "hello" | aws --endpoint-url http://localhost:4566 s3 cp - s3://uploads/hello.txt
aws --endpoint-url http://localhost:4566 s3 ls s3://uploads/

Versioning + lifecycle

aws --endpoint-url http://localhost:4566 s3api put-bucket-versioning \
  --bucket uploads --versioning-configuration Status=Enabled

aws --endpoint-url http://localhost:4566 s3api put-bucket-lifecycle-configuration \
  --bucket uploads --lifecycle-configuration file://lifecycle.json

With versioning enabled, each put returns a VersionId. Lifecycle transitions run on fakecloud's schedule.
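The lifecycle.json referenced above might look like this. A minimal example that expires noncurrent object versions after 30 days; the rule ID is illustrative:

```json
{
  "Rules": [
    {
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }
  ]
}
```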

Multipart uploads

# chunks: any iterable of byte strings; every part except the last must be >= 5 MB
mpu = s3.create_multipart_upload(Bucket='uploads', Key='large.bin')
parts = []
for i, chunk in enumerate(chunks, start=1):
    resp = s3.upload_part(Bucket='uploads', Key='large.bin', UploadId=mpu['UploadId'],
                          PartNumber=i, Body=chunk)
    parts.append({'PartNumber': i, 'ETag': resp['ETag']})
s3.complete_multipart_upload(Bucket='uploads', Key='large.bin', UploadId=mpu['UploadId'],
                             MultipartUpload={'Parts': parts})

Full multipart API. ETags match AWS.
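AWS computes a multipart ETag as the MD5 of the concatenated binary part MD5s, suffixed with the part count, so you can verify a completed upload locally. A stdlib-only sketch; the helper name is ours:

```python
import hashlib

def multipart_etag(parts: list[bytes]) -> str:
    """Compute the ETag AWS assigns to a multipart upload:
    MD5 over the concatenated binary MD5 digests of each part,
    suffixed with "-<number of parts>", wrapped in quotes."""
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return f'"{hashlib.md5(digests).hexdigest()}-{len(parts)}"'
```

Compare the result against the ETag returned by complete_multipart_upload to confirm the parts arrived intact.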

Notifications: S3 -> Lambda end-to-end

aws --endpoint-url http://localhost:4566 s3api put-bucket-notification-configuration \
  --bucket uploads \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:on-upload",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'

Now PutObject fires the Lambda for real — not a stub. fakecloud runs Lambda code in real runtime containers across 13 runtimes.
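The triggered function receives a standard S3 notification event. A minimal handler sketch that pulls the bucket/key pairs out of the event records (what you do with them is up to your test):

```python
def handler(event, context):
    """Lambda entry point for s3:ObjectCreated:* notifications.

    Each record in a standard S3 event carries the bucket name and
    object key; return them as (bucket, key) tuples."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]
```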

How it differs from alternatives

| Tool | Multi-language | Versioning | Notifications | Real Lambda triggers | Lifecycle |
|---|---|---|---|---|---|
| fakecloud | Any | Yes | Yes | Yes (real code runs) | Yes |
| adobe/S3Mock | Any | Yes | No | No (no Lambda service) | Partial |
| MinIO | Any | Yes | Yes | N/A (no AWS Lambda service) | Yes |
| gofakes3 | Any | Limited | No | No | No |
| Moto (mock_s3) | Python only | Yes | Stubbed | Stubbed | Partial |
| DynamoDB Local | N/A | N/A | N/A | N/A (wrong service) | N/A |

If you need S3 + cross-service wiring (S3 -> SNS/SQS/Lambda actually fires), pick fakecloud.
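An S3 -> SQS round trip can be sketched in boto3 along these lines. The queue name, bucket name, and helper names are ours, and the ARN account ID follows the 000000000000 convention used above; the body parser is pure Python, while main() assumes fakecloud is running on localhost:4566:

```python
import json

def keys_from_sqs_body(body: str) -> list[str]:
    """An SQS message delivered by an S3 notification carries the S3
    event as its JSON body; extract the object keys from the records."""
    event = json.loads(body)
    return [r["s3"]["object"]["key"] for r in event.get("Records", [])]

def main():
    import boto3  # requires boto3 and a running fakecloud
    kwargs = dict(endpoint_url="http://localhost:4566",
                  aws_access_key_id="test", aws_secret_access_key="test",
                  region_name="us-east-1")
    s3 = boto3.client("s3", **kwargs)
    sqs = boto3.client("sqs", **kwargs)

    queue_url = sqs.create_queue(QueueName="upload-events")["QueueUrl"]
    s3.create_bucket(Bucket="uploads")
    s3.put_bucket_notification_configuration(
        Bucket="uploads",
        NotificationConfiguration={"QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:000000000000:upload-events",
            "Events": ["s3:ObjectCreated:*"],
        }]})

    # The put should land in the queue as an S3 event message.
    s3.put_object(Bucket="uploads", Key="hello.txt", Body=b"hi")
    msgs = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5)
    for m in msgs.get("Messages", []):
        print(keys_from_sqs_body(m["Body"]))
```

Call main() with fakecloud running to watch the notification arrive on the queue.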