# Firehose
Amazon Data Firehose delivery streams (control plane), JSON 1.1 protocol.
fakecloud implements Amazon Data Firehose's JSON 1.1 control plane: delivery stream CRUD + tagging. Records sent via `PutRecord` / `PutRecordBatch` are accepted and acked; data-plane fan-out to S3 / OpenSearch / Redshift / Splunk is not implemented (use the S3 endpoints + a real Firehose if you need actual delivery).
Status: control-plane parity. Data plane stops at acknowledgement — records are not written to destinations.
## Supported today
- **Delivery streams** — `CreateDeliveryStream`, `DescribeDeliveryStream`, `ListDeliveryStreams`, `DeleteDeliveryStream`, `UpdateDestination`. Both `DirectPut` and `KinesisStreamAsSource` source types round-trip. Streams progress through `CREATING` -> `ACTIVE` on create and `DELETING` -> deleted on delete.
- **Destinations** — `ExtendedS3DestinationConfiguration`, `RedshiftDestinationConfiguration`, `ElasticsearchDestinationConfiguration`, `AmazonopensearchserviceDestinationConfiguration`, `SplunkDestinationConfiguration`, `HttpEndpointDestinationConfiguration`, `SnowflakeDestinationConfiguration`, `IcebergDestinationConfiguration`. Configuration round-trips verbatim through `UpdateDestination`.
- **Buffering hints** — `BufferingHints` (`SizeInMBs`, `IntervalInSeconds`) are range-checked on `CreateDeliveryStream` and `UpdateDestination`: `SizeInMBs` 1-128 MB; `IntervalInSeconds` `0` (disabled) or 60-900 s. Out-of-range values return `InvalidArgumentException` with the AWS-shaped message, matching real Firehose.
- **Records** — `PutRecord` / `PutRecordBatch` accept records, assign per-record `RecordId`s, and update `DeliveryStreamStatus` / `LastUpdateTimestamp`. Batches up to 500 records / 4 MB are honoured; over-limit batches return `ServiceUnavailableException`.
- **Tags** — `ListTagsForDeliveryStream` / `TagDeliveryStream` / `UntagDeliveryStream`. Keyed by stream ARN.
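The buffering-hints ranges can be pictured as a small validator. This is a hypothetical sketch of the check, not fakecloud's actual code — the function name and the exact error message text are illustrative:

```python
# Illustrative sketch of the BufferingHints range check described above.
# Names and message wording are assumptions, not fakecloud internals.

class InvalidArgumentException(Exception):
    pass

def validate_buffering_hints(hints: dict) -> None:
    """Range-check SizeInMBs (1-128) and IntervalInSeconds (0, or 60-900)."""
    size = hints.get("SizeInMBs")
    if size is not None and not (1 <= size <= 128):
        raise InvalidArgumentException(
            f"SizeInMBs must be between 1 and 128, got {size}"
        )
    interval = hints.get("IntervalInSeconds")
    if interval is not None and interval != 0 and not (60 <= interval <= 900):
        raise InvalidArgumentException(
            f"IntervalInSeconds must be 0 or between 60 and 900, got {interval}"
        )

validate_buffering_hints({"SizeInMBs": 5, "IntervalInSeconds": 300})  # accepted
validate_buffering_hints({"IntervalInSeconds": 0})                    # 0 = disabled, accepted
try:
    validate_buffering_hints({"SizeInMBs": 999, "IntervalInSeconds": 10})
except InvalidArgumentException as e:
    print("rejected:", e)
```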
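The `PutRecordBatch` limits (500 records / 4 MB, over-limit batches answered with `ServiceUnavailableException`) can likewise be sketched. Again a hypothetical sketch: the helper name is mine, and counting the decoded payload bytes is an assumption about how the 4 MB cap is applied:

```python
# Illustrative sketch of the PutRecordBatch limits described above.
# Counting decoded payload bytes toward the 4 MB cap is an assumption.
import base64

MAX_BATCH_RECORDS = 500
MAX_BATCH_BYTES = 4 * 1024 * 1024  # 4 MB

class ServiceUnavailableException(Exception):
    pass

def check_batch(records: list[dict]) -> None:
    """Reject batches over 500 records or 4 MB of payload."""
    if len(records) > MAX_BATCH_RECORDS:
        raise ServiceUnavailableException(
            f"{len(records)} records exceeds the {MAX_BATCH_RECORDS}-record limit"
        )
    total = sum(len(base64.b64decode(r["Data"])) for r in records)
    if total > MAX_BATCH_BYTES:
        raise ServiceUnavailableException(
            f"{total} bytes exceeds the {MAX_BATCH_BYTES}-byte limit"
        )

ok_batch = [{"Data": base64.b64encode(b'{"id":"abc"}').decode()}] * 500
check_batch(ok_batch)  # exactly at the record limit: accepted
```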
## Smoke test
```sh
fakecloud &

aws --endpoint-url http://localhost:4566 firehose create-delivery-stream \
  --delivery-stream-name events \
  --delivery-stream-type DirectPut \
  --extended-s3-destination-configuration '{
    "RoleARN": "arn:aws:iam::000000000000:role/firehose",
    "BucketARN": "arn:aws:s3:::my-bucket",
    "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300}
  }'

aws --endpoint-url http://localhost:4566 firehose describe-delivery-stream \
  --delivery-stream-name events

aws --endpoint-url http://localhost:4566 firehose put-record \
  --delivery-stream-name events \
  --record Data=$(echo -n '{"id":"abc"}' | base64)

# Out-of-range BufferingHints is rejected to match real Firehose.
aws --endpoint-url http://localhost:4566 firehose update-destination \
  --delivery-stream-name events \
  --current-delivery-stream-version-id 1 \
  --destination-id destinationId-000000000001 \
  --extended-s3-destination-update 'BufferingHints={SizeInMBs=999,IntervalInSeconds=10}'
# -> InvalidArgumentException
```

## Caveats
Data delivery is not implemented. PutRecord returns a RecordId but the bytes are dropped — no S3 object is written, no Redshift COPY is issued, no OpenSearch document is indexed, no HTTP endpoint is hit. Buffering, format conversion (Parquet/ORC), and dynamic partitioning are all configuration-only.
This is enough to test IAM policy paths, SDK wiring, retry / batch logic, and BufferingHints validation. It is not enough to test downstream delivery semantics.
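For the "retry / batch logic" case, one thing worth exercising against fakecloud is client-side batching that stays inside the 500-record / 4 MB limits. A minimal sketch, assuming small per-record payloads; the helper name is mine, not part of fakecloud or the AWS SDK:

```python
# Minimal client-side batcher that splits a record stream into
# PutRecordBatch-sized chunks (500 records / 4 MB). Helper names are
# illustrative, not part of fakecloud or the AWS SDK.
from typing import Iterable, Iterator

MAX_RECORDS = 500
MAX_BYTES = 4 * 1024 * 1024

def chunk_records(payloads: Iterable[bytes]) -> Iterator[list[bytes]]:
    """Yield lists of payloads, each list within the batch limits."""
    batch: list[bytes] = []
    size = 0
    for p in payloads:
        # Start a new batch when adding p would cross either limit.
        if batch and (len(batch) >= MAX_RECORDS or size + len(p) > MAX_BYTES):
            yield batch
            batch, size = [], 0
        batch.append(p)
        size += len(p)
    if batch:
        yield batch

# 1200 small records split at the 500-record boundary:
batches = list(chunk_records(b'{"id":%d}' % i for i in range(1200)))
print([len(b) for b in batches])  # -> [500, 500, 200]
```

Each yielded batch can then be sent with `PutRecordBatch` (e.g. via boto3 with `endpoint_url` pointed at fakecloud), and over-limit inputs can be asserted to come back as `ServiceUnavailableException`.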