Firehose

Amazon Data Firehose delivery streams (control plane), JSON 1.1 protocol.

fakecloud implements Amazon Data Firehose's JSON 1.1 control plane: delivery stream CRUD + tagging. Records sent via PutRecord / PutRecordBatch are accepted and acked; data-plane fan-out to S3 / OpenSearch / Redshift / Splunk is not implemented (use the S3 endpoints + a real Firehose if you need actual delivery).

Status: control-plane parity. Data plane stops at acknowledgement — records are not written to destinations.

Supported today

  • Delivery streams: CreateDeliveryStream / DescribeDeliveryStream / ListDeliveryStreams / DeleteDeliveryStream / UpdateDestination. Both DirectPut and KinesisStreamAsSource source types round-trip. Streams progress through CREATING -> ACTIVE on create and DELETING -> deleted on delete.

  • Destinations: ExtendedS3DestinationConfiguration, RedshiftDestinationConfiguration, ElasticsearchDestinationConfiguration, AmazonopensearchserviceDestinationConfiguration, SplunkDestinationConfiguration, HttpEndpointDestinationConfiguration, SnowflakeDestinationConfiguration, IcebergDestinationConfiguration. Configuration round-trips verbatim through UpdateDestination.

  • Buffering hints: BufferingHints (SizeInMBs, IntervalInSeconds) are range-checked on CreateDeliveryStream and UpdateDestination:

    • SizeInMBs: 1 - 128 MB.
    • IntervalInSeconds: 0 (disabled) or 60 - 900 s.

    Out-of-range values return InvalidArgumentException with the AWS-shaped message, matching real Firehose.
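    The range check above can be sketched in a few lines of Python. This is an illustration of the documented behaviour, not fakecloud's actual code; the function and exception names are hypothetical.

    ```python
    # Sketch of the documented BufferingHints range check. Names are
    # illustrative, not fakecloud's internals.
    class InvalidArgumentException(Exception):
        pass

    def validate_buffering_hints(size_in_mbs=None, interval_in_seconds=None):
        # SizeInMBs must fall in 1-128 MB when supplied.
        if size_in_mbs is not None and not 1 <= size_in_mbs <= 128:
            raise InvalidArgumentException(
                f"SizeInMBs {size_in_mbs} is out of range [1, 128]")
        # IntervalInSeconds is 0 (disabled) or 60-900 s when supplied.
        if (interval_in_seconds is not None and interval_in_seconds != 0
                and not 60 <= interval_in_seconds <= 900):
            raise InvalidArgumentException(
                f"IntervalInSeconds {interval_in_seconds} is out of range [60, 900]")

    validate_buffering_hints(5, 300)   # accepted, as in the smoke test below
    ```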

  • Records: PutRecord / PutRecordBatch accept records, assign per-record RecordIds, and update DeliveryStreamStatus / LastUpdateTimestamp. Batches up to 500 records / 4 MB are honoured; over-limit batches return ServiceUnavailableException.
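    The batch limits can be sketched as a simple pre-check (names and the 4 MB-as-MiB interpretation are assumptions for illustration, not fakecloud's internals):

    ```python
    # Illustrative sketch of the documented PutRecordBatch limits.
    class ServiceUnavailableException(Exception):
        pass

    MAX_BATCH_RECORDS = 500
    MAX_BATCH_BYTES = 4 * 1024 * 1024  # "4 MB" per the docs; MiB assumed here

    def check_batch(records):
        # records: list of raw bytes payloads, one per batch entry.
        if len(records) > MAX_BATCH_RECORDS:
            raise ServiceUnavailableException("too many records in batch")
        if sum(len(r) for r in records) > MAX_BATCH_BYTES:
            raise ServiceUnavailableException("batch exceeds size limit")

    check_batch([b'{"id":"abc"}'] * 500)  # exactly at the record limit: accepted
    ```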

  • Tags: ListTagsForDeliveryStream / TagDeliveryStream / UntagDeliveryStream. Keyed by stream ARN.
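Since tag operations key off the stream ARN, it helps to know the shape Firehose delivery stream ARNs take; a minimal sketch (the helper is hypothetical, the ARN format is standard AWS):

```python
def delivery_stream_arn(region, account_id, name):
    # Standard Firehose delivery stream ARN shape.
    return f"arn:aws:firehose:{region}:{account_id}:deliverystream/{name}"

print(delivery_stream_arn("us-east-1", "000000000000", "events"))
# arn:aws:firehose:us-east-1:000000000000:deliverystream/events
```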

Smoke test

fakecloud &

aws --endpoint-url http://localhost:4566 firehose create-delivery-stream \
  --delivery-stream-name events \
  --delivery-stream-type DirectPut \
  --extended-s3-destination-configuration '{
    "RoleARN": "arn:aws:iam::000000000000:role/firehose",
    "BucketARN": "arn:aws:s3:::my-bucket",
    "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300}
  }'

aws --endpoint-url http://localhost:4566 firehose describe-delivery-stream \
  --delivery-stream-name events

aws --endpoint-url http://localhost:4566 firehose put-record \
  --delivery-stream-name events \
  --record Data=$(echo -n '{"id":"abc"}' | base64)

# Out-of-range BufferingHints is rejected to match real Firehose.
aws --endpoint-url http://localhost:4566 firehose update-destination \
  --delivery-stream-name events \
  --current-delivery-stream-version-id 1 \
  --destination-id destinationId-000000000001 \
  --extended-s3-destination-update 'BufferingHints={SizeInMBs=999,IntervalInSeconds=10}'
# -> InvalidArgumentException
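On the JSON 1.1 wire, Record.Data carries base64-encoded bytes; the `echo -n ... | base64` step in the put-record call above can be reproduced in Python:

```python
import base64
import json

# Serialize the record compactly, then base64-encode it, as the CLI does.
payload = json.dumps({"id": "abc"}, separators=(",", ":")).encode()
encoded = base64.b64encode(payload).decode()
print(encoded)  # eyJpZCI6ImFiYyJ9

# Decoding recovers the original bytes.
assert base64.b64decode(encoded) == b'{"id":"abc"}'
```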

Caveats

Data delivery is not implemented. PutRecord returns a RecordId but the bytes are dropped — no S3 object is written, no Redshift COPY is issued, no OpenSearch document is indexed, no HTTP endpoint is hit. Buffering, format conversion (Parquet/ORC), and dynamic partitioning are all configuration-only.

This is enough to test IAM policy paths, SDK wiring, retry / batch logic, and BufferingHints validation. It is not enough to test downstream delivery semantics.

Source