Simplifies usage of Kinesis Streams by delivering data directly to target (no need to write Consumer)
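Producing into Firehose is a single API call; a minimal sketch of the request shape (matching boto3's `put_record` parameters; the stream name and payload are illustrative, and the actual client call is left commented out so the sketch runs without AWS access):

```python
import json

def build_put_record(stream_name, payload):
    """Build the arguments for a Firehose PutRecord call.

    Firehose does not add delimiters between records, so a producer
    that wants line-delimited output appends '\n' itself.
    """
    data = json.dumps(payload).encode() + b"\n"
    if len(data) > 1000 * 1024:  # 1000 KB record size limit
        raise ValueError("record exceeds 1000 KB")
    return {"DeliveryStreamName": stream_name, "Record": {"Data": data}}

req = build_put_record("clickstream", {"user": 42, "page": "/home"})
# With AWS credentials configured, this would be sent as:
#   boto3.client("firehose").put_record(**req)
```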
Model
- Delivery Stream - main entity
- No need to specify shards or partition keys
- Data record - max 1000 KB
- Destination
- S3 bucket
- records are concatenated into larger objects
- compression: gzip, zip, snappy
- needs IAM role
- Supports SSE-KMS
- Redshift table
- uses intermediate S3 bucket
- issues COPY command continuously
- not error-tolerant
- skipped objects are written to a manifest file (S3) for manual reload
- Compression: gzip
- At-least-once semantics - duplicates possible (like SQS)
- Retention: 24h (if destination is not available)
- Retries are automatic
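For the S3 destination, the concatenation-plus-compression behaviour can be sketched with the stdlib (names are illustrative, not the AWS API; Firehose concatenates records as-is, so producers typically append their own `\n` delimiters):

```python
import gzip

def flush_buffer(records):
    """Concatenate buffered records and gzip the result, roughly as the
    S3 destination does when gzip compression is enabled (sketch only)."""
    blob = b"".join(records)
    return gzip.compress(blob)

# Three producer-delimited records end up in one compressed S3 object
records = [b'{"id": 1}\n', b'{"id": 2}\n', b'{"id": 3}\n']
obj = flush_buffer(records)
assert gzip.decompress(obj) == b'{"id": 1}\n{"id": 2}\n{"id": 3}\n'
```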
Amazon Kinesis Agent
- Monitors files and sends records to Kinesis Firehose
- Handles file rotation, checkpointing
- Similar to CloudWatch Agent (Logs)
- Also works with Kinesis Data Streams
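The agent is driven by a JSON config (`/etc/aws-kinesis/agent.json`); a minimal sketch, with illustrative file paths and stream names, showing one flow into a Firehose delivery stream and one into a Kinesis stream:

```json
{
  "flows": [
    {
      "filePattern": "/var/log/app/*.log",
      "deliveryStream": "my-delivery-stream"
    },
    {
      "filePattern": "/var/log/other.log",
      "kinesisStream": "my-kinesis-stream"
    }
  ]
}
```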
Buffer
- Size (1MB-128MB)
- Time (60s-900s, i.e. up to 15m)
- Buffer size may be raised automatically if delivery falls behind