`awslogs-group` log option:
```console
$ docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=myLogGroup ...
```
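If you want these options to apply to all containers by default, they can also be set in the daemon configuration. A minimal `daemon.json` sketch (assuming the daemon has access to AWS credentials, for example through an instance role) might look like this:

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1",
    "awslogs-group": "myLogGroup"
  }
}
```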
### awslogs-stream
To configure which
[log stream](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html)
should be used, you can specify the `awslogs-stream` log option. If not
specified, the container ID is used as the log stream.
> [!NOTE]
>
> Log streams within a given log group should only be used by one container
> at a time. Using the same log stream for multiple containers concurrently
> can cause reduced logging performance.
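For example, to send a container's logs to a stream named `myLogStream`:

```console
$ docker run \
--log-driver=awslogs \
--log-opt awslogs-region=us-east-1 \
--log-opt awslogs-group=myLogGroup \
--log-opt awslogs-stream=myLogStream \
...
```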
### awslogs-create-group
The log driver returns an error by default if the log group doesn't exist. However, you can set
`awslogs-create-group` to `true` to automatically create the log group as needed.
The `awslogs-create-group` option defaults to `false`.
```console
$ docker run \
--log-driver=awslogs \
--log-opt awslogs-region=us-east-1 \
--log-opt awslogs-group=myLogGroup \
--log-opt awslogs-create-group=true \
...
```
> [!NOTE]
>
> Your AWS IAM policy must include the `logs:CreateLogGroup` permission before
> you attempt to use `awslogs-create-group`.
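As a rough illustration, an IAM policy statement granting that permission, together with the permissions the driver needs to create streams and write log events, might look like the following. The account ID and log group ARN are placeholders; adjust them to your environment.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:myLogGroup:*"
    }
  ]
}
```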
### awslogs-create-stream
By default, the log driver creates the AWS CloudWatch Logs stream used for container log persistence.
Set `awslogs-create-stream` to `false` to disable log stream creation. When disabled, the Docker daemon
assumes the log stream already exists. This is useful when log stream creation is handled by another
process, as it avoids redundant AWS CloudWatch Logs API calls.
If `awslogs-create-stream` is set to `false` and the log stream does not exist, log persistence to CloudWatch
fails during container runtime, resulting in `Failed to put log events` error messages in daemon logs.
```console
$ docker run \
--log-driver=awslogs \
--log-opt awslogs-region=us-east-1 \
--log-opt awslogs-group=myLogGroup \
--log-opt awslogs-stream=myLogStream \
--log-opt awslogs-create-stream=false \
...
```
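When another process is responsible for creating the stream, one way to pre-create it is with the AWS CLI (this assumes the AWS CLI is installed and configured with credentials that allow `logs:CreateLogStream`):

```console
$ aws logs create-log-stream \
--log-group-name myLogGroup \
--log-stream-name myLogStream
```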
### awslogs-datetime-format
The `awslogs-datetime-format` option defines a multi-line start pattern in [Python
`strftime` format](https://strftime.org). A log message consists of a line that
matches the pattern and any following lines that don't match the pattern. Thus
the matched line is the delimiter between log messages.
A typical use case for this option is parsing output such as a stack dump, which
might otherwise be logged in multiple entries. The correct pattern allows it to
be captured in a single entry.
This option always takes precedence if both `awslogs-datetime-format` and
`awslogs-multiline-pattern` are configured.
> [!NOTE]
>
> Multi-line logging performs regular expression parsing and matching of all log
> messages, which may have a negative impact on logging performance.
Consider the following log stream, where new log messages start with a
timestamp:
```console
[May 01, 2017 19:00:01] A message was logged
[May 01, 2017 19:00:04] Another multi-line message was logged
Some random message
with some random words
[May 01, 2017 19:01:32] Another message was logged
```
The format can be expressed as a `strftime` expression of
`[%b %d, %Y %H:%M:%S]`, and the `awslogs-datetime-format` value can be set to
that expression:
```console
$ docker run \
--log-driver=awslogs \
--log-opt awslogs-region=us-east-1 \
--log-opt awslogs-group=myLogGroup \
--log-opt awslogs-datetime-format='\[%b %d, %Y %H:%M:%S\]' \
...
```
This parses the logs into the following CloudWatch log events:
```console
# First event
[May 01, 2017 19:00:01] A message was logged
# Second event
[May 01, 2017 19:00:04] Another multi-line message was logged
Some random message
with some random words
# Third event
[May 01, 2017 19:01:32] Another message was logged
```
The following `strftime` codes are supported:
| Code | Meaning | Example |