---
title: S3 storage setup
---
## Preparation
First, create a user on your S3 storage provider and get access credentials (hereafter referred to as `access_key` and `secret_key`).
You will also need the S3 API endpoint that authentik will use (hereafter referred to as `https://s3.provider`). When using AWS S3, there's no need to set the endpoint; for S3-compatible services such as MinIO or Cloudflare R2, use the provider's endpoint URL.
Create or pick a bucket for authentik data, for example `authentik-data`. Adjust the name to your provider's bucket naming rules.
The domain you use to access authentik is referred to as `authentik.company` in the examples below.
You will also need the AWS CLI available locally.
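Rather than passing credentials inline on every command, you can store them in an AWS CLI profile, for example in `~/.aws/credentials` (a sketch; the profile name `authentik-s3` is arbitrary):

```ini
[authentik-s3]
aws_access_key_id = access_key
aws_secret_access_key = secret_key
```

Subsequent `aws` commands can then use `--profile authentik-s3` instead of the inline environment variables.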
## S3 configuration
### Bucket creation
Create the bucket that authentik will use for media files:
```bash
AWS_ACCESS_KEY_ID=access_key AWS_SECRET_ACCESS_KEY=secret_key aws s3api --endpoint-url=https://s3.provider create-bucket --bucket=authentik-data --acl=private
```
If using AWS S3, you can omit `--endpoint-url`, but you may need to specify `--region`. Some regions require `--create-bucket-configuration LocationConstraint=<region>`.
The bucket ACL is set to private. Depending on your provider you can alternatively disable ACLs and rely on bucket policies.
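If bucket creation fails with `InvalidBucketName`, the name likely violates the common S3 naming rules. The following local check covers AWS's base constraints (3-63 characters; lowercase letters, digits, dots, and hyphens; must start and end with a letter or digit); individual providers may add their own restrictions:

```shell
# Validate a candidate bucket name against the common S3 naming rules.
# This runs entirely locally; no S3 access is involved.
python3 - <<'EOF'
import re

name = "authentik-data"
ok = bool(re.fullmatch(r"(?=.{3,63}$)[a-z0-9][a-z0-9.-]*[a-z0-9]", name))
print(f"{name}: {'valid' if ok else 'invalid'}")
EOF
```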
### Bucket policy
The following actions need to be allowed on the bucket:
```text
ListObjectsV2
GetObject
PutObject
CreateMultipartUpload
CompleteMultipartUpload
AbortMultipartUpload
DeleteObject
HeadObject
```
The following policy can be used in AWS. Note that `CreateMultipartUpload` and `CompleteMultipartUpload` are authorized by `s3:PutObject`, and `HeadObject` by `s3:GetObject`; they are not separate IAM actions.
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::<bucket_name>"
        },
        {
            "Sid": "ObjectLevelAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::<bucket_name>/*"
        }
    ]
}
```
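As a convenience, the policy can be rendered for a concrete bucket and sanity-checked locally before attaching it. The file name `policy.json` is an assumption; on AWS you would then attach the result with `aws iam put-user-policy` (with your own user and policy names).

```shell
# Render the bucket policy for a concrete bucket name and verify that the
# resulting file is valid JSON. This runs locally and does not touch S3.
BUCKET="authentik-data"
cat > policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::${BUCKET}"
        },
        {
            "Sid": "ObjectLevelAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::${BUCKET}/*"
        }
    ]
}
EOF
python3 -m json.tool policy.json > /dev/null && echo "policy.json: valid JSON"
```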
### CORS policy
Apply a CORS policy to the bucket, allowing the authentik web interface to access images directly.
Save the following as `cors.json` (use your deployment's origin; include the scheme, and the port if non-standard):
```json
{
    "CORSRules": [
        {
            "AllowedOrigins": ["https://authentik.company"],
            "AllowedHeaders": ["Authorization"],
            "AllowedMethods": ["GET"],
            "MaxAgeSeconds": 3000
        }
    ]
}
```
If authentik is accessed from multiple domains, include each one in `AllowedOrigins`.
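For example, a deployment reachable at two domains might use the following (the second origin is a placeholder):

```json
{
    "CORSRules": [
        {
            "AllowedOrigins": [
                "https://authentik.company",
                "https://sso.example.org"
            ],
            "AllowedHeaders": ["Authorization"],
            "AllowedMethods": ["GET"],
            "MaxAgeSeconds": 3000
        }
    ]
}
```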
Apply the policy to the bucket:
```bash
AWS_ACCESS_KEY_ID=access_key AWS_SECRET_ACCESS_KEY=secret_key aws s3api --endpoint-url=https://s3.provider put-bucket-cors --bucket=authentik-data --cors-configuration=file://cors.json
```
### Content-Type
Browsers rely on the HTTP `Content-Type` header to determine how to handle files: render HTML, display an image, or perform another action.
Ensure that files uploaded to S3 have the correct `Content-Type` header set. If this header is missing or incorrect, browsers may fail to render content properly. For example, images might not display at all. The following command updates the `Content-Type` header for all PNG images in an AWS S3 bucket, and can be adapted for other file types:
```bash
aws s3 cp \
    s3://<bucket_name>/ s3://<bucket_name>/ \
    --exclude "*" --include "*.png" \
    --no-guess-mime-type \
    --content-type "image/png" \
    --metadata-directive "REPLACE" \
    --recursive
```
:::note Terraform uploads
Terraform does not set the `Content-Type` header automatically when it uploads files to S3. Set it explicitly on those resources, or fix it afterwards with the command above.
:::
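To find the right `--content-type` value for other file extensions, Python's standard-library `mimetypes` module gives the canonical mapping (a local lookup; no S3 access involved):

```shell
# Print canonical MIME types for common media extensions; use the printed
# value as the --content-type argument when fixing object metadata.
python3 - <<'EOF'
import mimetypes

for name in ("logo.png", "icon.svg", "photo.jpg"):
    mime, _ = mimetypes.guess_type(name)
    print(f"{name}: {mime}")
EOF
```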
### Configuring authentik
Add the following to your `.env` file:
```env
AUTHENTIK_STORAGE__BACKEND=s3
AUTHENTIK_STORAGE__S3__ACCESS_KEY=access_key
AUTHENTIK_STORAGE__S3__SECRET_KEY=secret_key
AUTHENTIK_STORAGE__S3__BUCKET_NAME=authentik-data
```
If you are using AWS S3, add:
```env
# Use the region of the bucket
AUTHENTIK_STORAGE__S3__REGION=us-east-1
```
If you are using an S3-compatible provider (non-AWS), add:
```env
AUTHENTIK_STORAGE__S3__ENDPOINT=https://s3.provider
AUTHENTIK_STORAGE__S3__CUSTOM_DOMAIN=s3.provider/authentik-data
```
If your provider only supports legacy S3 signatures, also set:
```env
AUTHENTIK_STORAGE__S3__SIGNATURE_VERSION=s3
```
By default, authentik uses signature version `s3v4`.
The `AUTHENTIK_STORAGE__S3__ENDPOINT` setting controls how authentik communicates with the S3 provider. When set, it takes precedence over the region and `USE_SSL` settings.
The `AUTHENTIK_STORAGE__S3__CUSTOM_DOMAIN` setting controls how media URLs are built for the web interface. It must include the bucket name and must not include a scheme.
For a path-style domain, set `AUTHENTIK_STORAGE__S3__CUSTOM_DOMAIN=s3.provider/authentik-data`. The object `application-icons/application.png` will then be available at `https://s3.provider/authentik-data/application-icons/application.png`.
Whether URLs use HTTPS is controlled by `AUTHENTIK_STORAGE__S3__SECURE_URLS` (defaults to `true`). Depending on your provider, you can also use a virtual hosted-style domain such as `authentik-data.s3.provider`.
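To illustrate how these settings combine, the media URL construction can be sketched as follows (a simplification of what the underlying storage library does, not authentik's actual code):

```shell
# Assemble a media URL from CUSTOM_DOMAIN, SECURE_URLS, and an object key.
python3 - <<'EOF'
custom_domain = "s3.provider/authentik-data"      # bucket included, no scheme
secure_urls = True                                # AUTHENTIK_STORAGE__S3__SECURE_URLS
object_key = "application-icons/application.png"

scheme = "https" if secure_urls else "http"
print(f"{scheme}://{custom_domain}/{object_key}")
EOF
```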
:::info
You can omit `ACCESS_KEY` and `SECRET_KEY` when using AWS SDK authentication (instance roles or profiles). See `AUTHENTIK_STORAGE__S3__SESSION_PROFILE` and related options in the [configuration reference](../../install-config/configuration/configuration.mdx#storage-settings).
:::
For more options (including `AUTHENTIK_STORAGE__S3__USE_SSL`, session profiles, and security tokens), see the [configuration reference](../../install-config/configuration/configuration.mdx#storage-settings).
## Migrating between storage backends
The following assumes the local storage path is `/data` and the bucket is `authentik-data`. Ensure your `aws` CLI is configured to talk to your provider (add `--endpoint-url` or `--region` as needed).
### From file to s3
Follow the setup steps above, then sync files from the local directory to S3 (to the bucket root):
```bash
aws s3 sync /data s3://authentik-data/
# For non-AWS providers, include the endpoint:
# aws --endpoint-url=https://s3.provider s3 sync /data s3://authentik-data/
```
### From s3 to file
```bash
aws s3 sync s3://authentik-data/ /data
# For non-AWS providers:
# aws --endpoint-url=https://s3.provider s3 sync s3://authentik-data/ /data
```