Installation
Decide up front whether you want this feature:
To prevent orphaned data that would result if S3 details were changed later, self-service setup of your BYO S3 bucket can only be performed when a node is first created.
Darwinium staff can change these details upon request, but a data migration is required.
Audience
This guide assumes a user with access to your organization's AWS console, and appropriate permissions to:
Create S3 buckets
Create cross-account IAM roles for read-only and write-only access to the bucket.
It also assumes a user with portal access to Darwinium, with the manage deployments permission on the node being set up.
Step 1 - Obtain External IDs for Read/Write Roles
In the Darwinium portal, navigate to Admin > Nodes
Select the node you wish to configure and click to open its settings
A dialog will appear. Select the configuration tab, and scroll to S3 cloud storage
Select Cross-Account IAM role for AWS credential type
A section will appear with generated external IDs for your S3 read-only and write-only IAM roles. Write these down.
Step 2 - Create your Darwinium S3 Bucket
Using the AWS console, create a new S3 bucket. Your bucket may be customized according to your infrastructure requirements. Darwinium recommends the following default settings (a scripted sketch follows the list below):
Region: us-east-2
Object Ownership: Bucket owner enforced
Block All Public Access: enabled
Bucket Versioning: enabled
Encryption: enabled with S3 managed keys (SSE-S3) - This is strongly recommended
Write down the ARN of your S3 bucket - this will be needed in future steps.
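If you prefer to script the bucket creation rather than use the console, the following is a minimal sketch using the AWS SDK for Python (boto3) that applies the recommended defaults above. The bucket name is a placeholder; adjust the region and name to your own requirements.

import boto3

REGION = "us-east-2"
BUCKET = "example-bucket-name"  # placeholder: use your own bucket name

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket (a LocationConstraint is required outside us-east-1).
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Object Ownership: Bucket owner enforced (disables ACLs).
s3.put_bucket_ownership_controls(
    Bucket=BUCKET,
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
)

# Block all public access.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Enable bucket versioning.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Default encryption with S3 managed keys (SSE-S3).
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

print("Bucket ARN:", f"arn:aws:s3:::{BUCKET}")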
Step 3 - Create IAM read-only and write-only roles
To enable Darwinium to access your Customer Hosted S3 Bucket, create AWS IAM read-only and write-only roles which Darwinium components will assume to gain access to your bucket.
This AWS-recommended role-assumption process is access-controlled by an External ID, which means you do not need to share static credentials for your AWS environment.
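For context, the assume-role handshake Darwinium performs looks conceptually like the STS call below. This is an illustrative boto3 sketch only: the role ARN, session name, and external ID are placeholders, and in practice the call is made from Darwinium's AWS account rather than yours.

import boto3

sts = boto3.client("sts")

# Darwinium presents the external ID as a shared secret when assuming your role.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/s3readonlyrole",  # placeholder role ARN
    RoleSessionName="darwinium-s3-read",                      # placeholder session name
    ExternalId="EXTERNALID",                                  # external ID from Step 1
)

# The temporary credentials returned are then used to access the bucket.
creds = resp["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

The sts:ExternalId condition in the trust policies you create below is what restricts this call to requests that present the correct external ID.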
Create S3 Read-Only Policy
Go to IAM > Policies > Create Policy
Edit the policy template provided below as follows: replace S3BUCKETARN with the ARN of your Customer Hosted S3 Bucket (for example, arn:aws:s3:::example-bucket-name). Then, save the edited result as the JSON policy for your new IAM policy.
{
  "Statement": [
    {
      "Action": [
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAttributes"
      ],
      "Effect": "Allow",
      "Resource": "S3BUCKETARN/*"
    },
    {
      "Action": [
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": "S3BUCKETARN"
    }
  ],
  "Version": "2012-10-17"
}
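If you prefer to create the policy programmatically instead of through the console, a boto3 sketch along these lines produces the same result. The policy name and bucket ARN are placeholders.

import json
import boto3

iam = boto3.client("iam")
bucket_arn = "arn:aws:s3:::example-bucket-name"  # placeholder: your bucket ARN

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["s3:GetObjectAcl", "s3:GetObject", "s3:GetObjectAttributes"],
            "Effect": "Allow",
            "Resource": f"{bucket_arn}/*",
        },
        {
            "Action": ["s3:GetBucketLocation"],
            "Effect": "Allow",
            "Resource": bucket_arn,
        },
    ],
}

resp = iam.create_policy(
    PolicyName="darwinium-s3-read-only",  # placeholder policy name
    PolicyDocument=json.dumps(read_only_policy),
)
print("Policy ARN:", resp["Policy"]["Arn"])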
Create S3 Read-Only Role
This IAM Role will be assumed by Darwinium to acquire read-only permissions to your S3 bucket.
Use the read-only External ID you obtained from the Darwinium portal in Step 1. This is a secret value that is unique to you and must be stored securely.
Go to IAM > Roles > Create Role
Edit the policy template provided below as follows: replace EXTERNALID with the ExternalId obtained in Step 1, and replace DWNACCOUNTID with the Account ID Darwinium will supply upon install. Use the edited result as the custom trust policy for your new IAM Role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::DWNACCOUNTID:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "EXTERNALID"
        },
        "ArnLike": {
          "aws:PrincipalArn": "arn:aws:iam::DWNACCOUNTID:role/crossaccountassumers/*"
        }
      }
    }
  ]
}
Next, on the Add Permissions menu, select the S3 Read-Only policy that you created above.
Choose an appropriate name for the role, and note down the role’s ARN once created. It will look like
arn:aws:iam::123456789:role/s3readonlyrole
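The same role can also be created programmatically. The boto3 sketch below uses placeholder values for the Darwinium account ID, external ID, policy ARN, and role name; substitute the real values from Step 1 and the policy you just created.

import json
import boto3

iam = boto3.client("iam")

DWN_ACCOUNT_ID = "111111111111"  # placeholder: account ID supplied by Darwinium
EXTERNAL_ID = "EXTERNALID"       # placeholder: read-only external ID from Step 1
POLICY_ARN = "arn:aws:iam::123456789012:policy/darwinium-s3-read-only"  # placeholder
ROLE_NAME = "s3readonlyrole"     # placeholder role name

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{DWN_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"sts:ExternalId": EXTERNAL_ID},
                "ArnLike": {
                    "aws:PrincipalArn": f"arn:aws:iam::{DWN_ACCOUNT_ID}:role/crossaccountassumers/*"
                },
            },
        }
    ],
}

# Create the role with the custom trust policy, then attach the read-only policy.
role = iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(RoleName=ROLE_NAME, PolicyArn=POLICY_ARN)
print("Role ARN:", role["Role"]["Arn"])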
Create S3 Write-Only Policy
This IAM policy grants the write-only permissions that Darwinium needs to your S3 bucket.
Go to IAM > Policies > Create Policy
Edit the policy template provided below as follows: replace S3BUCKETARN with the ARN of your Customer Hosted S3 Bucket (for example, arn:aws:s3:::example-bucket-name). Then, save the edited result as the JSON policy for your new IAM policy.
{
  "Statement": [
    {
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "S3BUCKETARN/*"
    },
    {
      "Action": [
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": "S3BUCKETARN"
    }
  ],
  "Version": "2012-10-17"
}
Create S3 Write-Only Role
This IAM Role will be assumed by Darwinium to acquire write-only permissions to your S3 bucket.
Use the write-only External ID you obtained from the Darwinium portal in Step 1. This is a secret value that is unique to you and must be stored securely.
Go to IAM > Roles > Create Role
Edit the policy template provided below as follows: replace EXTERNALID with the ExternalId that Darwinium provided, and replace DWNACCOUNTID with the Account ID Darwinium will supply upon install. Use the edited result as the custom trust policy for your new IAM Role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::DWNACCOUNTID:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "EXTERNALID"
        },
        "ArnLike": {
          "aws:PrincipalArn": "arn:aws:iam::DWNACCOUNTID:role/crossaccountassumers/*"
        }
      }
    }
  ]
}
Next, on the Add Permissions menu, select the S3 Write-Only policy that you created above.
Choose an appropriate name for the role, and note down the role’s ARN once created. It will look like
arn:aws:iam::123456789:role/s3writeonlyrole
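As an optional sanity check before entering the role ARNs into the portal, you can run the write-only policy through the IAM policy simulator to confirm it allows s3:PutObject but not s3:GetObject. A boto3 sketch, with the bucket ARN as a placeholder:

import json
import boto3

iam = boto3.client("iam")
bucket_arn = "arn:aws:s3:::example-bucket-name"  # placeholder: your bucket ARN

write_only_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Action": ["s3:PutObject"], "Effect": "Allow", "Resource": f"{bucket_arn}/*"},
        {"Action": ["s3:GetBucketLocation"], "Effect": "Allow", "Resource": bucket_arn},
    ],
})

resp = iam.simulate_custom_policy(
    PolicyInputList=[write_only_policy],
    ActionNames=["s3:PutObject", "s3:GetObject"],
    ResourceArns=[f"{bucket_arn}/test-object"],
)
for result in resp["EvaluationResults"]:
    # Expect "allowed" for s3:PutObject and "implicitDeny" for s3:GetObject.
    print(result["EvalActionName"], result["EvalDecision"])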
Create S3 Bucket Policy if required (CloudFront only)
This section is only required if you are installing Darwinium on CloudFront and you are hosting your own PII S3 bucket in a different AWS account from the one that contains your CloudFront distribution and Lambda@Edge functions.
Darwinium Lambda@Edge functions use the Lambda execution role to gain permission to write directly to your S3 bucket. If the Lambda execution role is in a different AWS account from the one containing your S3 bucket, you must configure the bucket policy on the S3 bucket itself to allow the execution role access. This is in addition to attaching the S3 write IAM policy described above to the Lambda execution role. This assumes that you have set 'Bucket owner enforced' and S3 managed keys (SSE-S3) on your S3 bucket, as recommended above.
Note that you should still create the main S3 read and write roles and policies in the AWS account that holds the S3 bucket. You will also need to create a duplicate of the S3 write IAM policy in the AWS account with CloudFront, so that you can attach it to your Lambda execution role.
Go to your bucket in S3 > Permissions > Edit Bucket Policy
Edit the policy template provided below as follows: replace S3BUCKETARN with the ARN of your Customer Hosted S3 Bucket, and EXECUTIONROLEARN with the ARN of your Lambda execution role. Then, save the edited result as the bucket policy.
{
  "Statement": [
    {
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "S3BUCKETARN/*",
      "Principal": {
        "AWS": "EXECUTIONROLEARN"
      }
    },
    {
      "Action": [
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": "S3BUCKETARN",
      "Principal": {
        "AWS": "EXECUTIONROLEARN"
      }
    }
  ],
  "Version": "2012-10-17"
}
Please see this AWS documentation for more information: https://repost.aws/knowledge-center/cross-account-access-s3
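The same bucket policy can be applied programmatically if you prefer. A boto3 sketch, with the bucket name and Lambda execution role ARN as placeholders:

import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-bucket-name"                                           # placeholder
BUCKET_ARN = f"arn:aws:s3:::{BUCKET}"
EXECUTION_ROLE_ARN = "arn:aws:iam::123456789012:role/lambda-edge-exec"   # placeholder

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["s3:PutObject"],
            "Effect": "Allow",
            "Resource": f"{BUCKET_ARN}/*",
            "Principal": {"AWS": EXECUTION_ROLE_ARN},
        },
        {
            "Action": ["s3:GetBucketLocation"],
            "Effect": "Allow",
            "Resource": BUCKET_ARN,
            "Principal": {"AWS": EXECUTION_ROLE_ARN},
        },
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))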
Step 4 - Enter S3 details into the Darwinium portal
Go back to the Darwinium portal where you received your external IDs earlier
Enter details for:
The S3 bucket AWS region
The S3 bucket name
Write-only Role ARN
Read-only Role ARN
Click "verify" to confirm that S3 credentials are working correctly. Note that your node's configuration cannot be saved until verification passes.
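Before clicking verify, you can optionally double-check the bucket name and region you are about to enter with a quick call made under your own AWS credentials, for example (boto3 sketch; the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

# get_bucket_location reports None for us-east-1; anything else is the region name.
resp = s3.get_bucket_location(Bucket="example-bucket-name")  # placeholder bucket name
print("Bucket region:", resp.get("LocationConstraint") or "us-east-1")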