Building (what I imagine to be) the ultimate Spring Boot environment on AWS with Terraform

※ The title is a bit of an exaggeration

Version update information

June 23, 2021

Name: version before the update → version after the update

    • OS: Mac OS 10.15.1 (Catalina) → Mac OS 11.4 (Big Sur)
    • Homebrew: 2.1.16 → 3.1.9
    • tfenv: 1.0.2 → 2.2.1
    • Terraform: 0.12.13 → 0.15.4
    • AWS Provider: 2.34.0 → 3.44.0
    • AWS CLI: aws-cli/1.16.260 Python/3.7.5 Darwin/19.0.0 botocore/1.12.250 → aws-cli/2.0.48 Python/3.7.4 Darwin/20.5.0 exe/x86_64

What this aims to achieve

Get a web application built with Spring Boot + Gradle released to a staging (verification) environment as quickly as possible.
Intended for people who would rather not click around the AWS Management Console.

terraform-aws-Page-1.png

Assumptions

    • Work is basically done in the Tokyo region
    • You have an AWS account that can create new IAM users and grant them permissions
    • The above account can be accessed programmatically (via the AWS CLI)
    • You have used Terraform before

Environment

    • OS: Mac OS 11.4 (Big Sur)
    • Homebrew: 3.1.9
    • AWS CLI: aws-cli/2.0.48 Python/3.7.4 Darwin/20.5.0 exe/x86_64
    • tfenv: 2.2.1
    • Terraform: 0.15.4

Directory layout

Shared information such as the VPC and the repositories is managed in the shared directory.
Workspaces are used to switch between the staging (verification) and production environments.
Workspaces work here because the differences between environments are small; if the differences were larger, you would probably want to split some of it into per-environment directories instead. The workspace commands themselves are shown right below.
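For reference, the workspaces are created once and then switched with the standard commands (a minimal sketch; stage and production are the workspace names used in this article):

$ terraform workspace new stage
$ terraform workspace new production
$ terraform workspace select stage
$ terraform workspace show
stage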

tree
.
├── .terraform/ # auto-generated
├── certs/ # pem files and other credentials (not committed)
├── shared/ # shared resources
│       ├── .terraform/ # auto-generated
│       ├── .terraform.lock.hcl # auto-generated
│       ├── terraform.tf # terraform settings
│       ├── ・・・・・・ # resource files
│       └── variables.tf # variable declarations
├── terraform.tfstate.d/ # auto-generated
├── .terraform.lock.hcl # auto-generated
├── terraform.tf # terraform settings
├── ・・・・・・ # resource files
├── terraform.tfvars # variable values
└── variables.tf # variable declarations

Preparation

AWS CLI

Installation

Create a Terraform user

$ aws iam create-user \
  --user-name terraform-sample
{
    "User": {
        "Path": "/",
        "UserName": "terraform-sample",
        "UserId": "XXXXXXXXXXXXXXXXXXXX",
        "Arn": "arn:aws:iam::XXXXXXXXXXXX:user/terraform-sample",
        "CreateDate": "YYYY-MM-DDTHH:mm:ssZ"
    }
}

Grant permissions

Grant administrator permissions here, and adjust the permissions to whatever your environment actually needs.
Since the required permissions span a wide range of services, the simplest approach is to attach AdministratorAccess while building and detach it once the work is done (a detach example follows the attach command below).

$ aws iam attach-user-policy \
  --user-name terraform-sample \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
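Once the work is finished, the policy can be detached again with the corresponding command:

$ aws iam detach-user-policy \
  --user-name terraform-sample \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess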

Create and save credentials

$ aws iam create-access-key \
  --user-name terraform-sample
{
    "AccessKey": {
        "UserName": "terraform-sample",
        "AccessKeyId": "AAAAAAAAAAAAAAAAAAA",
        "Status": "Active",
        "SecretAccessKey": "BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB",
        "CreateDate": "YYYY-MM-DDTHH:mm:ssZ"
    }
}
$ cat - << EOS >> ~/.aws/credentials
# profile name
[terraform-sample]
region = ap-northeast-1 # Tokyo region
aws_access_key_id = AAAAAAAAAAAAAAAAAAA
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
EOS
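To confirm the profile works, you can check the caller identity (the Arn in the output should point at the terraform-sample user):

$ aws sts get-caller-identity --profile terraform-sample
{
    "UserId": "XXXXXXXXXXXXXXXXXXXX",
    "Account": "XXXXXXXXXXXX",
    "Arn": "arn:aws:iam::XXXXXXXXXXXX:user/terraform-sample"
}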

Creating resources not managed by Terraform

The CodeCommit repositories that hold the application/static content and the S3 bucket that stores the Terraform state are created with the AWS CLI, so that they are not deleted by operations such as terraform destroy.

S3 bucket for the Terraform state

terraform-aws-Terraform用S3バケット.png
$ aws s3api create-bucket \
  --bucket terraform-sample-tfstate \
  --acl private \
  --region ap-northeast-1 \
  --create-bucket-configuration LocationConstraint=ap-northeast-1 \
  --profile terraform-sample
{
    "Location": "http://terraform-sample-tfstate.s3.amazonaws.com/"
}

Enable versioning

$ aws s3api put-bucket-versioning \
  --bucket terraform-sample-tfstate \
  --versioning-configuration Status=Enabled \
  --profile terraform-sample
$ aws s3api get-bucket-versioning \
  --bucket terraform-sample-tfstate \
  --profile terraform-sample
{
    "Status": "Enabled"
}

Bucket encryption

$ aws s3api put-bucket-encryption \
  --bucket terraform-sample-tfstate \
  --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}' \
  --profile terraform-sample
$ aws s3api get-bucket-encryption \
  --bucket terraform-sample-tfstate \
  --profile terraform-sample
{
    "ServerSideEncryptionConfiguration": {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "AES256"
                }
            }
        ]
    }
}

Terraform setup

Install tfenv

Since Terraform version issues come up constantly, I use a tool called tfenv for version management. There is an article on Qiita about managing Terraform versions with tfenv.

$ brew install tfenv
$ which tfenv
/usr/local/bin/tfenv
$ tfenv --version
tfenv 2.2.1

Install and use the latest version of Terraform.

$ tfenv install latest
$ tfenv use latest
[INFO] Switching to v0.15.4
[INFO] Switching completed

Initialize Terraform

Write the basic configuration

You can check the latest version of the AWS provider on the following page:
Terraform AWS Provider CHANGELOG – GitHub
This article uses 3.44.0, the latest version at the time of writing.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.44.0"
    }
  }
  backend "s3" {
    bucket  = "terraform-sample-tfstate" # name of the Terraform state S3 bucket created above
    region  = "ap-northeast-1" # region you are working in
    profile = "terraform-sample" # profile name saved in ~/.aws/credentials
    key     = "terraform.tfstate" # path of the tfstate file
    encrypt = true
  }
}

provider "aws" {
  region                  = "ap-northeast-1"
  shared_credentials_file = "/Users/exotic-toybox/.aws/credentials" # ~/.aws/credentials
  profile                 = "terraform-sample"
}

Copy it into the shared directory as well.

$ cp terraform.tf shared/

Only the tfstate file path needs to change.

terraform {
  required_providers {
    #-- 中略 --#
  }
  backend "s3" {
    #-- 中略 --#
-   key     = "terraform.tfstate" # tfstate file path
+   key     = "shared/terraform.tfstate" # tfstate file path
  }
}
#-- 中略 --#

Run the init command

$ terraform init

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 3.44.0...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

If you see "Terraform has been successfully initialized!", it worked.
Run the same command in the shared directory as well, as shown below.
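For example:

$ cd shared
shared$ terraform init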

On using variables

Variables cannot be used inside the provider or terraform blocks.
The following will produce an error.

variable "region" {
  default = "ap-northeast-1"
}

data "aws_s3_bucket" "tfstate" {
  bucket        = "terraform-sample-tfstate"
  #-- 中略 --#
}

terraform {
  #-- 中略 --#
  backend "s3" {
    bucket  = aws_s3_bucket.tfstate.bucket # this, for example
    region  = var.region # and this
    #-- 中略 --#
  }
}
$ terraform init
Initializing the backend...
Error: Variables not allowed
  on terraform.tf line 18, in terraform:
  18:     bucket  = aws_s3_bucket.tfstate.bucket
  on terraform.tf line 19, in terraform:
  19:     region  = var.region
Variables may not be used here.
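If you do want to keep these values out of the .tf files, one common alternative (not used in this article) is Terraform's partial backend configuration, where the values are passed at init time instead:

$ terraform init \
  -backend-config="bucket=terraform-sample-tfstate" \
  -backend-config="region=ap-northeast-1" \
  -backend-config="profile=terraform-sample" \
  -backend-config="key=terraform.tfstate"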

Loading shared resources

Read the resources created under the shared directory via terraform_remote_state.

data terraform_remote_state shared {
  backend = "s3"
 
  config = {
    bucket  = "terraform-sample-tfstate"
    key     = "shared/terraform.tfstate"
    region  = "ap-northeast-1"
    profile = "terraform-sample"
  }
}

Building the staging (Stage) environment

terraform.tfvars (new file):

app_name = "terraform-sample"

variables.tf:

variable app_name {}

Write the same declaration in shared/variables.tf.
From here on, whenever a variable is added to terraform.tfvars, declare it in both variables.tf and shared/variables.tf.

CodeCommit repositories outside of Terraform management


CodeCommit repository for the static content

static-content (1).png
$ aws codecommit create-repository \
  --repository-name terraform-sample-static-contents \
  --repository-description "static contents repository" \
  --profile terraform-sample
{
    "repositoryMetadata": {
        "accountId": "XXXXXXXXXXXX",
        "repositoryId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
        "repositoryName": "terraform-sample-static-contents",
        "repositoryDescription": "static contents repository",
        "lastModifiedDate": XXXXXXXXXX.XX,
        "creationDate": XXXXXXXXXX.XX,
        "cloneUrlHttp": "https://git-codecommit.ap-northeast-1.amazonaws.com/v1/repos/terraform-sample-static-contents",
        "cloneUrlSsh": "ssh://git-codecommit.ap-northeast-1.amazonaws.com/v1/repos/terraform-sample-static-contents",
        "Arn": "arn:aws:codecommit:ap-northeast-1:XXXXXXXXXXXX:terraform-sample-static-contents"
    }
}

Make it available to Terraform as a data source and output.

data aws_codecommit_repository static_contents {
  repository_name = "terraform-sample-static-contents"
}

output codecommit_repository_static_contents {
  value = data.aws_codecommit_repository.static_contents
}

Contents

Place the static content, such as index.html.

tree
.
├── error.html
├── favicon.png
└── index.html

Create a stage branch first; a sketch follows below.
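A minimal sketch of that, assuming git access to CodeCommit is already configured (for example via the AWS CLI credential helper for the terraform-sample profile):

$ git clone https://git-codecommit.ap-northeast-1.amazonaws.com/v1/repos/terraform-sample-static-contents
$ cd terraform-sample-static-contents
$ git checkout -b stage
# add index.html, error.html and favicon.png, then
$ git add .
$ git commit -m "Initial commit"
$ git push origin stage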

CodeCommit repository for the application

application.png
$ aws codecommit create-repository \
  --repository-name terraform-sample-application-sources \
  --repository-description "application sources repository" \
  --profile terraform-sample
{
    "repositoryMetadata": {
        "accountId": "XXXXXXXXXXXX",
        "repositoryId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
        "repositoryName": "terraform-sample-application-sources",
        "repositoryDescription": "application sources repository",
        "lastModifiedDate": XXXXXXXXXX.XX,
        "creationDate": XXXXXXXXXX.XX,
        "cloneUrlHttp": "https://git-codecommit.ap-northeast-1.amazonaws.com/v1/repos/terraform-sample-application-sources",
        "cloneUrlSsh": "ssh://git-codecommit.ap-northeast-1.amazonaws.com/v1/repos/terraform-sample-application-sources",
        "Arn": "arn:aws:codecommit:ap-northeast-1:XXXXXXXXXXXX:terraform-sample-application-sources"
    }
}

Make it available to Terraform as a data source and output.

#-- 中略 --#
data aws_codecommit_repository application_sources {
  repository_name = "terraform-sample-application-sources"
}

output codecommit_repository_application_sources {
  value = data.aws_codecommit_repository.application_sources
}

Contents

Place main classes annotated with @SpringBootApplication in two Gradle projects, admin and user.

tree
.
├── admin
│   ├── bin
│   ├── build
│   ├── build.gradle
│   └── src
│       ├── main
│       │   ├── java
│       │   │   └── com
│       │   │       └── example
│       │   │           └── admin
│       │   │               └── AdminApplication.java
│       │   └── resources
│       │       └── application.yaml
│       └── test
│           ├── java
│           └── resources
├── appspec.yml
├── build.gradle
├── buildspec_admin.yml
├── buildspec_user.yml
├── data
│   └── script
│       ├── after_install.sh
│       ├── application_start.sh
│       ├── application_stop.sh
│       └── before_install.sh
├── gradle
│   └── wrapper
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradlew
├── gradlew.bat
├── settings.gradle
└── user
    ├── bin
    ├── build
    ├── build.gradle
    └── src
        ├── main
        │   ├── java
        │   │   └── com
        │   │       └── example
        │   │           └── user
        │   │               └── UserApplication.java
        │   └── resources
        │       └── application.yaml
        └── test
            └── resources

Create a stage branch here as well.

From here on, assume that terraform apply is run whenever a .tf file is created or updated, and that shared$ terraform apply -var-file=../terraform.tfvars is run whenever a .tf file under the shared directory is created or updated.
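In other words, the usual cycle looks roughly like this (with the stage workspace selected):

$ terraform workspace select stage
$ terraform apply
$ cd shared
shared$ terraform apply -var-file=../terraform.tfvars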

Automatic deployment of the static content

terraform-aws-静的コンテンツの自動デプロイ.png

Deployment target S3 bucket

terraform-aws-静的コンテンツの自動デプロイ-デプロイ先のS3バケット.png
resource "aws_s3_bucket" "static_contents" {
  bucket = "${var.app_name}-static-contents-${terraform.workspace}"
  acl    = "private"

  tags = {
    Name = "${var.app_name}-static-contents-${terraform.workspace}"
  }
}

data "aws_iam_policy_document" "s3_static_contents" {
  statement {
    effect = "Allow"
    actions = ["s3:GetObject"]

    resources = [
      aws_s3_bucket.static_contents.arn,
      "${aws_s3_bucket.static_contents.arn}/*"
    ]
  }
}

S3 bucket for CodePipeline artifacts

terraform-aws-静的コンテンツの自動デプロイ-CodePipelineのアーティファクトを格納するS3バケット.png
resource aws_s3_bucket codepipeline_static_contents {
  bucket = "${var.app_name}-codepipeline-static-contents-${terraform.workspace}"
  acl    = "private"

  tags = {
    Name = "${var.app_name}-codepipeline-static-contents-${terraform.workspace}"
  }
}

data aws_iam_policy_document s3_codepipeline_static_contents {
  statement {
    effect = "Allow"
    actions = [
      "s3:GetObject",
      "s3:GetObjectVersion",
      "s3:GetBucketVersioning",
      "s3:PutObject"
    ]

    resources = [
      aws_s3_bucket.codepipeline_static_contents.arn,
      "${aws_s3_bucket.codepipeline_static_contents.arn}/*"
    ]
  }
}

KMS key for S3 encryption

terraform-aws-静的コンテンツの自動デプロイ-S3暗号化用kms.png
resource aws_kms_key kms {}

resource aws_kms_alias kms {
  name          = "alias/${var.app_name}"
  target_key_id = aws_kms_key.kms.key_id
}

output kms_alias_arn {
  value = aws_kms_alias.kms.arn
}
data aws_iam_policy_document kms {
  statement {
    effect  = "Allow"
    actions = ["kms:*"]

    resources = ["*"]
  }
}

output kms_policy_json {
  value = data.aws_iam_policy_document.kms.json
}

Create policies

data aws_iam_policy_document codepipeline_assume_role {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      identifiers = [
        "codepipeline.amazonaws.com",
        "events.amazonaws.com"
      ]
      type = "Service"
    }
  }
}

output codepipeline_assume_role_policy_json {
  value = data.aws_iam_policy_document.codepipeline_assume_role.json
}
data aws_iam_policy_document codecommit_static_contents {
  statement {
    effect = "Allow"
    actions = [
      "codecommit:GitPull",
      "codecommit:GetBranch",
      "codecommit:GetCommit",
      "codecommit:UploadArchive",
      "codecommit:GetUploadArchiveStatus"
    ]

    resources = [
      data.aws_codecommit_repository.static_contents.arn,
      "${data.aws_codecommit_repository.static_contents.arn}/*"
    ]
  }
}

output codecommit_static_contents_policy_json {
  value = data.aws_iam_policy_document.codecommit_static_contents.json
}

CodePipeline

terraform-aws-静的コンテンツの自動デプロイ-CodePipeline.png
resource aws_iam_role codepipeline_static_contents {
  name               = "${var.app_name}-codepipeline-static-contents-${terraform.workspace}"
  assume_role_policy = data.terraform_remote_state.shared.outputs.codepipeline_assume_role_policy_json
}

# Allow fetching the source from CodeCommit
resource aws_iam_role_policy codecommit_codepipeline_static_contents {
  role   = aws_iam_role.codepipeline_static_contents.name
  policy = data.aws_iam_policy_document.codecommit_static_contents.json
}

# Allow access to the S3 bucket for CodePipeline artifacts
resource aws_iam_role_policy s3_codepipeline_static_contents {
  role   = aws_iam_role.codepipeline_static_contents.name
  policy = data.aws_iam_policy_document.s3_codepipeline_static_contents.json
}

# Allow use of the KMS key for the S3 bucket used by CodePipeline
resource aws_iam_role_policy kms_codepipeline_static_contents {
  role   = aws_iam_role.codepipeline_static_contents.name
  policy = data.terraform_remote_state.shared.outputs.kms_policy_json
}

# Allow access to the deployment target S3 bucket
resource aws_iam_role_policy s3_static_contents {
  role   = aws_iam_role.codepipeline_static_contents.name
  policy = data.aws_iam_policy_document.s3_static_contents.json
}

resource aws_codepipeline static_contents {
  name     = "${var.app_name}-static-contents-${terraform.workspace}"
  role_arn = aws_iam_role.codepipeline_static_contents.arn

  artifact_store {
    location = aws_s3_bucket.codepipeline_static_contents.bucket
    type     = "S3"
    encryption_key {
      id   = data.terraform_remote_state.shared.outputs.kms_alias_arn
      type = "KMS"
    }
  }

  # Fetch the source from the static content CodeCommit repository and store it in the CodePipeline artifact S3 bucket
  stage {
    name = "${var.app_name}-static-contents-${terraform.workspace}-source"

    action {
      name             = "${var.app_name}-static-contents-${terraform.workspace}-source-action"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeCommit"
      version          = "1"
      output_artifacts = ["SOURCE"]

      configuration = {
        PollForSourceChanges = "false"
        RepositoryName       = data.terraform_remote_state.shared.outputs.codecommit_repository_static_contents.repository_name
        BranchName           = terraform.workspace
      }
    }
  }

  # Take the result of the stage above from the CodePipeline artifact S3 bucket and extract it into the deployment target S3 bucket
  stage {
    name = "${var.app_name}-static-contents-${terraform.workspace}-deploy"

    action {
      name            = "${var.app_name}-static-contents-${terraform.workspace}-deploy-action"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "S3"
      input_artifacts = ["SOURCE"]
      version         = "1"

      configuration = {
        BucketName = aws_s3_bucket.static_contents.id,
        Extract    = true,
      }
    }
  }
}

Verify

$ aws codepipeline start-pipeline-execution \
  --name terraform-sample-static-contents-stage \
  --profile terraform-sample
{
    "pipelineExecutionId": "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE"
}
$ aws codepipeline get-pipeline-execution \
  --pipeline-name terraform-sample-static-contents-stage \
  --pipeline-execution-id AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE \
  --query "pipelineExecution.status" \
  --profile terraform-sample
"Succeeded"

If it returns Succeeded, everything is working.
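As an extra sanity check (not part of the original flow), you can list the deployment bucket and confirm the files arrived:

$ aws s3 ls s3://terraform-sample-static-contents-stage/ \
  --profile terraform-sample
YYYY-MM-DD HH:mm:ss        XXX error.html
YYYY-MM-DD HH:mm:ss        XXX favicon.png
YYYY-MM-DD HH:mm:ss        XXX index.html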

CloudWatch Events

terraform-aws-静的コンテンツの自動デプロイ-CloudWatch.png
data aws_iam_policy_document cloudwatch_assume_role {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      identifiers = [
        "codepipeline.amazonaws.com",
        "events.amazonaws.com"
      ]
      type = "Service"
    }
  }
}

output cloudwatch_assume_role_policy_json {
  value = data.aws_iam_policy_document.cloudwatch_assume_role.json
}
resource aws_iam_role codepipeline_static_contents_cloudwatch {
  name               = "${var.app_name}-codepipeline-static-contents-cloudwatch-${terraform.workspace}"
  assume_role_policy = data.terraform_remote_state.shared.outputs.cloudwatch_assume_role_policy_json
}

# Declared here because resources references the CodePipeline created above
data aws_iam_policy_document codepipeline_static_contents_cloudwatch {
  statement {
    effect  = "Allow"
    actions = ["codepipeline:StartPipelineExecution"]

    resources = [aws_codepipeline.static_contents.arn]
  }
}

resource aws_iam_role_policy codepipeline_static_contents_cloudwatch {
  role   = aws_iam_role.codepipeline_static_contents_cloudwatch.name
  policy = data.aws_iam_policy_document.codepipeline_static_contents_cloudwatch.json
}

resource aws_cloudwatch_event_rule codepipeline_static_contents {
  name          = "${var.app_name}-codepipeline-static-contents-${terraform.workspace}"
  # Fires when a change occurs on the target branch (var.static_contents_target_branch) of the static content repository (var.static_contents_repository_arn)
  event_pattern = <<PATTERN
  {
    "source": [
      "aws.codecommit"
    ],
    "detail-type": [
      "CodeCommit Repository State Change"
    ],
    "resources": [
      "${data.terraform_remote_state.shared.outputs.codecommit_repository_static_contents.arn}"
    ],
    "detail": {
      "event": [
        "referenceCreated",
        "referenceUpdated"
      ],
      "referenceType": [
        "branch"
      ],
      "referenceName": [
        "${terraform.workspace}"
      ]
    }
  }
PATTERN
}

resource aws_cloudwatch_event_target codepipeline_static_contents {
  rule      = aws_cloudwatch_event_rule.codepipeline_static_contents.name
  target_id = "${var.app_name}-codepipeline-static-contents-${terraform.workspace}"
  arn       = aws_codepipeline.static_contents.arn
  role_arn  = aws_iam_role.codepipeline_static_contents_cloudwatch.arn
}

Verify

    1. Push a change to the branch specified by var.static_contents_target_branch (here, the branch matching the workspace name).
    2. Check the result with the following command.
$ aws codepipeline list-pipeline-executions \
  --pipeline-name terraform-sample-static-contents-stage \
  --profile terraform-sample
{
    "pipelineExecutionSummaries": [
        {
            "pipelineExecutionId": "YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY",
            "status": "Succeeded",
            "startTime": BBBBBBBBBB.BBB,
            "lastUpdateTime": BBBBBBBBBB.BBB,
            "sourceRevisions": [
                {
                    "actionName": "terraform-sample-static-contents-stage-source-action",
                    "revisionId": "BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB",
                    "revisionSummary": "Test commit",
                    "revisionUrl": "https://ap-northeast-1.console.aws.amazon.com/codecommit/home#/repository/terraform-sample-static-contents/commit/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"
                }
            ],
            "trigger": {
                "triggerType": "CloudWatchEvent",
                "triggerDetail": "arn:aws:events:ap-northeast-1:338927112236:rule/terraform-sample-codepipeline-static-contents-stage"
            }
        },
        {
            "pipelineExecutionId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
            "status": "Succeeded",
            "startTime": AAAAAAAAAA.AAA,
            "lastUpdateTime": AAAAAAAAAA.AAA,
            "sourceRevisions": [
                {
                    "actionName": "terraform-sample-static-contents-stage-source-action",
                    "revisionId": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
                    "revisionSummary": "Initial commit",
                    "revisionUrl": "https://ap-northeast-1.console.aws.amazon.com/codecommit/home#/repository/terraform-sample-static-contents/commit/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
                }
            ],
            "trigger": {
                "triggerType": "StartPipelineExecution",
                "triggerDetail": "arn:aws:iam::XXXXXXXXXXXX:user/terraform-sample"
            }
        }
    ]
}

If the execution whose "triggerType" is "CloudWatchEvent" has status "Succeeded", everything is working.

Serving the static content as a website

terraform-aws-静的コンテンツのWebサイト化.png

Turn the S3 bucket into a website.

resource aws_s3_bucket static_contents {
  bucket = "${var.app_name}-static-contents-${terraform.workspace}"
  acl    = "private"

+ website {
+   index_document = "index.html"
+   error_document = "error.html"
+ }

  tags = {
    Name = "${var.app_name}-static-contents-${terraform.workspace}"
  }
}
#-- 中略 --#

CloudFront

Domain settings

#-- 省略 --#
user_domain = "app.example.com"
admin_domain = "admin.app.example.com"

Certificate for the domain

If you manage your own domain on AWS (i.e. there is a hosted zone in Route 53)

Fetch the hosted zone information with Terraform.

data aws_route53_zone route53-zone {
  name         = "example.com."
  private_zone = false
}

output route53-zone {
  value = data.aws_route53_zone.route53-zone
}

When creating the certificate with ACM, it must be registered in the ***N. Virginia (us-east-1)*** region. Since the certificate resource itself cannot specify a region, we use a provider alias and the provider meta-argument to point it at that region.

#-- 中略 --#
provider "aws" {
  alias  = "virginia"
  region = "us-east-1"
}
resource aws_acm_certificate user_cert {
  domain_name       = "${terraform.workspace == "production" ? "" : terraform.workspace}${var.user_domain}"
  validation_method = "DNS"
  provider          = aws.virginia
}

resource aws_route53_record user_cert_validation {
  zone_id = data.terraform_remote_state.shared.outputs.route53-zone.zone_id
  name    = tolist(aws_acm_certificate.user_cert.domain_validation_options)[0].resource_record_name
  type    = tolist(aws_acm_certificate.user_cert.domain_validation_options)[0].resource_record_type
  records = [tolist(aws_acm_certificate.user_cert.domain_validation_options)[0].resource_record_value]
  ttl     = 60
}

resource aws_acm_certificate_validation user_cert {
  certificate_arn         = aws_acm_certificate.user_cert.arn
  validation_record_fqdns = [aws_route53_record.user_cert_validation.fqdn]
  provider                = aws.virginia
}

resource aws_acm_certificate admin_cert {
  domain_name       = "${terraform.workspace == "production" ? "" : terraform.workspace}${var.admin_domain}"
  validation_method = "DNS"
  provider          = aws.virginia
}

resource aws_route53_record admin_cert_validation {
  zone_id = data.terraform_remote_state.shared.outputs.route53-zone.zone_id
  name    = tolist(aws_acm_certificate.admin_cert.domain_validation_options)[0].resource_record_name
  type    = tolist(aws_acm_certificate.admin_cert.domain_validation_options)[0].resource_record_type
  records = [tolist(aws_acm_certificate.admin_cert.domain_validation_options)[0].resource_record_value]
  ttl     = 60
}

resource aws_acm_certificate_validation admin_cert {
  certificate_arn         = aws_acm_certificate.admin_cert.arn
  validation_record_fqdns = [aws_route53_record.admin_cert_validation.fqdn]
  provider                = aws.virginia
}
If you do not manage your own domain on AWS (create a self-signed certificate)

Following a Qiita guide on importing a self-signed certificate into AWS Certificate Manager, create the certificate. Since HTTPS access actually goes through the automatically assigned XXXXXXXXXXXXXXXXXX.cloudfront.net domain, any valid-looking domain name is fine for this verification setup.

$ mkdir certs
$ cd certs

# Root certificate
certs$ openssl genrsa -out root.key -des3 2048
Enter pass phrase for root.key: passphrase
Verifying - Enter pass phrase for root.key: passphrase
certs$ openssl req -new -x509 -key root.key -sha256 -days 3650 -out root.pem -subj "/C=JP/ST=Tokyo/O=example corp./CN=example root 2020"
Enter pass phrase for root.key: passphrase

# Intermediate CA certificate
certs$ openssl genrsa -out intermediate-ca.key -des3 2048
Enter pass phrase for intermediate-ca.key: passphrase
Verifying - Enter pass phrase for intermediate-ca.key: passphrase
certs$ openssl req -new -key intermediate-ca.key -sha256 -outform PEM -keyform PEM -out intermediate-ca.csr -subj "/C=JP/ST=Tokyo/O=example corp./CN=example Inter CA 2020"
Enter pass phrase for intermediate-ca.key: passphrase
certs$ cat - << EOS >> openssl-sign-intermediate-ca.conf
[ v3_ca ]
basicConstraints = CA:true, pathlen:0
keyUsage = cRLSign, keyCertSign
nsCertType = sslCA, emailCA
EOS
certs$ openssl x509 -extfile openssl-sign-intermediate-ca.conf -req -in intermediate-ca.csr -sha256 -CA root.pem -CAkey root.key -set_serial 01 -extensions v3_ca -days 3650 -out intermediate-ca.pem
Enter pass phrase for root.key: passphrase

# Server certificate
certs$ openssl genrsa 2048 > server.key
certs$ openssl req -new -key server.key -outform PEM -keyform PEM -sha256 -out server.csr -subj "/C=JP/ST=Tokyo/O=example corp./CN=vpn.example.com"
certs$ openssl x509 -req -in server.csr -sha256 -CA intermediate-ca.pem -CAkey intermediate-ca.key -set_serial 01 -days 3650 -out server.pem
Enter pass phrase for intermediate-ca.key: passphrase

Register it with the AWS CLI

Import it with the AWS CLI, following the AWS Certificate Manager documentation on importing certificates.
Here too, the certificate must be registered in the ***N. Virginia (us-east-1)*** region.

$ aws acm import-certificate \
  --profile terraform-sample \
  --region us-east-1 \
  --certificate fileb://server.pem \
  --certificate-chain fileb://intermediate-ca.pem \
  --private-key fileb://server.key
{
    "CertificateArn": "arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/abcdefghijklmnopqrstuvwxyz"
}
Set it as a Terraform variable
#-- 中略 --#
acm_certificate_arn = "arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/abcdefghijklmnopqrstuvwxyz"

Origin Access Identity

resource aws_cloudfront_origin_access_identity oai {
  comment = var.app_name
}

output cloudfront_origin_access_identity {
  value = aws_cloudfront_origin_access_identity.oai
}

S3 bucket for logs

variable "app_name" {}
variable "cloudfront_origin_access_identity_iam_arn" {}

resource "aws_s3_bucket" "logs" {
  bucket = "${var.app_name}-logs"
  acl    = "private"

  tags = {
    Name = "${var.app_name}-logs"
  }
}

output "logs" {
  value = aws_s3_bucket.logs
}

data "aws_iam_policy_document" "s3_logs" {
  statement {
    effect    = "Allow"
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.logs.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [var.cloudfront_origin_access_identity_iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "logs" {
  bucket = aws_s3_bucket.logs.id
  policy = data.aws_iam_policy_document.s3_logs.json
}
#-- 中略 --#
module "s3" {
  source                                    = "../modules/s3"
  app_name                                  = var.app_name
  cloudfront_origin_access_identity_iam_arn = module.cloudfront.origin_access_identity.iam_arn
}

Admin distribution

resource "aws_cloudfront_distribution" "admin" {
  enabled             = true
  comment             = var.admin_domain
  default_root_object = "index.html"

  origin {
    origin_id   = "s3-${var.admin_domain}"
    domain_name = aws_s3_bucket.static_contents.bucket_domain_name

    s3_origin_config {
      origin_access_identity = module.cloudfront.origin_access_identity.cloudfront_access_identity_path
    }
  }

  default_cache_behavior {
    target_origin_id       = "s3-${var.admin_domain}"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
    default_ttl            = 3600
    min_ttl                = 0
    max_ttl                = 86400

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn      = var.acm_certificate_arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1"
  }

  logging_config {
    bucket          = module.s3.logs.bucket_domain_name
    prefix          = "${terraform.workspace}/cloudfront/admin"
    include_cookies = false
  }

  tags = {
    Name = var.admin_domain
  }
}

Allow CloudFront to access the static content S3 bucket.

#-- 中略 --#
data "aws_iam_policy_document" "cloudfront_s3_static_contents" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.static_contents.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [module.cloudfront.origin_access_identity.iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "static_contents" {
  bucket = aws_s3_bucket.static_contents.id
  policy = data.aws_iam_policy_document.cloudfront_s3_static_contents.json
}

Application environment

VPC

#-- 中略 --#
vpc_cidr_block = "10.1.0.0/16"
#-- 中略 --#
variable "vpc_cidr_block" {}
variable "app_name" {}
variable "vpc_cidr_block" {}

resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr_block
  instance_tenancy     = "default"
  enable_dns_support   = "true"
  enable_dns_hostnames = "true"

  tags = {
    Name = var.app_name
  }
}

output "id" {
  value = aws_vpc.vpc.id
}
#-- 中略 --#
module "vpc" {
  source         = "../modules/vpc"
  app_name       = var.app_name
  vpc_cidr_block = var.vpc_cidr_block
}

Jump (bastion) server

Create a jump server used to access the application servers and the database.

Subnet

Create the resources needed for public access from the internet as well.

#-- 中略 --#
availability_zone_a = "ap-northeast-1a"

public_cidr_block_a = "10.1.1.0/24"
#-- 中略 --#
variable "availability_zone_a" {}

variable "public_cidr_block_a" {}
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = var.app_name
  }
}
variable "app_name" {}
variable "vpc_id" {}
variable "public_cidr_block_a" {}
variable "availability_zone_a" {}

resource "aws_subnet" "public_a" {
  vpc_id            = var.vpc_id
  cidr_block        = var.public_cidr_block_a
  availability_zone = var.availability_zone_a

  tags = {
    Name = "${var.app_name}-public-a"
  }
}

output "public_a_id" {
  value = aws_subnet.public_a.id
}
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = "${var.app_name}-public"
  }
}

resource "aws_route_table_association" "public_a" {
  subnet_id      = aws_subnet.public_a.id
  route_table_id = aws_route_table.public.id
}
#-- 中略 --#
module "subnet" {
  source              = "../modules/subnet"
  app_name            = var.app_name
  vpc_id              = module.vpc.id
  availability_zone_a = var.availability_zone_a
  public_cidr_block_a = var.public_cidr_block_a
}

Security group

Open port 22 so the jump server can be reached over SSH.
You can also restrict SSH access to a fixed IP so that it is only reachable from home or the office.

variable "app_name" {}
variable "vpc_id" {}

resource "aws_security_group" "jump" {
  name   = "${var.app_name}-jump"
  vpc_id = var.vpc_id

  tags = {
    Name = "${var.app_name}-jump"
  }
}

resource "aws_security_group_rule" "jump_ssh" {
  security_group_id = aws_security_group.jump.id
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # specify your fixed IP here if you want to restrict access
}

output "jump_id" {
  value = aws_security_group.jump.id
}
#-- 中略 --#
module "security_group" {
  source   = "../modules/security_group"
  app_name = var.app_name
  vpc_id   = module.vpc.id
}

Optional: if you change the SSH port, as described in the Qiita article on hardening SSH on Amazon Linux 2, make sure to update Terraform as well (for example, change it to 51921). A server-side sketch follows after the diff.

#-- 省略 --#
resource "aws_security_group_rule" "jump_ssh" {
  security_group_id = aws_security_group.jump.id
  type              = "ingress"
- from_port         = 22
+ from_port         = 51921
- to_port           = 22
+ to_port           = 51921
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}
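On the instance itself, the port change would look roughly like this (a sketch, run with sudo; the full hardening steps are in the referenced article):

# change "Port 22" to "Port 51921" in /etc/ssh/sshd_config
$ sudo sed -i 's/^#\?Port 22$/Port 51921/' /etc/ssh/sshd_config
$ sudo sshd -t                  # validate the configuration
$ sudo systemctl restart sshd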

EC2

Get the latest Amazon Linux 2 AMI ID

Following the approach in a DevelopersIO article, get the latest Amazon Linux 2 AMI ID and use it when creating the EC2 instance.

$ aws ssm get-parameter \
  --name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
  --region ap-northeast-1 \
  --query "Parameter.Value" \
  --profile terraform-sample
"ami-0064e711cbc7a825e"
Create a key pair for accessing the EC2 instance

To avoid accidental deletion, the key pair is created outside of Terraform management, following the AWS documentation on creating, displaying, and deleting Amazon EC2 key pairs.

$ aws ec2 create-key-pair \
  --key-name terraform-sample-jump \
  --query 'KeyMaterial' \
  --output text \
  --profile terraform-sample > certs/terraform-sample-jump.pem
$ chmod 400 certs/terraform-sample-jump.pem
EC2 instance
variable "app_name" {}
variable "availability_zone_a" {}
variable "jump_key_name" {}
variable "subnet_id" {}
variable "jump_security_group_id" {}

resource "aws_instance" "jump" {
  ami                    = "ami-011facbea5ec0363b" # latest AMI ID
  instance_type          = "t2.micro"
  availability_zone      = var.availability_zone_a
  key_name               = var.jump_key_name
  monitoring             = "false"
  subnet_id              = var.subnet_id
  vpc_security_group_ids = [var.jump_security_group_id]

  tags = {
    Name = "${var.app_name}-jump"
  }
}
#-- 中略 --#
module "ec2" {
  source                 = "../modules/ec2"
  app_name               = var.app_name
  availability_zone_a    = var.availability_zone_a
  jump_key_name          = var.jump_key_name
  subnet_id              = module.subnet.public_a_id
  jump_security_group_id = module.security_group.jump_id
}
Elastic IP
#-- 中略 --#
resource "aws_eip" "jump" {
  vpc = true

  tags = {
    Name = "${var.app_name}-jump"
  }
}

resource "aws_eip_association" "jump" {
  allocation_id = aws_eip.jump.id
  instance_id   = aws_instance.jump.id
}
Get the jump server's public IP address
$ aws ec2 describe-instances \
  --filter "Name=tag:Name,Values=terraform-sample-jump" \
  --query "Reservations[0].Instances[0].PublicIpAddress" \
  --profile terraform-sample
"XXX.XXX.XXX.XXX"
Verify the connection.

If you can connect as shown below, everything is set up correctly.

$ ssh -i certs/terraform-sample-jump.pem ec2-user@XXX.XXX.XXX.XXX
The authenticity of host 'XXX.XXX.XXX.XXX (XXX.XXX.XXX.XXX)' can't be established.
ECDSA key fingerprint is SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'XXX.XXX.XXX.XXX' (ECDSA) to the list of known hosts.

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-XXX-XXX-XXX-XXX ~]$ 

When asked whether to continue connecting, type yes.

Continuous integration with CodePipeline and CodeBuild

S3 bucket for CodePipeline artifacts

#-- 中略 --#
resource "aws_s3_bucket" "codepipeline_application_sources" {
  bucket = "${var.app_name}-codepipeline-application-sources-${terraform.workspace}"
  acl    = "private"

  tags = {
    Name = "${var.app_name}-codepipeline-application-sources-${terraform.workspace}"
  }
}

data "aws_iam_policy_document" "s3_codepipeline_application_sources" {
  statement {
    effect = "Allow"
    actions = [
      "s3:GetObject",
      "s3:GetObjectVersion",
      "s3:GetBucketVersioning",
      "s3:PutObject"
    ]

    resources = [
      aws_s3_bucket.codepipeline_application_sources.arn,
      "${aws_s3_bucket.codepipeline_application_sources.arn}/*"
    ]
  }
}

CodeCommit access permissions

#-- 中略 --#
variable "application_sources_repository_arn" {}

data "aws_iam_policy_document" "codecommit_application_sources" {
  statement {
    effect = "Allow"
    actions = [
      "codecommit:GitPull",
      "codecommit:GetBranch",
      "codecommit:GetCommit",
      "codecommit:UploadArchive",
      "codecommit:GetUploadArchiveStatus"
    ]

    resources = [
      var.application_sources_repository_arn,
      "${var.application_sources_repository_arn}/*"
    ]
  }
}

output "codecommit_application_sources_policy_json" {
  value = data.aws_iam_policy_document.codecommit_application_sources.json
}
#-- 中略 --#
module "iam" {
  source                             = "../modules/iam"
  static_contents_repository_arn     = var.static_contents_repository_arn
+ application_sources_repository_arn = var.application_sources_repository_arn
}
#-- 中略 --#

CodeBuild

variable "app_name" {}

data "aws_iam_policy_document" "codebuild_assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      identifiers = ["codebuild.amazonaws.com"]
      type        = "Service"
    }
  }
}

output "codebuild_assume_role_policy_json" {
  value = data.aws_iam_policy_document.codebuild_assume_role.json
}

data "aws_iam_policy_document" "codebuild" {
  statement {
    effect = "Allow"
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]
    resources = ["*"]
  }

  statement {
    effect = "Allow"
    actions = [
      "codebuild:StopBuild",
      "ec2:*"
    ]

    resources = ["*"]
  }
}

output "codebuild_policy_json" {
  value = data.aws_iam_policy_document.codebuild.json
}
#-- 中略 --#
module "iam" {
  source                             = "../modules/iam"
  static_contents_repository_arn     = var.static_contents_repository_arn
  application_sources_repository_arn = var.application_sources_repository_arn
+ app_name                           = var.app_name
}
#-- 中略 --#
resource "aws_iam_role" "codebuild_admin" {
  name               = "${var.app_name}-codebuild-admin"
  assume_role_policy = module.iam.codebuild_assume_role_policy_json
}

resource "aws_iam_role_policy" "codebuild_admin" {
  role   = aws_iam_role.codebuild_admin.name
  policy = module.iam.codebuild_policy_json
}

resource "aws_iam_role_policy" "s3_codebuild_admin" {
  role   = aws_iam_role.codebuild_admin.name
  policy = data.aws_iam_policy_document.s3_codepipeline_application_sources.json
}

resource "aws_iam_role_policy" "kms_codebuild_admin" {
  role   = aws_iam_role.codebuild_admin.name
  policy = module.iam.kms_policy_json
}

resource "aws_codebuild_project" "admin" {
  name          = "${var.app_name}-admin-${terraform.workspace}"
  description   = "${var.app_name}-admin-${terraform.workspace}"
  build_timeout = "15"
  service_role  = aws_iam_role.codebuild_admin.arn

  artifacts {
    type = "NO_ARTIFACTS"
  }

  cache {
    type = "LOCAL"
    modes = [
      "LOCAL_SOURCE_CACHE",
      "LOCAL_CUSTOM_CACHE"
    ]
  }

  environment {
    compute_type                = "BUILD_GENERAL1_SMALL"
    image                       = "aws/codebuild/standard:2.0"
    type                        = "LINUX_CONTAINER"
    image_pull_credentials_type = "CODEBUILD"
    privileged_mode             = true
  }

  logs_config {
    cloudwatch_logs {
      status      = "ENABLED"
      group_name  = "${var.app_name}-admin-${terraform.workspace}"
      stream_name = "${var.app_name}-admin-${terraform.workspace}"
    }
  }

  source {
    type            = "CODECOMMIT"
    buildspec       = "buildspec_admin.yml"
    git_clone_depth = 1
    location        = var.application_sources_repository_name
  }

  tags = {
    Name = "${var.app_name}-admin-${terraform.workspace}"
  }
}

data "aws_iam_policy_document" "codebuild_admin" {
  statement {
    effect = "Allow"
    actions = [
      "codebuild:BatchGetBuilds",
      "codebuild:StartBuild"
    ]

    resources = [aws_codebuild_project.admin.arn]
  }
}
Build settings

Create this buildspec_admin.yml in the application CodeCommit repository.

version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
    commands:
      - echo Install started on `date`
    finally:
      - echo Install completed on `date`
  pre_build:
    commands:
      - echo PreBuild started on `date`
      - ./gradlew clean test --info
    finally:
      - echo PreBuild completed on `date`
  build:
    commands:
      - echo Build started on `date`
    finally:
      - echo Build completed on `date`
  post_build:
    commands:
      - echo PostBuild started on `date`
    finally:
      - echo PostBuild completed on `date`

Start CodeBuild builds from CodePipeline

#-- 中略 --#
variable "application_sources_target_branch" {
  default = "stage"
}
resource "aws_iam_role" "codepipeline_application_sources" {
  name               = "${var.app_name}-codepipeline-application-sources-${terraform.workspace}"
  assume_role_policy = module.iam.codepipeline_assume_role_policy_json
}

resource "aws_iam_role_policy" "codecommit_codepipeline_application_sources" {
  role   = aws_iam_role.codepipeline_application_sources.name
  policy = module.iam.codecommit_application_sources_policy_json
}

resource "aws_iam_role_policy" "s3_codepipeline_application_sources" {
  role   = aws_iam_role.codepipeline_application_sources.name
  policy = data.aws_iam_policy_document.s3_codepipeline_application_sources.json
}

resource "aws_iam_role_policy" "kms_codepipeline_application_sources" {
  role   = aws_iam_role.codepipeline_application_sources.name
  policy = module.iam.kms_policy_json
}

resource "aws_iam_role_policy" "codebuild_admin_codepipeline_application_sources" {
  role   = aws_iam_role.codepipeline_application_sources.name
  policy = data.aws_iam_policy_document.codebuild_admin.json
}

resource "aws_codepipeline" "application_sources" {
  name     = "${var.app_name}-application-sources-${terraform.workspace}"
  role_arn = aws_iam_role.codepipeline_application_sources.arn

  artifact_store {
    location = aws_s3_bucket.codepipeline_application_sources.bucket
    type     = "S3"
    encryption_key {
      id   = module.kms.alias_arn
      type = "KMS"
    }
  }

  stage {
    name = "${var.app_name}-application-sources-${terraform.workspace}-source"

    action {
      name             = "${var.app_name}-application-sources-${terraform.workspace}-source-action"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeCommit"
      version          = "1"
      output_artifacts = ["SOURCE"]

      configuration = {
        PollForSourceChanges = "false"
        RepositoryName       = var.application_sources_repository_name
        BranchName           = var.application_sources_target_branch
      }
    }
  }

  stage {
    name = "${var.app_name}-application-sources-${terraform.workspace}-build-admin"

    action {
      name             = "${var.app_name}-application-sources-${terraform.workspace}-build-admin-action"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      input_artifacts  = ["SOURCE"]
      output_artifacts = ["ADMIN_BUILD"]
      version          = "1"

      configuration = {
        ProjectName = aws_codebuild_project.admin.name
      }
    }
  }
}
Verify
$ aws codepipeline start-pipeline-execution \
  --name terraform-sample-application-sources-stage \
  --profile terraform-sample
{
    "pipelineExecutionId": "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE"
}
$ aws codepipeline get-pipeline-execution \
  --pipeline-name terraform-sample-application-sources-stage \
  --pipeline-execution-id AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE \
  --query "pipelineExecution.status" \
  --profile terraform-sample
"Succeeded"

If it returns Succeeded, everything is working.
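If the build fails, the logs go to the CloudWatch Logs group configured on the CodeBuild project; with AWS CLI v2 you can follow them like this:

$ aws logs tail terraform-sample-admin-stage --follow --profile terraform-sample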

Detect pushes to the target branch of the CodeCommit repository

resource "aws_iam_role" "codepipeline_application_sources_cloudwatch" {
  name               = "${var.app_name}-codepipeline-application-sources-cloudwatch-${terraform.workspace}"
  assume_role_policy = module.iam.cloudwatch_assume_role_policy_json
}

data "aws_iam_policy_document" "codepipeline_application_sources_cloudwatch" {
  statement {
    effect  = "Allow"
    actions = ["codepipeline:StartPipelineExecution"]

    resources = [aws_codepipeline.application_sources.arn]
  }
}

resource "aws_iam_role_policy" "codepipeline_application_sources_cloudwatch" {
  role   = aws_iam_role.codepipeline_application_sources_cloudwatch.name
  policy = data.aws_iam_policy_document.codepipeline_application_sources_cloudwatch.json
}

resource "aws_cloudwatch_event_rule" "codepipeline_application_sources" {
  name          = "${var.app_name}-codepipeline-application-sources-${terraform.workspace}"
  event_pattern = <<PATTERN
  {
    "source": [
      "aws.codecommit"
    ],
    "detail-type": [
      "CodeCommit Repository State Change"
    ],
    "resources": [
      "${var.application_sources_repository_arn}"
    ],
    "detail": {
      "event": [
        "referenceCreated",
        "referenceUpdated"
      ],
      "referenceType": [
        "branch"
      ],
      "referenceName": [
        "${var.application_sources_target_branch}"
      ]
    }
  }
PATTERN
}

resource "aws_cloudwatch_event_target" "codepipeline_application_sources" {
  rule      = aws_cloudwatch_event_rule.codepipeline_application_sources.name
  target_id = "${var.app_name}-codepipeline-application-sources-${terraform.workspace}"
  arn       = aws_codepipeline.application_sources.arn
  role_arn  = aws_iam_role.codepipeline_application_sources_cloudwatch.arn
}
Verify

Push a change to the branch specified by var.application_sources_target_branch, then check it with the following command.
$ aws codepipeline list-pipeline-executions \
  --pipeline-name terraform-sample-application-sources-stage \
  --profile terraform-sample
{
    "pipelineExecutionSummaries": [
        {
            "pipelineExecutionId": "YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY",
            "status": "Succeeded",
            "startTime": BBBBBBBBBB.BBB,
            "lastUpdateTime": BBBBBBBBBB.BBB,
            "sourceRevisions": [
                {
                    "actionName": "terraform-sample-application-sources-stage-source-action",
                    "revisionId": "BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB",
                    "revisionSummary": "Test commit",
                    "revisionUrl": "https://ap-northeast-1.console.aws.amazon.com/codecommit/home#/repository/terraform-sample-application-sources/commit/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"
                }
            ],
            "trigger": {
                "triggerType": "CloudWatchEvent",
                "triggerDetail": "arn:aws:events:ap-northeast-1:338927112236:rule/terraform-sample-codepipeline-application-sources-stage"
            }
        },
        {
            "pipelineExecutionId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
            "status": "Succeeded",
            "startTime": AAAAAAAAAA.AAA,
            "lastUpdateTime": AAAAAAAAAA.AAA,
            "sourceRevisions": [
                {
                    "actionName": "terraform-sample-application-sources-stage-source-action",
                    "revisionId": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
                    "revisionSummary": "Initial commit",
                    "revisionUrl": "https://ap-northeast-1.console.aws.amazon.com/codecommit/home#/repository/terraform-sample-static-contents/commit/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
                }
            ],
            "trigger": {
                "triggerType": "StartPipelineExecution",
                "triggerDetail": "arn:aws:iam::XXXXXXXXXXXX:user/terraform-sample"
            }
        }
    ]
}

If the execution whose triggerType is "CloudWatchEvent" has status "Succeeded", everything is working.

Application server

Subnets

#-- 中略 --#
availability_zone_c  = "ap-northeast-1c"
private_cidr_block_a = "10.1.2.0/24"
private_cidr_block_c = "10.1.3.0/24"
#-- 中略 --#
variable "availability_zone_c" {}

variable "private_cidr_block_a" {}
variable "private_cidr_block_c" {}
variable "availability_zone_c" {}
variable "private_cidr_block_a" {}
variable "private_cidr_block_c" {}

resource "aws_subnet" "private_a" {
  vpc_id            = var.vpc_id
  cidr_block        = var.private_cidr_block_a
  availability_zone = var.availability_zone_a

  tags = {
    Name = "${var.app_name}-private-a"
  }
}

output "private_a_id" {
  value = aws_subnet.private_a.id
}

resource "aws_subnet" "private_c" {
  vpc_id            = var.vpc_id
  cidr_block        = var.private_cidr_block_c
  availability_zone = var.availability_zone_c

  tags = {
    Name = "${var.app_name}-private-c"
  }
}

output "private_c_id" {
  value = aws_subnet.private_c.id
}
#-- 中略 --#
module "subnet" {
  source               = "../modules/subnet"
  app_name             = var.app_name
  vpc_id               = module.vpc.id
  availability_zone_a  = var.availability_zone_a
+ availability_zone_c  = var.availability_zone_c
  public_cidr_block_a  = var.public_cidr_block_a
+ private_cidr_block_a = var.private_cidr_block_a
+ private_cidr_block_c = var.private_cidr_block_c
}
#-- 中略 --#

EC2

Create a key pair for accessing the EC2 instances
$ aws ec2 create-key-pair \
  --key-name terraform-sample-stage \
  --query 'KeyMaterial' \
  --output text \
  --profile terraform-sample > certs/terraform-sample-stage.pem
$ chmod 400 certs/terraform-sample-stage.pem
Copy it to the jump server and delete the local copy
$ ssh -i certs/terraform-sample-jump.pem ec2-user@XXX.XXX.XXX.XXX "mkdir ~/.ssh/pem"
$ scp -i certs/terraform-sample-jump.pem certs/terraform-sample-stage.pem ec2-user@XXX.XXX.XXX.XXX:~/.ssh/pem
$ ssh -i certs/terraform-sample-jump.pem ec2-user@XXX.XXX.XXX.XXX "chmod 600 ~/.ssh/pem/terraform-sample-stage.pem"
$ rm -f certs/terraform-sample-stage.pem
#-- 中略 --#
variable "key_name" {
  default = "terraform-sample-stage"
}
Security group

Create a security group that only allows SSH access from the jump server.
Also add an egress rule so the jump server can SSH to the stage instances.

#-- 中略 --#
variable "vpc_cidr_block" {}

resource "aws_security_group_rule" "jump_ssh_out" {
  security_group_id = aws_security_group.jump.id
  type              = "egress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = [var.vpc_cidr_block]
}

resource "aws_security_group" "from_jump" {
  name   = "${var.app_name}-from-jump"
  vpc_id = var.vpc_id

  tags = {
    Name = "${var.app_name}-from-jump"
  }
}

resource "aws_security_group_rule" "from_jump_ssh" {
  security_group_id        = aws_security_group.from_jump.id
  type                     = "ingress"
  from_port                = 22
  to_port                  = 22
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.jump.id
}

output "from_jump_id" {
  value = aws_security_group.from_jump.id
}
#-- 中略 --#
module "security_group" {
  source         = "../modules/security_group"
  app_name       = var.app_name
  vpc_id         = module.vpc.id
+ vpc_cidr_block = var.vpc_cidr_block
}
#-- 中略 --#
EC2 instance

Later, this instance will be used as the template for the Application Load Balancer's Auto Scaling group by creating an AMI from it.

resource "aws_instance" "admin" {
  ami                    = "ami-011facbea5ec0363b"
  instance_type          = "t2.small"
  availability_zone      = var.availability_zone_a
  key_name               = var.key_name
  monitoring             = "false"
  subnet_id              = module.subnet.private_a_id
  vpc_security_group_ids = [module.security_group.from_jump_id]

  tags = {
    Name = "${var.app_name}-admin-${terraform.workspace}"
  }
}
Get the stage server's private IP address
$ aws ec2 describe-instances \
  --filter "Name=tag:Name,Values=terraform-sample-admin-stage" \
  --query "Reservations[0].Instances[0].PrivateIpAddress" \
  --profile terraform-sample
"YYY.YYY.YYY.YYY"
Verify the connection

If you can connect as shown below, everything is set up correctly.

$ ssh -i certs/terraform-sample-stage.pem ec2-user@XXX.XXX.XXX.XXX
[ec2-user@ip-XXX-XXX-XXX-XXX ~]$ ssh -i ~/.ssh/pem/terraform-sample-stage.pem ec2-user@YYY.YYY.YYY.YYY
The authenticity of host 'YYY.YYY.YYY.YYY (YYY.YYY.YYY.YYY)' can't be established.
ECDSA key fingerprint is SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXX
ECDSA key fingerprint is MD5:XXXXXXXXXXXXXXXXXXXXXXXXXX
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'YYY.YYY.YYY.YYY' (ECDSA) to the list of known hosts.

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-YYY-YYY-YYY-YYY ~]$

When asked whether to continue connecting, type yes.

CodeDeploy agent

To enable automatic deployment, set up CodeDeploy by installing the agent with yum and wget.

Give the application servers access to the internet

Put a NAT gateway in the public subnet and make it the default route of the private subnets, so that EC2 instances in the private subnets can reach the internet.
References:
Building a public subnet with Terraform · mzumi's blog
Building a private subnet with Terraform · mzumi's blog

#-- 中略 --#
resource "aws_eip" "nat" {
  vpc = true

  tags = {
    Name = "${var.app_name}-nat"
  }
}

resource "aws_nat_gateway" "ngw" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id

  tags = {
    Name = var.app_name
  }
}
#-- 中略 --#
resource "aws_route_table" "private" {
  vpc_id = var.vpc_id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.ngw.id
  }

  tags = {
    Name = "${var.app_name}-private"
  }
}

resource "aws_route_table_association" "private_a" {
  subnet_id      = aws_subnet.private_a.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "private_c" {
  subnet_id      = aws_subnet.private_c.id
  route_table_id = aws_route_table.private.id
}
Create a security group with egress rules for yum and wget

Allow the EC2 instance to send HTTP, HTTPS, and DNS requests to the internet.

resource "aws_security_group" "allow_internet" {
  name   = "${var.app_name}-allow-internet"
  vpc_id = var.vpc_id

  tags = {
    Name = "${var.app_name}-allow-internet"
  }
}

resource "aws_security_group_rule" "allow_internet_http" {
  security_group_id = aws_security_group.allow_internet.id
  type              = "egress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "allow_internet_https" {
  security_group_id = aws_security_group.allow_internet.id
  type              = "egress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "allow_internet_dns" {
  security_group_id = aws_security_group.allow_internet.id
  type              = "egress"
  from_port         = 53
  to_port           = 53
  protocol          = "udp"
  cidr_blocks       = ["0.0.0.0/0"]
}

output "allow_internet_id" {
  value = aws_security_group.allow_internet.id
}
Attach the security group to the EC2 instance
resource "aws_instance" "admin" {
  ami                    = "ami-011facbea5ec0363b"
  instance_type          = "t2.small"
  availability_zone      = var.availability_zone_a
  key_name               = var.key_name
  monitoring             = "false"
  subnet_id              = module.subnet.private_a_id
- vpc_security_group_ids = [module.security_group.from_jump_id]
+ vpc_security_group_ids = [module.security_group.from_jump_id, module.security_group.allow_internet_id]

  tags = {
    Name = "${var.app_name}-admin-${terraform.workspace}"
  }
}
Update yum and install the required packages

Run as root.

[root@ip-YYY-YYY-YYY-YYY ~]# yum upgrade -y
[root@ip-YYY-YYY-YYY-YYY ~]# yum install -y wget git ruby
[root@ip-YYY-YYY-YYY-YYY ~]# amazon-linux-extras install -y java-openjdk11

If the output ends with "Complete!" (or "完了!" in a Japanese locale), it succeeded.

Install the CodeDeploy agent
[root@ip-YYY-YYY-YYY-YYY ~]# wget https://aws-codedeploy-ap-northeast-1.s3.amazonaws.com/latest/install
[root@ip-YYY-YYY-YYY-YYY ~]# chmod +x ./install
[root@ip-YYY-YYY-YYY-YYY ~]# ./install auto

If it prints "complete!" (or "完了!"), the installation succeeded.
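You can also confirm that the agent is running; the installer registers it as the codedeploy-agent service, so the status should show active (running):

[root@ip-YYY-YYY-YYY-YYY ~]# systemctl status codedeploy-agent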

CodeDeploy

Register the Spring Boot application as a systemd service

Following a Qiita article on running a Spring Boot application as a service, register /var/lib/springboot/boot.jar (the executable jar deployed by CodeDeploy) as a systemd service.

[root@ip-YYY-YYY-YYY-YYY ~]# adduser application # a group with the same name is created as well
[root@ip-YYY-YYY-YYY-YYY ~]# id application
uid=1001(application) gid=1001(application) groups=1001(application)
[root@ip-YYY-YYY-YYY-YYY ~]# mkdir /var/lib/springboot/
[root@ip-YYY-YYY-YYY-YYY ~]# mkdir /var/lib/springboot/logs
[root@ip-YYY-YYY-YYY-YYY ~]# chown -R application:application /var/lib/springboot/
[root@ip-YYY-YYY-YYY-YYY ~]# cat - << EOS >> /var/lib/springboot/boot.conf
export JAVA_OPTS="-Dspring.profiles.active=stage"
export LANG="ja_JP.utf8"
EOS
[root@ip-YYY-YYY-YYY-YYY ~]# cat - << EOS >> /etc/systemd/system/springboot.service
[Unit]
Description = springboot application
[Service]
ExecStart = /bin/sh -c 'java -jar /var/lib/springboot/boot.jar &>> /var/lib/springboot/logs/stage.log'
Restart = always
Type = simple
User = application
Group = application
SuccessExitStatus = 143
[Install]
WantedBy = multi-user.target
EOS
[root@ip-YYY-YYY-YYY-YYY ~]# systemctl enable springboot.service
Scripts

Create these in the application CodeCommit repository; add commands as needed for your environment.
The four hook scripts under data/script/ are shown together below.

# data/script/application_stop.sh
echo application stop

# data/script/before_install.sh
echo before install
systemctl stop springboot.service
rm -f /var/lib/springboot/boot.jar

# data/script/after_install.sh
echo after install

# data/script/application_start.sh
echo application start
systemctl start springboot.service
Configuration file (appspec.yml)

Create this in the application CodeCommit repository.

version: 0.0
os: linux
files:
  - source: /boot.jar
    destination: /var/lib/springboot/
hooks:
  ApplicationStop:
    - location: /application_stop.sh
      timeout: 300
      runas: root
  BeforeInstall:
    - location: /before_install.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: /after_install.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: /application_start.sh
      timeout: 300
      runas: root
Save the build output to S3 from CodeBuild
#-- 省略 --#
  build:
    commands:
      - echo Build started on `date`
+     - ./gradlew admin:build -x test
    finally:
      - echo Build completed on `date`
  post_build:
    commands:
      - echo PostBuild started on `date`
+     - cp -p admin/build/libs/admin-0.0.1-SNAPSHOT.jar boot.jar
    finally:
      - echo PostBuild completed on `date`
+artifacts:
+ files:
+   - 'boot.jar'
+   - 'appspec.yml'
+   - 'data/script/*'
+ discard-paths: yes
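
For context, on the Terraform side the CodeBuild project hands these artifacts to CodePipeline, which stores them in the pipeline's S3 artifact bucket. The project itself was defined earlier in this article and is omitted here; the sketch below only illustrates where the artifacts declared above end up (the role reference and build image are placeholders):

resource "aws_codebuild_project" "admin" {
  name         = "${var.app_name}-admin-${terraform.workspace}"
  service_role = aws_iam_role.codebuild.arn # placeholder role reference

  artifacts {
    type = "CODEPIPELINE" # boot.jar, appspec.yml and the scripts land in the pipeline's S3 bucket
  }

  source {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
    type         = "LINUX_CONTAINER"
  }
}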
CodeDeploy application

Deployments target EC2 instances tagged with key Deploy and value ${var.app_name}-admin-${terraform.workspace}, so add that tag to the instance.

resource "aws_instance" "admin" {
  ami                    = "ami-011facbea5ec0363b"
  instance_type          = "t2.small"
  availability_zone      = var.availability_zone_a
  key_name               = var.key_name
  monitoring             = "false"
  subnet_id              = module.subnet.private_a_id
  vpc_security_group_ids = [module.security_group.from_jump_id, module.security_group.allow_internet_id]

  tags = {
    Name   = "${var.app_name}-admin-${terraform.workspace}"
+   Deploy = "${var.app_name}-admin-${terraform.workspace}"
  }
}
data "aws_iam_policy_document" "codedeploy_assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      identifiers = ["codedeploy.amazonaws.com"]
      type        = "Service"
    }
  }
}

resource "aws_iam_role" "codedeploy" {
  name               = var.app_name
  assume_role_policy = data.aws_iam_policy_document.codedeploy_assume_role.json
}

output "codedeploy_role_arn" {
  value = aws_iam_role.codedeploy.arn
}

data "aws_iam_policy_document" "codedeploy" {
  statement {
    effect = "Allow"
    actions = [
      "ec2:DescribeInstances",
      "ec2:DescribeInstanceStatus",
      "tag:GetTags",
      "tag:GetResources"
    ]
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "codedeploy" {
  role   = aws_iam_role.codedeploy.name
  policy = data.aws_iam_policy_document.codedeploy.json
}
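
Instead of maintaining this policy document by hand, the AWS managed policy AWSCodeDeployRole could be attached to the same role (a sketch; the article keeps the inline policy above):

resource "aws_iam_role_policy_attachment" "codedeploy_managed" {
  role       = aws_iam_role.codedeploy.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole"
}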
resource "aws_codedeploy_app" "admin" {
  name             = "${var.app_name}-admin-${terraform.workspace}"
  compute_platform = "Server"
}

resource "aws_codedeploy_deployment_group" "admin" {
  deployment_group_name  = "${var.app_name}-admin-${terraform.workspace}"
  app_name               = aws_codedeploy_app.admin.name
  deployment_config_name = "CodeDeployDefault.OneAtATime"
  service_role_arn       = module.iam.codedeploy_role_arn
  ec2_tag_filter {
    key   = "Deploy"
    value = "${var.app_name}-admin-${terraform.workspace}"
    type  = "KEY_AND_VALUE"
  }
}
Wire it into CodePipeline.
#-- 省略 --#
data "aws_iam_policy_document" "codedeploy_codepipeline" {
  statement {
    effect = "Allow"
    actions = [
      "codedeploy:CreateDeployment",
      "codedeploy:GetApplication",
      "codedeploy:GetApplicationRevision",
      "codedeploy:GetDeployment",
      "codedeploy:GetDeploymentConfig",
      "codedeploy:RegisterApplicationRevision"
    ]
    resources = ["*"]
  }
}

output "codedeploy_codepipeline_policy_json" {
  value = data.aws_iam_policy_document.codedeploy_codepipeline.json
}
#-- 省略 --#
resource "aws_iam_role_policy" "codedeploy_codepipeline_application_sources" {
  role   = aws_iam_role.codepipeline_application_sources.name
  policy = module.iam.codedeploy_codepipeline_policy_json
}
#-- 省略 --#
resource "aws_codepipeline" "application_sources" {
#-- 省略 --#
  stage {
    name = "${var.app_name}-application-sources-${terraform.workspace}-source"
#-- 省略 --#
  }

  stage {
    name = "${var.app_name}-application-sources-${terraform.workspace}-build-admin"
#-- 省略 --#
  }

  stage {
    name = "${var.app_name}-application-sources-${terraform.workspace}-deploy-admin"

    action {
      name            = "${var.app_name}-application-sources-${terraform.workspace}-deploy-admin-action"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "CodeDeploy"
      input_artifacts = ["ADMIN_BUILD"]
      version         = "1"

      configuration = {
        ApplicationName     = aws_codedeploy_app.admin.name,
        DeploymentGroupName = aws_codedeploy_deployment_group.admin.deployment_group_name
      }
    }
  }
}
EC2 permissions

The CodeDeploy agent installed on the EC2 instance cannot access the S3 bucket that holds the CodePipeline build artifacts, so grant it the required permissions.

data "aws_iam_policy_document" "ec2_assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      identifiers = ["ec2.amazonaws.com"]
      type        = "Service"
    }
  }
}

output "ec2_assume_role_policy_json" {
  value = data.aws_iam_policy_document.ec2_assume_role.json
}
#-- 省略 --#
data "aws_iam_policy_document" "s3_codepipeline_application_sources_codedeploy" {
  statement {
    effect = "Allow"
    actions = [
      "s3:Get*",
      "s3:List*"
    ]
    resources = [
      aws_s3_bucket.codepipeline_application_sources.arn,
      "${aws_s3_bucket.codepipeline_application_sources.arn}/*",
      "arn:aws:s3:::aws-codedeploy-us-east-2/*",
      "arn:aws:s3:::aws-codedeploy-us-east-1/*",
      "arn:aws:s3:::aws-codedeploy-us-west-1/*",
      "arn:aws:s3:::aws-codedeploy-us-west-2/*",
      "arn:aws:s3:::aws-codedeploy-ca-central-1/*",
      "arn:aws:s3:::aws-codedeploy-eu-west-1/*",
      "arn:aws:s3:::aws-codedeploy-eu-west-2/*",
      "arn:aws:s3:::aws-codedeploy-eu-west-3/*",
      "arn:aws:s3:::aws-codedeploy-eu-central-1/*",
      "arn:aws:s3:::aws-codedeploy-ap-east-1/*",
      "arn:aws:s3:::aws-codedeploy-ap-northeast-1/*",
      "arn:aws:s3:::aws-codedeploy-ap-northeast-2/*",
      "arn:aws:s3:::aws-codedeploy-ap-southeast-1/*",
      "arn:aws:s3:::aws-codedeploy-ap-southeast-2/*",
      "arn:aws:s3:::aws-codedeploy-ap-south-1/*",
      "arn:aws:s3:::aws-codedeploy-sa-east-1/*"
    ]
  }
}
resource "aws_iam_role" "ec2_codedeploy" {
  name               = "${var.app_name}-ec2-codedeploy-${terraform.workspace}"
  assume_role_policy = module.iam.ec2_assume_role_policy_json
}

resource "aws_iam_role_policy" "ec2_codedeploy" {
  role   = aws_iam_role.ec2_codedeploy.name
  policy = data.aws_iam_policy_document.s3_codepipeline_application_sources_codedeploy.json
}

resource "aws_iam_role_policy" "ec2_codedeploy_kms" {
  role   = aws_iam_role.ec2_codedeploy.name
  policy = module.iam.kms_policy_json
}

resource "aws_iam_instance_profile" "ec2_codedeploy" {
  name = "${var.app_name}-ec2-codedeploy-${terraform.workspace}"
  role = aws_iam_role.ec2_codedeploy.name
}
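
As an alternative to enumerating the regional aws-codedeploy-* buckets above, AWS ships a managed policy for exactly this agent use case. A sketch (not what the article does; note it grants S3 read access more broadly than the hand-written policy):

resource "aws_iam_role_policy_attachment" "ec2_codedeploy_managed" {
  role       = aws_iam_role.ec2_codedeploy.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy"
}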
resource "aws_instance" "admin" {
  ami                    = "ami-011facbea5ec0363b"
  instance_type          = "t2.small"
  availability_zone      = var.availability_zone_a
  key_name               = var.key_name
  monitoring             = "false"
  subnet_id              = module.subnet.private_a_id
  vpc_security_group_ids = [module.security_group.from_jump_id, module.security_group.allow_internet_id]
+ iam_instance_profile   = aws_iam_instance_profile.ec2_codedeploy.name

  tags = {
    Name   = "${var.app_name}-admin-${terraform.workspace}"
    Deploy = "${var.app_name}-admin-${terraform.workspace}"
  }
}
#-- 省略 --#
data "aws_iam_policy_document" "codepipeline_application_sources_bucket_policy" {
  statement {
    effect = "Allow"
    principals {
      identifiers = [aws_iam_role.ec2_codedeploy.arn]
      type        = "AWS"
    }
    actions = [
      "s3:Get*",
      "s3:List*"
    ]
    resources = [
      aws_s3_bucket.codepipeline_application_sources.arn,
      "${aws_s3_bucket.codepipeline_application_sources.arn}/*"
    ]
  }
}

resource "aws_s3_bucket_policy" "codepipeline-bucket" {
  bucket = aws_s3_bucket.codepipeline_application_sources.id
  policy = data.aws_iam_policy_document.codepipeline_application_sources_bucket_policy.json
}
Reboot

Because the IAM instance profile was attached to a running EC2 instance, reboot it so that the CodeDeploy agent picks up the new credentials.

$ aws ec2 describe-instances \
  --filter "Name=tag:Name,Values=terraform-sample-admin-stage" \
  --query "Reservations[0].Instances[0].InstanceId" \
  --profile terraform-sample
"i-XXXXXXXXXXXXXXXX"
$ aws ec2 reboot-instances \
  --instance-ids i-XXXXXXXXXXXXXXXX \
  --profile terraform-sample
Verify the deployment

    1. Push a change to the branch specified by var.static_contents_target_branch.

    2. Check the result with the following commands.
$ aws deploy list-deployments \
  --application-name terraform-sample-admin-stage \
  --deployment-group-name terraform-sample-admin-stage \
  --query "deployments[0]" \
  --profile terraform-sample
"d-XXXXXXXXX"
$ aws deploy get-deployment \
  --deployment-id d-XXXXXXXXX \
  --query "deploymentInfo.status" \
  --profile terraform-sample
"Succeeded"

If the status is Succeeded, all is well.
If it failed, check /var/log/aws/codedeploy-agent/codedeploy-agent.log or /var/log/aws/codedeploy-agent/codedeploy-agent.YYYYMMDD.log on the application server.

Check the status of the deployed Spring Boot application.

[root@ip-YYY-YYY-YYY-YYY ~]# systemctl status springboot.service 
● springboot.service - springboot application
   Loaded: loaded (/etc/systemd/system/springboot.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri YYYY-MM-DD HH:mm:SS UTC; XXs ago
 Main PID: XXXX (sh)
   CGroup: /system.slice/springboot.service
           ├─XXXX /bin/sh -c /var/lib/springboot/boot.jar &>> /var/lib/springboot/logs/stage.log
           ├─XXXX /bin/bash /var/lib/springboot/boot.jar
           └─XXXX /usr/bin/java -Dsun.misc.URLClassPath.disableJarChecking=true -Dspring.profiles.active=stage -jar /var/lib/springboot/boot.jar

If the status is active (running), it is working.
If it fails with java.net.SocketException: Permission denied, try the following commands.

[root@ip-YYY-YYY-YYY-YYY ~]# echo 'net.ipv4.ip_unprivileged_port_start=0' >> /etc/sysctl.conf
[root@ip-YYY-YYY-YYY-YYY ~]# sysctl -p # apply the setting without a reboot

Verify access over HTTP.

[root@ip-YYY-YYY-YYY-YYY ~]# curl -o /dev/null -w '%{http_code}\n' -s http://localhost/login
200

A 200 response means it is working.

Allow HTTP (HTTPS) requests to the application server from outside

In this setup, HTTPS is terminated at CloudFront and the Application Load Balancer (ALB), so traffic between the ALB and the EC2 instance uses plain HTTP.
Since the ALB does not exist yet, we temporarily allow requests from the jump server.

#-- 中略 --#
resource "aws_security_group_rule" "jump_http_out" {
  security_group_id = aws_security_group.jump.id
  type              = "egress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = [var.vpc_cidr_block]
}
resource "aws_security_group" "application_server" {
  name   = "${var.app_name}-application-server"
  vpc_id = var.vpc_id

  tags = {
    Name = "${var.app_name}-application-server"
  }
}

resource "aws_security_group_rule" "application_server_http" {
  security_group_id        = aws_security_group.application_server.id
  type                     = "ingress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.jump.id
}

output "application_server_id" {
  value = aws_security_group.application_server.id
}
resource "aws_instance" "admin" {
  ami                    = "ami-011facbea5ec0363b"
  instance_type          = "t2.small"
  availability_zone      = var.availability_zone_a
  key_name               = var.key_name
  monitoring             = "false"
  subnet_id              = module.subnet.private_a_id
- vpc_security_group_ids = [module.security_group.from_jump_id, module.security_group.allow_internet_id]
+ vpc_security_group_ids = [module.security_group.from_jump_id, module.security_group.allow_internet_id, module.security_group.application_server_id]
  iam_instance_profile   = aws_iam_instance_profile.ec2_codedeploy.name

  tags = {
    Name   = "${var.app_name}-admin-${terraform.workspace}"
    Deploy = "${var.app_name}-admin-${terraform.workspace}"
  }
}
Verify
$ curl -o /dev/null -w '%{http_code}\n' -s http://YYY.YYY.YYY.YYY/login
200

A 200 response means it is working.

Application Load Balancer (ALB)

Security group
resource "aws_security_group" "alb" {
  name   = "${var.app_name}-alb"
  vpc_id = var.vpc_id

  tags = {
    Name = "${var.app_name}-alb"
  }
}

resource "aws_security_group_rule" "alb_http_in" {
  security_group_id = aws_security_group.alb.id
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "alb_http_out" {
  security_group_id        = aws_security_group.alb.id
  type                     = "egress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.application_server.id
}

output "security_group_alb_id" {
  value = aws_security_group.alb.id
}
S3 bucket for logs

As described in the Elastic Load Balancing documentation, the AWS account ID to allow differs per region; here we use the one for the Tokyo region.

#-- 中略 --#
data "aws_iam_policy_document" "s3_logs" {
  statement {
    effect    = "Allow"
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.logs.arn}/*"]

    principals {
      type        = "AWS"
-     identifiers = [var.cloudfront_origin_access_identity_iam_arn]
+     identifiers = [
+       var.cloudfront_origin_access_identity_iam_arn,
+       "arn:aws:iam::582318560864:root"
+     ]
    }
  }
}
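
Rather than hard-coding the Tokyo-region ELB account ID, the aws_elb_service_account data source can resolve it for whichever region the provider is configured for. A sketch of that variant:

data "aws_elb_service_account" "current" {}

# Then reference it in the statement above instead of the fixed account ID:
#   identifiers = [
#     var.cloudfront_origin_access_identity_iam_arn,
#     data.aws_elb_service_account.current.arn
#   ]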
Public subnet
#-- 中略 --#
public_cidr_block_c  = "10.1.4.0/24"
#-- 中略 --#
variable "public_cidr_block_c" {}
variable "availability_zone_c" {}
variable "public_cidr_block_c" {}

resource "aws_subnet" "public_c" {
  vpc_id            = var.vpc_id
  cidr_block        = var.public_cidr_block_c
  availability_zone = var.availability_zone_c

  tags = {
    Name = "${var.app_name}-public-c"
  }
}

output "public_c_id" {
  value = aws_subnet.public_c.id
}
-variable "availability_zone_c" {}
#-- 中略 --#
resource "aws_route_table_association" "public_c" {
  subnet_id      = aws_subnet.public_c.id
  route_table_id = aws_route_table.public.id
}
#-- 中略 --#
module "subnet" {
  source               = "../modules/subnet"
  app_name             = var.app_name
  vpc_id               = module.vpc.id
  availability_zone_a  = var.availability_zone_a
  availability_zone_c  = var.availability_zone_c
  public_cidr_block_a  = var.public_cidr_block_a
+ public_cidr_block_c  = var.public_cidr_block_c
  private_cidr_block_a = var.private_cidr_block_a
  private_cidr_block_c = var.private_cidr_block_c
}
#-- 中略 --#
ALB
resource "aws_lb" "admin" {
  name                       = "${var.app_name}-admin-${terraform.workspace}"
  internal                   = false
  load_balancer_type         = "application"
  security_groups            = [module.security_group.alb_id]
  subnets                    = [module.subnet.public_a_id, module.subnet.public_c_id]
  enable_http2               = true
  enable_deletion_protection = false

  access_logs {
    bucket  = module.s3.logs.bucket
    prefix  = "${terraform.workspace}/alb/admin"
    enabled = true
  }
}

resource "aws_alb_target_group" "admin" {
  name     = "${var.app_name}-admin-${terraform.workspace}"
  port     = 80
  protocol = "HTTP"
  vpc_id   = module.vpc.id

  health_check {
    interval            = 60
    path                = "/login"
    port                = 80
    protocol            = "HTTP"
    timeout             = 5
    unhealthy_threshold = 2
    matcher             = 200
  }
}

resource "aws_alb_listener" "admin" {
  load_balancer_arn = aws_lb.admin.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.admin.arn
  }
}
Allow HTTP requests from the ALB to the application server.
#-- 中略 --#
-resource "aws_security_group_rule" "jump_http_out" {
-  security_group_id = aws_security_group.jump.id
-  type              = "egress"
-  from_port         = 80
-  to_port           = 80
-  protocol          = "tcp"
-  cidr_blocks       = [var.vpc_cidr_block]
-}
#-- 中略 --#
resource "aws_security_group_rule" "application_server_http" {
  security_group_id        = aws_security_group.application_server.id
  type                     = "ingress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
- source_security_group_id = aws_security_group.jump.id
+ source_security_group_id = aws_security_group.alb.id
}
Register the instance with the ALB target group.
#-- 省略 --#
resource "aws_alb_target_group_attachment" "admin" {
  target_group_arn = aws_alb_target_group.admin.arn
  target_id        = aws_instance.admin.id
  port             = 80
}
Verify
$ aws elbv2 describe-load-balancers \
  --names terraform-sample-admin-stage \
  --query "LoadBalancers[0].DNSName" \
  --profile terraform-sample
"terraform-sample-admin-stage-XXXXXXXX.ap-northeast-1.elb.amazonaws.com"
$ curl -o /dev/null -w '%{http_code}\n' -s http://terraform-sample-admin-stage-XXXXXXXX.ap-northeast-1.elb.amazonaws.com/login
200

A 200 response means it is working.

Put CloudFront in front of the ALB

Until now the default origin pointed at S3; we now make the ALB the default origin and route only the static content to S3.

resource "aws_cloudfront_distribution" "admin" {
  enabled             = true
  comment             = var.admin_domain
  default_root_object = "index.html"

  origin {
    origin_id   = "s3-${var.admin_domain}"
    domain_name = aws_s3_bucket.static_contents.bucket_domain_name

    s3_origin_config {
      origin_access_identity = module.cloudfront.origin_access_identity.cloudfront_access_identity_path
    }
  }

  origin {
    origin_id   = "alb-${var.admin_domain}-${terraform.workspace}"
    domain_name = aws_lb.admin.dns_name

    custom_origin_config {
      http_port                = 80
      https_port               = 443
      origin_protocol_policy   = "http-only"
      origin_ssl_protocols     = ["TLSv1", "TLSv1.1", "TLSv1.2"]
      origin_keepalive_timeout = 60
      origin_read_timeout      = 60
    }
  }

  ordered_cache_behavior {
    path_pattern           = "/js/*"
    target_origin_id       = "s3-${var.admin_domain}"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
    default_ttl            = 3600
    min_ttl                = 0
    max_ttl                = 86400

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  ordered_cache_behavior {
    path_pattern           = "/css/*"
    target_origin_id       = "s3-${var.admin_domain}"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
    default_ttl            = 3600
    min_ttl                = 0
    max_ttl                = 86400

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  ordered_cache_behavior {
    path_pattern           = "/img/*"
    target_origin_id       = "s3-${var.admin_domain}"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
    default_ttl            = 3600
    min_ttl                = 0
    max_ttl                = 86400

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  ordered_cache_behavior {
    path_pattern           = "*.html"
    target_origin_id       = "s3-${var.admin_domain}"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
    default_ttl            = 3600
    min_ttl                = 0
    max_ttl                = 86400

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  ordered_cache_behavior {
    path_pattern           = "favicon*"
    target_origin_id       = "s3-${var.admin_domain}"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
    default_ttl            = 3600
    min_ttl                = 0
    max_ttl                = 86400

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  ordered_cache_behavior {
    path_pattern           = "/"
    target_origin_id       = "s3-${var.admin_domain}"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
    default_ttl            = 3600
    min_ttl                = 0
    max_ttl                = 86400

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  default_cache_behavior {
    target_origin_id       = "alb-${var.admin_domain}-${terraform.workspace}"
    allowed_methods        = ["HEAD", "DELETE", "POST", "GET", "OPTIONS", "PUT", "PATCH"]
    cached_methods         = ["HEAD", "GET"]
    compress               = false
    viewer_protocol_policy = "redirect-to-https"
    default_ttl            = 0
    min_ttl                = 0
    max_ttl                = 0

    forwarded_values {
      query_string = true
      headers      = ["*"]

      cookies {
        forward = "all"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn      = var.acm_certificate_arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1"
  }

  logging_config {
    bucket          = module.s3.logs.bucket_domain_name
    prefix          = "${terraform.workspace}/cloudfront/admin"
    include_cookies = false
  }

  tags = {
    Name = var.admin_domain
  }
}
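
The six static-content behaviors above differ only in path_pattern. With Terraform 0.12 or later they could be generated from a list with a dynamic block; a sketch of that refactoring (not the configuration actually used here):

locals {
  s3_path_patterns = ["/js/*", "/css/*", "/img/*", "*.html", "favicon*", "/"]
}

resource "aws_cloudfront_distribution" "admin" {
  #-- 省略 (same arguments as above) --#

  dynamic "ordered_cache_behavior" {
    for_each = local.s3_path_patterns
    content {
      path_pattern           = ordered_cache_behavior.value
      target_origin_id       = "s3-${var.admin_domain}"
      allowed_methods        = ["GET", "HEAD"]
      cached_methods         = ["GET", "HEAD"]
      compress               = true
      viewer_protocol_policy = "redirect-to-https"
      default_ttl            = 3600
      min_ttl                = 0
      max_ttl                = 86400

      forwarded_values {
        query_string = false

        cookies {
          forward = "none"
        }
      }
    }
  }
}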
Verify
$ aws cloudfront list-distributions \
  --query "DistributionList.Items[0].DomainName" \
  --profile terraform-sample
"XXXXXXXXXXXXXX.cloudfront.net"
$ curl -o /dev/null -w '%{http_code}\n' -s https://XXXXXXXXXXXXXX.cloudfront.net/login
200

A 200 response means it is working.

Auto Scaling Group (ASG)

Now that CodeDeploy can deploy the code to EC2, we rebuild the setup on the assumption that the instances run in an Auto Scaling Group.

Create an AMI
$ aws ec2 describe-instances \
  --filter "Name=tag:Name,Values=terraform-sample-admin-stage" \
  --query "Reservations[0].Instances[0].InstanceId" \
  --profile terraform-sample
"i-XXXXXXXXXXXXX"
$ aws ec2 create-image \
  --instance-id i-XXXXXXXXXXXXX \
  --reboot \
  --name "<any AMI name>" \
  --query "ImageId" \
  --profile terraform-sample
"ami-XXXXXXXXXXXXXXXXXX"
Launch template

The launch template carries over everything from stage/admin_ec2.tf except the networking settings.

resource "aws_launch_template" "admin" {
  name        = "${var.app_name}-admin-${terraform.workspace}"
  description = "${var.app_name}-admin-${terraform.workspace}"
  image_id    = "ami-096ca23b0da9b4e9d"
  iam_instance_profile {
    arn = aws_iam_instance_profile.ec2_codedeploy.arn
  }
  instance_type           = "t2.small"
  key_name                = var.key_name
  vpc_security_group_ids  = [module.security_group.from_jump_id, module.security_group.allow_internet_id, module.security_group.application_server_id]
  disable_api_termination = false
  ebs_optimized           = false
  monitoring {
    enabled = false
  }
  tag_specifications {
    resource_type = "instance"
    tags = {
      Name   = "${var.app_name}-admin-${terraform.workspace}"
      Deploy = "${var.app_name}-admin-${terraform.workspace}"
    }
  }
}
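
The image_id above is the AMI created in the previous step, hard-coded. A data source lookup by name could avoid copying the ID around; a sketch, where the name filter is a placeholder to be replaced with whatever name you gave the AMI:

data "aws_ami" "admin" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["terraform-sample-admin-*"] # placeholder: match the AMI name used with create-image
  }
}

# and in the launch template:
#   image_id = data.aws_ami.admin.id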
ASG (Auto Scaling Group)

The networking part of stage/admin_ec2.tf is handled here by the ASG instead. Once this is in place, the standalone stage/admin_ec2.tf file is deleted.

#-- 省略 --#
resource "aws_autoscaling_group" "admin" {
  name                      = "${var.app_name}-admin-${terraform.workspace}"
  max_size                  = 10
  min_size                  = 1
  desired_capacity          = 1
  vpc_zone_identifier       = [module.subnet.private_a_id, module.subnet.private_c_id]
  default_cooldown          = 300
  health_check_grace_period = 300
  health_check_type         = "ELB"
  force_delete              = false
  target_group_arns         = [aws_alb_target_group.admin.arn]
  termination_policies      = ["Default"]
  protect_from_scale_in     = false
  launch_template {
    id      = aws_launch_template.admin.id
    version = "$Latest"
  }
}
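
One caveat of my own (not from the original article): the scaling policies below adjust desired_capacity at runtime, so a later terraform apply would try to reset it to 1. Ignoring that attribute in the lifecycle block avoids the tug of war:

resource "aws_autoscaling_group" "admin" {
  #-- 省略 (same arguments as above) --#

  lifecycle {
    ignore_changes = [desired_capacity]
  }
}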
Scaling settings

We set up the following rules:

    1. If the average CPU utilization across all instances in the ASG is 50–70% for 2 of the last 3 minutes, increase the instance count by 50%.

    2. If the average CPU utilization across all instances in the ASG is 70% or higher for 2 of the last 3 minutes, increase the instance count by 100%.

    3. If the average CPU utilization across all instances in the ASG is 30–40% for 2 of the last 3 minutes, decrease the instance count by 25%.

    4. If the average CPU utilization across all instances in the ASG is below 30% for 2 of the last 3 minutes, decrease the instance count by 50%.

I could not find a way to bound how many instances a percentage adjustment adds or removes (for example, with 10 instances running, a 50% increase normally adds 5, but I would like it to add at least 7). If anyone knows, please leave a comment (one partial answer using min_adjustment_magnitude is sketched after the scaling code below).

#-- 省略 --#
resource "aws_autoscaling_policy" "admin_scaleout" {
  name                      = "${var.app_name}-admin-${terraform.workspace}-scaleout"
  autoscaling_group_name    = aws_autoscaling_group.admin.name
  adjustment_type           = "PercentChangeInCapacity"
  policy_type               = "StepScaling"
  estimated_instance_warmup = 300
  metric_aggregation_type   = "Average"
  step_adjustment {
    scaling_adjustment          = 50
    metric_interval_lower_bound = 0
    metric_interval_upper_bound = 20
  }
  step_adjustment {
    scaling_adjustment          = 100
    metric_interval_lower_bound = 20
  }
}

resource "aws_cloudwatch_metric_alarm" "admin_scaleout" {
  alarm_name          = "${var.app_name}-admin-${terraform.workspace}-scaleout"
  alarm_description   = "This metric monitors ec2 cpu utilization"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  period              = 60
  evaluation_periods  = 3
  datapoints_to_alarm = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  statistic           = "Average"
  threshold           = 50
  actions_enabled     = true
  alarm_actions       = [aws_autoscaling_policy.admin_scaleout.arn]

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.admin.name
  }
  tags = {
    "Name" = "${var.app_name}-admin-${terraform.workspace}-scaleout"
  }
}

resource "aws_autoscaling_policy" "admin_scalein" {
  name                      = "${var.app_name}-admin-${terraform.workspace}-scalein"
  autoscaling_group_name    = aws_autoscaling_group.admin.name
  adjustment_type           = "PercentChangeInCapacity"
  policy_type               = "StepScaling"
  estimated_instance_warmup = 300
  metric_aggregation_type   = "Average"
  step_adjustment {
    scaling_adjustment          = -25
    metric_interval_lower_bound = -10
    metric_interval_upper_bound = 0
  }
  step_adjustment {
    scaling_adjustment          = -50
    metric_interval_upper_bound = -10
  }
}

resource "aws_cloudwatch_metric_alarm" "admin_scalein" {
  alarm_name          = "${var.app_name}-admin-${terraform.workspace}-scalein"
  alarm_description   = "This metric monitors ec2 cpu utilization"
  comparison_operator = "LessThanOrEqualToThreshold"
  period              = 60
  evaluation_periods  = 3
  datapoints_to_alarm = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  statistic           = "Average"
  threshold           = 40
  actions_enabled     = true
  alarm_actions       = [aws_autoscaling_policy.admin_scalein.arn]

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.admin.name
  }
  tags = {
    "Name" = "${var.app_name}-admin-${terraform.workspace}-scalein"
  }
}
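
Regarding the open question above about guaranteeing a minimum number of added instances: aws_autoscaling_policy has a min_adjustment_magnitude argument that sets a floor on the adjustment when adjustment_type is PercentChangeInCapacity. A sketch (I am not aware of an equivalent per-step maximum):

resource "aws_autoscaling_policy" "admin_scaleout" {
  #-- 省略 (same arguments as above) --#
  adjustment_type          = "PercentChangeInCapacity"
  min_adjustment_magnitude = 7 # e.g. add at least 7 instances even when 50% would be fewer
}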
Verify

Run a stress test to confirm that instances get added. The stress-ng usage follows a reference article on Qiita.

[root@ip-ZZZ-ZZZ-ZZZ-ZZZ ~]# amazon-linux-extras install -y epel
[root@ip-ZZZ-ZZZ-ZZZ-ZZZ ~]# yum install -y stress-ng
[root@ip-ZZZ-ZZZ-ZZZ-ZZZ ~]# stress-ng -V
stress-ng, version 0.07.29
[root@ip-ZZZ-ZZZ-ZZZ-ZZZ ~]# stress-ng -c 1 -l 80 -q &
$ aws autoscaling describe-scaling-activities \
  --auto-scaling-group-name terraform-sample-admin-stage \
  --query "Activities[0].Cause" \
  --profile terraform-sample
"At YYYY-MM-DDTHH:mm:ssZ a monitor alarm terraform-sample-admin-stage-scaleout in state ALARM triggered policy terraform-sample-admin-stage-scaleout changing the desired capacity from 1 to 2.  At YYYY-MM-DDTHH:mm:ssZ an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 1 to 2."
#-- 省略 --#

As long as a scale-out activity was triggered, it worked. Don't forget to kill the stress-test process.

[root@ip-ZZZ-ZZZ-ZZZ-ZZZ ~]# jobs
[1]+  Running                 stress-ng -c 1 -l 80 -q &
[root@ip-ZZZ-ZZZ-ZZZ-ZZZ ~]# kill %1
[1]+  Done                    stress-ng -c 1 -l 80 -q

Create the user-side environment

If everything so far works, the user-side CloudFront, ALB, and so on can be created with the following command.
Adjust any settings that need to differ before applying.

stage$ for file in admin*; do cat "${file}" | sed -e 's/admin/user/g' > `echo "${file}" | sed -e 's/admin/user/g'`; done
#-- 省略 --#
resource "aws_iam_role_policy" "codebuild_user_codepipeline_application_sources" {
  role   = aws_iam_role.codepipeline_application_sources.name
  policy = data.aws_iam_policy_document.codebuild_user.json
}
#-- 省略 --#
resource "aws_codepipeline" "application_sources" {
#-- 省略 --#
  stage {
    name = "${var.app_name}-application-sources-${terraform.workspace}-source"
#-- 省略 --#
  }

  stage {
    name = "${var.app_name}-application-sources-${terraform.workspace}-build-admin"
#-- 省略 --#
  }

  stage {
#-- 省略 --#
  }

  stage {
    name = "${var.app_name}-application-sources-${terraform.workspace}-build-user"

    action {
      name             = "${var.app_name}-application-sources-${terraform.workspace}-build-user-action"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      input_artifacts  = ["SOURCE"]
      output_artifacts = ["USER_BUILD"]
      version          = "1"

      configuration = {
        ProjectName = aws_codebuild_project.user.name
      }
    }
  }

  stage {
    name = "${var.app_name}-application-sources-${terraform.workspace}-deploy-user"

    action {
      name            = "${var.app_name}-application-sources-${terraform.workspace}-deploy-user-action"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "CodeDeploy"
      input_artifacts = ["USER_BUILD"]
      version         = "1"

      configuration = {
        ApplicationName     = aws_codedeploy_app.user.name,
        DeploymentGroupName = aws_codedeploy_deployment_group.user.deployment_group_name
      }
    }
  }
}

Relational Database Service (RDS)

resource "aws_security_group" "database" {
  name   = "${var.app_name}-database"
  vpc_id = var.vpc_id

  tags = {
    Name = "${var.app_name}-database"
  }
}

resource "aws_security_group_rule" "database_in" {
  security_group_id        = aws_security_group.database.id
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.application_server.id
}

output "database_id" {
  value = aws_security_group.database.id
}
#-- 省略 --#
resource "aws_security_group_rule" "application_server_postgres" {
  security_group_id        = aws_security_group.application_server.id
  type                     = "egress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.database.id
}
resource "aws_db_parameter_group" "postgres" {
  name        = "${var.app_name}-${terraform.workspace}"
  description = "${var.app_name}-${terraform.workspace}"
  family      = "postgres10"

  parameter {
    name  = "timezone"
    value = "Asia/Tokyo"
  }
  parameter {
    name  = "client_encoding"
    value = "UTF8"
  }
}

resource "aws_db_subnet_group" "postgres" {
  name        = "${var.app_name}-${terraform.workspace}"
  description = "${var.app_name}-${terraform.workspace}"
  subnet_ids  = [module.subnet.private_a_id, module.subnet.private_c_id]

  tags = {
    Name = "${var.app_name}-${terraform.workspace}"
  }
}

resource "aws_db_instance" "postgres" {
  allocated_storage               = 20
  max_allocated_storage           = 30
  allow_major_version_upgrade     = false
  auto_minor_version_upgrade      = true
  apply_immediately               = true
  db_subnet_group_name            = aws_db_subnet_group.postgres.name
  parameter_group_name            = aws_db_parameter_group.postgres.name
  identifier                      = "${var.app_name}-${terraform.workspace}"
  instance_class                  = "db.t2.small"
  multi_az                        = false
  deletion_protection             = true
  enabled_cloudwatch_logs_exports = ["postgresql", "upgrade"]
  engine                          = "postgres"
  engine_version                  = "10.10"
  skip_final_snapshot             = false
  final_snapshot_identifier       = "${var.app_name}-${terraform.workspace}-final"
  storage_type                    = "gp2"
  port                            = 5432
  username                        = "postgres"
  password                        = "postgres"
  publicly_accessible             = false
  backup_retention_period         = 1
  vpc_security_group_ids          = [module.security_group.database_id]

  tags = {
    Name = "${var.app_name}-${terraform.workspace}"
  }
}

Because the password is committed in plain text, change it when verifying the connection.
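
One common mitigation (a sketch of my own, not what this article does) is to pass the password in as a sensitive variable so it at least stays out of the .tf files; note it will still be written to the Terraform state:

variable "db_password" {
  type      = string
  sensitive = true # sensitive variables require Terraform 0.14 or later
}

resource "aws_db_instance" "postgres" {
  #-- 省略 (same arguments as above) --#
  password = var.db_password
}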

Verify the connection

$ aws rds describe-db-instances \
  --db-instance-identifier terraform-sample-stage \
  --query "DBInstances[0].Endpoint.Address" \
  --profile terraform-sample
"terraform-sample-stage.XXXXXXXXXXXXXX.ap-northeast-1.rds.amazonaws.com"
[root@ip-ZZZ-ZZZ-ZZZ-ZZZ ~]# amazon-linux-extras install -y postgresql10
[root@ip-ZZZ-ZZZ-ZZZ-ZZZ ~]# psql -U postgres -d postgres -h terraform-sample-stage.XXXXXXXXXXXXXX.ap-northeast-1.rds.amazonaws.com
Password for user postgres: 
psql (10.4, server 10.10)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=> 

Set a new password.

postgres=> ALTER USER postgres WITH PASSWORD '<new password>';

Building the production environment

The original plan was to copy the stage directory wholesale for production, but then the resources under the modules directory could not be shared.
Ideally the modules directory would be shared across environments while everything else is managed per environment.
If anyone knows a good way to handle this, comments are welcome.

Cleanup

Remove the Terraform user's permissions

$ aws iam detach-user-policy \
  --user-name terraform-sample \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

Closing

Editing the images wore me out.
I'll finish the rest some other day.
