A Terraform memo on working with an existing VPC

Terraform feels seriously powerful.
I ran the operations below against an existing VPC with Terraform 0.4.2, so this post is a memo of what I did.

By the way, I installed it with brew install terraform.

$  terraform
usage: terraform [--version] [--help] <command> [<args>]

Available commands are:
    apply      Builds or changes infrastructure
    destroy    Destroy Terraform-managed infrastructure
    get        Download and install modules for the configuration
    graph      Create a visual graph of Terraform resources
    init       Initializes Terraform configuration from a module
    output     Read an output from a state file
    plan       Generate and show an execution plan
    push       Upload this Terraform module to Atlas to run
    refresh    Update local state file against real resources
    remote     Configure remote state storage
    show       Inspect Terraform state or plan
    taint      Manually mark a resource for recreation
    version    Prints the Terraform version

$  terraform --version
Terraform v0.4.2

$  ls -l /usr/local/bin/terraform*
lrwxr-xr-x  1 hoge huga 39 Apr 27 14:21 /usr/local/bin/terraform -> ../Cellar/terraform/0.4.2/bin/terraform
lrwxr-xr-x  1 hoge huga 54 Apr 27 14:21 /usr/local/bin/terraform-provider-atlas -> ../Cellar/terraform/0.4.2/bin/terraform-provider-atlas
lrwxr-xr-x  1 hoge huga 52 Apr 27 14:21 /usr/local/bin/terraform-provider-aws -> ../Cellar/terraform/0.4.2/bin/terraform-provider-aws
lrwxr-xr-x  1 hoge huga 59 Apr 27 14:21 /usr/local/bin/terraform-provider-cloudflare -> ../Cellar/terraform/0.4.2/bin/terraform-provider-cloudflare
lrwxr-xr-x  1 hoge huga 59 Apr 27 14:21 /usr/local/bin/terraform-provider-cloudstack -> ../Cellar/terraform/0.4.2/bin/terraform-provider-cloudstack
lrwxr-xr-x  1 hoge huga 55 Apr 27 14:21 /usr/local/bin/terraform-provider-consul -> ../Cellar/terraform/0.4.2/bin/terraform-provider-consul
lrwxr-xr-x  1 hoge huga 61 Apr 27 14:21 /usr/local/bin/terraform-provider-digitalocean -> ../Cellar/terraform/0.4.2/bin/terraform-provider-digitalocean
lrwxr-xr-x  1 hoge huga 52 Apr 27 14:21 /usr/local/bin/terraform-provider-dme -> ../Cellar/terraform/0.4.2/bin/terraform-provider-dme
lrwxr-xr-x  1 hoge huga 57 Apr 27 14:21 /usr/local/bin/terraform-provider-dnsimple -> ../Cellar/terraform/0.4.2/bin/terraform-provider-dnsimple
lrwxr-xr-x  1 hoge huga 55 Apr 27 14:21 /usr/local/bin/terraform-provider-docker -> ../Cellar/terraform/0.4.2/bin/terraform-provider-docker
lrwxr-xr-x  1 hoge huga 55 Apr 27 14:21 /usr/local/bin/terraform-provider-google -> ../Cellar/terraform/0.4.2/bin/terraform-provider-google
lrwxr-xr-x  1 hoge huga 55 Apr 27 14:21 /usr/local/bin/terraform-provider-heroku -> ../Cellar/terraform/0.4.2/bin/terraform-provider-heroku
lrwxr-xr-x  1 hoge huga 56 Apr 27 14:21 /usr/local/bin/terraform-provider-mailgun -> ../Cellar/terraform/0.4.2/bin/terraform-provider-mailgun
lrwxr-xr-x  1 hoge huga 53 Apr 27 14:21 /usr/local/bin/terraform-provider-null -> ../Cellar/terraform/0.4.2/bin/terraform-provider-null
lrwxr-xr-x  1 hoge huga 58 Apr 27 14:21 /usr/local/bin/terraform-provider-openstack -> ../Cellar/terraform/0.4.2/bin/terraform-provider-openstack
lrwxr-xr-x  1 hoge huga 58 Apr 27 14:21 /usr/local/bin/terraform-provider-terraform -> ../Cellar/terraform/0.4.2/bin/terraform-provider-terraform
lrwxr-xr-x  1 hoge huga 56 Apr 27 14:21 /usr/local/bin/terraform-provisioner-file -> ../Cellar/terraform/0.4.2/bin/terraform-provisioner-file
lrwxr-xr-x  1 hoge huga 62 Apr 27 14:21 /usr/local/bin/terraform-provisioner-local-exec -> ../Cellar/terraform/0.4.2/bin/terraform-provisioner-local-exec
lrwxr-xr-x  1 hoge huga 63 Apr 27 14:21 /usr/local/bin/terraform-provisioner-remote-exec -> ../Cellar/terraform/0.4.2/bin/terraform-provisioner-remote-exec

Nice and simple.

I also tried installing from source, putting the binaries in a /usr/local/bin/terraform directory and pointing an environment variable at it, but that gave a "provider aws not found" error, so it seems the binaries may have to live directly under /usr/local/bin.

What I did

    • Create subnets
        10.0.0.0/24
        10.0.1.0/24

    • Create a route table
        Default route to the NAT instance
        Static route to the office

    • Create Network ACLs
        Allow all inbound
        Deny only port 25 outbound

Variables

The variables go in variables.tf.
This assumes a VPN is already set up between the VPC and the office, and that a NAT instance exists inside the VPC.

variable "my-env" {
    default = {
        access_key = "**************"
        secret_key = "************************"
        region = "ap-northeast-1"
        vpc_id = "vpc-******"
        az_b = "ap-northeast-1a"
        az_c = "ap-northeast-1b"
        nat_id = "i-*******"
        office_gw = "vgw-******"
    }
}
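
The file tree in the Execution section below also lists an aws.tf, which isn't shown in this post. A minimal sketch of what it might contain, feeding the my-env map into the AWS provider (the exact contents here are my assumption):

provider "aws" {
    # Credentials and region taken from the my-env map in variables.tf (assumed wiring)
    access_key = "${var.my-env.access_key}"
    secret_key = "${var.my-env.secret_key}"
    region = "${var.my-env.region}"
}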

Subnets

resource "aws_subnet" "test-1" {
    vpc_id = "${var.my-env.vpc_id}"
    cidr_block = "10.0.0.0/24"
    availability_zone = "ap-northeast-1a"
    tags {
        Name = "test-1"
    }
}

resource "aws_subnet" "test-2" {
    vpc_id = "${var.my-env.vpc_id}"
    cidr_block = "10.0.1.0/24"
    availability_zone = "ap-northeast-1b"
    tags {
        Name = "test-2"
    }
}
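
The my-env map also defines az_b and az_c, which the snippets above don't actually use. If you'd rather pull the availability zone from the map as well, test-1 could be written like this instead (just a sketch; the resulting subnet is the same):

resource "aws_subnet" "test-1" {
    vpc_id = "${var.my-env.vpc_id}"
    cidr_block = "10.0.0.0/24"
    # az_b resolves to ap-northeast-1a per variables.tf
    availability_zone = "${var.my-env.az_b}"
    tags {
        Name = "test-1"
    }
}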

Route tables

resource "aws_route_table" "test-rtb" {
    vpc_id = "${var.vpc_id}"
    route {
            cidr_block = "0.0.0.0/0"
            instance_id = "${var.nat_id}"
    }
    route {
            cidr_block = "192.168.1.0/24"
            gateway_id = "${var.office_gw}"
    }
}

resource "aws_route_table_association" "test-1" {
    subnet_id = "${aws_subnet.test-1.id}"
    route_table_id = "${aws_route_table.test-rtb.id}"
}

resource "aws_route_table_association" "test-2" {
    subnet_id = "${aws_subnet.test-2.id}"
    route_table_id = "${aws_route_table.test-rtb.id}"
}

Network ACLs

The outbound deny on port 25 gets rule_no 50 so that it is evaluated before the allow-all egress rule at rule_no 100 (NACL rules are processed in ascending rule-number order).

resource "aws_network_acl" "test-1_acl" {
    vpc_id ="${var.vpc_id}"
    subnet_id = "${aws_subnet.test-1.id}"
    ingress = {
        rule_no = 100
        protocol = "all"
        action = "allow"
        from_port = 0
        to_port = 65535
        cidr_block = "0.0.0.0/0"
    }
    egress {
       rule_no = 50
        protocol = "tcp"
        action = "deny"
        from_port = 25
        to_port = 25
        cidr_block = "0.0.0.0/0"

    egress {
       rule_no = 100
        protocol = "all"
        action = "allow"
        from_port = 0
        to_port = 65535
        cidr_block = "0.0.0.0/0"
    }
}

resource "aws_network_acl" "test-2_acl" {
    vpc_id ="${var.vpc_id}"
    subnet_id = "${aws_subnet.test-2.id}"
    ingress = {
        rule_no = 100
        protocol = "all"
        action = "allow"
        from_port = 0
        to_port = 65535
        cidr_block = "0.0.0.0/0"
    }
    egress {
       rule_no = 50
        protocol = "tcp"
        action = "deny"
        from_port = 25
        to_port = 25
        cidr_block = "0.0.0.0/0"
    }
    egress {
       rule_no = 100
        protocol = "all"
        action = "allow"
        from_port = 0
        to_port = 65535
        cidr_block = "0.0.0.0/0"
    }
}

Execution

The files end up looking like this.

$ tree
.
├── aws.tf
├── nacl.tf
├── route_tables.tf
├── subnets.tf
└── variables.tf

0 directories, 5 files

Check the execution plan before applying.

$ terraform plan

Then apply.

$ terraform apply

That's really all it takes. Pretty amazing.
terraform destroy tears everything down just as easily, and the same configs can be reused when building test environments and the like, which looks like a real time saver.

I referred to the following link: http://ghost.ponpokopon.me/provider-digitalocean-not-found/
