Hashi Stack
This commit is contained in:
parent
42fe1247d3
commit
6b7cbb4e66
94
README.md
@@ -1,2 +1,92 @@
# Complete HashiStack

Hashi Stack with Vagrant, Terraform, Nomad, Consul, and Vault

## Introduction

This repository helps you set up a development environment, as well as a production environment with 3 master nodes and 3 clients.

- [Build and Test Environment](#build-and-test-environment)
- [Enterprise Setup](#enterprise-setup)

## Motivation

![Hashi Stack Setup](images/hashi-stack.png)

![Vault HA Setup](images/vault-ha-consul.png)

![Nomad HA Setup](images/nomad_ha.png)

## Build and Test Environment

The build and test environment lets you explore the tools and test your changes on Vagrant. You can change the number of servers in the Vagrantfile to test your changes.

The final test environment includes:

- Vagrant
- Consul
- Nomad
- Vault

### Prerequisites

- macOS (Linux testing in progress)
- [Homebrew](https://brew.sh/)
- `brew install packer terraform nomad`
- `brew cask install virtualbox`

### Usage

Adjust the number of servers to fit your system's memory and CPU, then run the command below to explore the Hashi tools:

```
vagrant up
```

### Access

Use the private IP addresses to access the applications:

```
Access Nomad Cluster   http://172.20.20.11:4646
Access Consul Cluster  http://172.20.20.11:8500
Access Vault Cluster   http://172.20.20.101:8200
Access Hashi UI        http://172.20.20.11:3000
```
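These addresses come from the numbering scheme in vagrant/Vagrantfile: server `i` gets `172.20.20.(10+i)` and agent `i` gets `172.20.20.(100+i)`. A quick shell sketch of that mapping, assuming the default counts of 3 servers and 2 agents:

```shell
# Reproduce the Vagrantfile IP scheme: server i -> 172.20.20.(10+i)
for i in 1 2 3; do
  echo "server-$i -> 172.20.20.$((10 + i))"
done
# agent i -> 172.20.20.(100+i); Vault runs on the agents, hence 172.20.20.101
for i in 1 2; do
  echo "agent-$i -> 172.20.20.$((100 + i))"
done
```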
## Enterprise Setup

This enterprise setup stands up a High Availability cluster with 3 masters and 2 clients on AWS.

The final environment includes:

- Packer
- Terraform
- Nomad
- Consul
- Vault

### Prerequisites

- Packer and Terraform installed
- AWS access credentials
- AWS private key

### Usage

The setup.sh script sets up the cluster environment on AWS. Update your AWS credentials in variables.tf and run:

```
sudo bash setup.sh
```

### Access

With the AWS environment there is no direct UI access, since the cluster uses private IPs, but with the help of Hashi UI we can reach Nomad and Consul.

Use the AWS public IP from the Terraform output to access Hashi UI:

```
Access Hashi UI at http://awspublicip:3000
```
12
consul/client.json
Normal file
@@ -0,0 +1,12 @@
{
  "server": false,
  "datacenter": "us-west-1",
  "node_name": "NODENAME",
  "data_dir": "/etc/consul.d/data/",
  "bind_addr": "PRIVATEIP",
  "client_addr": "127.0.0.1",
  "retry_join": [ servers ],
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
19
consul/consul.service
Normal file
@@ -0,0 +1,19 @@
[Unit]
Description=Consul Service Discovery Agent
Documentation=https://www.consul.io/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent \
  -config-dir=/etc/consul.d
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM
TimeoutStopSec=5
Restart=on-failure
RestartSec=42s
SyslogIdentifier=consul
[Install]
WantedBy=multi-user.target
14
consul/server.json
Normal file
@@ -0,0 +1,14 @@
{
  "server": true,
  "node_name": "NODENAME",
  "datacenter": "us-west-1",
  "data_dir": "/etc/consul.d/data",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "advertise_addr": "PRIVATEIP",
  "bootstrap_expect": 1,
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
15
consul/servers.json
Normal file
@@ -0,0 +1,15 @@
{
  "server": true,
  "node_name": "NODENAME",
  "datacenter": "us-west-1",
  "data_dir": "/etc/consul.d/data",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "advertise_addr": "PRIVATEIP",
  "bootstrap_expect": count,
  "retry_join": [ servers ],
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
19
hashi-ui/hashi-ui.service
Normal file
@@ -0,0 +1,19 @@
[Unit]
Description=Run Hashi-ui
Requires=nomad.service
After=nomad.service

[Install]
WantedBy=multi-user.target

[Service]
RestartSec=5
Restart=always
ExecStart=/usr/bin/docker run \
  -e NOMAD_ENABLE=1 \
  -e NOMAD_ADDR=http://SERVERIP:4646 \
  -e CONSUL_ADDR=http://SERVERIP:8500 \
  -e CONSUL_ENABLE=1 \
  -e LOG_LEVEL=error \
  --net=host \
  jippi/hashi-ui
BIN
images/hashi-stack.png
Normal file
Binary file not shown. Size: 32 KiB
BIN
images/nomad_ha.png
Normal file
Binary file not shown. Size: 36 KiB
BIN
images/vault-ha-consul.png
Normal file
Binary file not shown. Size: 88 KiB
11
nomad/client.hcl
Normal file
@@ -0,0 +1,11 @@
datacenter = "us-west-1"
data_dir = "/etc/nomad.d"

client {
  enabled = true
  servers = ["SERVERIP:4647"]
}
bind_addr = "0.0.0.0" # the default
consul {
  address = "SERVERIP:8500"
}
74
nomad/jobs/countdash.nomad
Normal file
@@ -0,0 +1,74 @@
job "countdash3" {
  datacenters = ["us-west-1"]

  group "api" {
    network {
      mode = "bridge"
      port "http" {
        to = "9001"
      }
    }

    service {
      name = "count-api"
      port = "http"
      check {
        port         = "http"
        type         = "http"
        path         = "/"
        interval     = "5s"
        timeout      = "2s"
        address_mode = "driver"
      }
      connect {
        sidecar_service {}
      }
    }

    task "web" {
      driver = "docker"

      config {
        image = "hashicorpnomad/counter-api:v1"
      }
    }
  }

  group "dashboard" {
    network {
      mode = "bridge"

      port "http" {
        static = 9002
        to     = 9002
      }
    }

    service {
      name = "count-dashboard"
      port = "9002"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port  = 8080
            }
          }
        }
      }
    }

    task "dashboard" {
      driver = "docker"

      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
      }

      config {
        image = "hashicorpnomad/counter-dashboard:v1"
      }
    }
  }
}
48
nomad/jobs/nginx.nomad
Normal file
@@ -0,0 +1,48 @@
job "nginx" {
  region      = "eu"
  datacenters = ["us-west-1"]

  group "webserver" {
    count = 4

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:latest"
        port_map {
          web = 80
        }
      }

      service {
        name = "nginx"
        port = "web"
        check {
          name     = "alive"
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }

      resources {
        cpu    = 500 # 500 MHz
        memory = 64  # 64 MB
        network {
          mbits = 10
          port "web" {
          }
        }
      }
    }
  }
}
52
nomad/jobs/python-app.nomad
Normal file
@@ -0,0 +1,52 @@
job "docker-app" {
  region      = "global"
  datacenters = ["dc1"]
  type        = "service"

  group "server" {
    count = 1

    task "docker-app" {
      driver = "docker"

      constraint {
        attribute = "${attr.kernel.name}"
        value     = "linux"
      }

      config {
        image = "anguda/python-flask-app:latest"
        port_map {
          python_server = 5000
        }
      }

      service {
        name = "docker-app"
        port = "python_server"
        tags = ["docker", "app"]

        check {
          type     = "http"
          path     = "/test"
          interval = "10s"
          timeout  = "2s"
        }
      }

      resources {
        memory = 256
        network {
          mbits = 20
          port "python_server" {
          }
        }
      }
    }
  }
}
35
nomad/jobs/simple.nomad
Normal file
@@ -0,0 +1,35 @@
job "http-echo-dynamic-service" {
  datacenters = ["us-west-1"]

  group "echo" {
    count = 2

    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo:latest"
        args = [
          "-listen", ":${NOMAD_PORT_http}",
          "-text", "Moin ich lausche ${NOMAD_IP_http} auf Port ${NOMAD_PORT_http}",
        ]
      }

      resources {
        network {
          mbits = 15
          port "http" {}
        }
      }

      service {
        name = "http-echo"
        port = "http"
        tags = [
          "vagrant",
          "urlprefix-/http-echo",
        ]

        check {
          type     = "http"
          path     = "/health"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}
19
nomad/nomad.service
Normal file
@@ -0,0 +1,19 @@
[Unit]
Description=Nomad
Documentation=https://nomadproject.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/nomad agent -config /etc/nomad.d
KillMode=process
KillSignal=SIGINT
LimitNOFILE=infinity
LimitNPROC=infinity
Restart=on-failure
RestartSec=2
StartLimitBurst=3
StartLimitIntervalSec=10
TasksMax=infinity
[Install]
WantedBy=multi-user.target
23
nomad/server.hcl
Normal file
@@ -0,0 +1,23 @@
# /etc/nomad.d/server.hcl

datacenter = "us-west-1"
data_dir = "/etc/nomad.d/"

server {
  enabled = true
  bootstrap_expect = 1
}

name = "NODENAME"

bind_addr = "PRIVATEIP"

consul {
  address = "SERVERIP:8500"
}

advertise {
  http = "SERVERIP"
  rpc  = "SERVERIP"
  serf = "SERVERIP"
}
28
nomad/servers.hcl
Normal file
@@ -0,0 +1,28 @@
# /etc/nomad.d/server.hcl

datacenter = "us-west-1"
data_dir = "/etc/nomad.d/"

server {
  enabled = true
  bootstrap_expect = count
  server_join {
    retry_join = [ servers ]
    retry_max = 3
    retry_interval = "15s"
  }
}

bind_addr = "PRIVATEIP"

name = "NODENAME"

consul {
  address = "SERVERIP:8500"
}

advertise {
  http = "SERVERIP"
  rpc  = "SERVERIP"
  serf = "SERVERIP"
}
28
packer/hashi.json
Normal file
@@ -0,0 +1,28 @@
{
  "variables": {
    "aws_access_key": "AKIA4EGCAX2PFMZ6JRUP",
    "aws_secret_key": "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",
    "ssh_keypair_name": "anurag-aws"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "source_ami": "ami-003634241a8fcdec0",
      "region": "us-west-2",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ssh_keypair_name": "{{user `ssh_keypair_name`}}",
      "ssh_private_key_file": "~/Downloads/AWS/anurag-aws.pem",
      "ami_name": "hashi-example {{timestamp}}"
    }
  ],
  "provisioners": [{
    "type": "shell",
    "scripts": [
      "prereqs.sh"
    ],
    "execute_command": "echo 'vagrant' | sudo -S -E bash '{{ .Path }}'"
  }]
}
61
packer/prereqs.sh
Normal file
@@ -0,0 +1,61 @@
set -e

CONSUL_VERSION=1.7.3
NOMAD_VERSION=0.11.1
VAULT_VERSION=1.4.1

echo "System update..."
sudo apt update -y
echo "Installing tools..."
sudo apt install -y wget curl vim unzip jq apt-transport-https ca-certificates gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   bionic \
   stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker

wget --quiet https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip
unzip consul_${CONSUL_VERSION}_linux_amd64.zip
sudo mv consul /usr/local/bin/
sudo rm consul_${CONSUL_VERSION}_linux_amd64.zip
sudo groupadd --system consul
sudo useradd -s /sbin/nologin --system -g consul consul
sudo mkdir -p /var/lib/consul /etc/consul.d
sudo chown -R consul:consul /var/lib/consul /etc/consul.d
sudo chmod -R 775 /var/lib/consul /etc/consul.d
#sudo rm -rf /etc/systemd/system/consul.service
#sudo touch /etc/systemd/system/consul.service

echo "Installing NOMAD"
wget --quiet https://releases.hashicorp.com/nomad/${NOMAD_VERSION}/nomad_${NOMAD_VERSION}_linux_amd64.zip
unzip nomad_${NOMAD_VERSION}_linux_amd64.zip
sudo ls -lrt
sudo mv nomad /usr/local/bin/
sudo mkdir -p /etc/nomad.d
sudo rm nomad_${NOMAD_VERSION}_linux_amd64.zip
sudo groupadd --system nomad
sudo useradd -s /sbin/nologin --system -g nomad nomad
sudo mkdir -p /var/lib/nomad /etc/nomad.d
sudo chown -R nomad:nomad /var/lib/nomad /etc/nomad.d
sudo chmod -R 775 /var/lib/nomad /etc/nomad.d

#sudo touch /etc/nomad.d/nomad.hcl
echo "Installing Vault"
sudo wget --quiet https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip
sudo unzip vault_${VAULT_VERSION}_linux_amd64.zip
sudo mv vault /usr/local/bin/
sudo rm vault_${VAULT_VERSION}_linux_amd64.zip
sudo chmod +x /usr/local/bin/vault
sudo mkdir --parents /etc/vault.d
sudo groupadd --system vault
sudo useradd -s /sbin/nologin --system -g vault vault
sudo mkdir -p /var/lib/vault /etc/vault.d
sudo chown -R vault:vault /var/lib/vault /etc/vault.d
sudo chmod -R 775 /var/lib/vault /etc/vault.d
17
setup.sh
Normal file
@@ -0,0 +1,17 @@
#!/bin/bash
set -e
cd packer
packer build hashi.json > packerbuild.log
ami=$(grep -i 'ami' packerbuild.log | tail -1 | awk -F ':' '{print $2}')
echo $ami
if [[ -n "$ami" ]]; then
  sed -ie "s/ami-.*/$ami\"/g" ../terraform/variables.tf
  cd ../terraform
  terraform init
  terraform plan
  terraform apply
else
  echo "Something went wrong, please check packerbuild.log and retry"
fi
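setup.sh scrapes the freshly built AMI id out of the Packer log before patching terraform/variables.tf. A minimal sketch of the same idea against a sample log line (the line format and id here are assumptions), using `grep -o` to pull out only the id:

```shell
# Hypothetical final matching line of packerbuild.log (format is an assumption)
line="us-west-2: ami-0abc1234def56789"
# Extract just the AMI id from the line
ami=$(echo "$line" | grep -o 'ami-[0-9a-f]*')
echo "$ami"   # -> ami-0abc1234def56789
```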
4
terraform/iplist
Normal file
@@ -0,0 +1,4 @@
SERVER_IP0=172.31.25.119
SERVER_IP2=172.31.26.12
count=3
SERVER_IP1=172.31.25.211
148
terraform/main.tf
Normal file
@@ -0,0 +1,148 @@
provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region
}

resource "aws_instance" "master" {
  ami                         = var.ami
  key_name                    = var.key_name
  instance_type               = var.master_instance_type
  associate_public_ip_address = true
  count                       = var.master_count

  tags = {
    Name = "${var.master_tags}-${count.index}"
  }

  connection {
    host        = self.public_ip
    user        = "ubuntu"
    type        = "ssh"
    private_key = file(var.private_key_path)
    timeout     = "1m"
  }

  provisioner "local-exec" {
    command = "sed -ie '/SERVER_IP${count.index}=.*/d' provision.sh"
  }
  provisioner "local-exec" {
    command = "sed -ie '/SERVER_IP${count.index}=.*/d' iplist"
  }
  provisioner "local-exec" {
    command = "sed -ie '/count=.*/d' iplist"
  }
  provisioner "local-exec" {
    command = "echo count=${var.master_count} >> iplist"
  }
  provisioner "local-exec" {
    command = "echo SERVER_IP${count.index}=${self.private_ip} >> iplist"
  }
  provisioner "local-exec" {
    command = "sed -ie '/privateip=.*/r iplist' provision.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkdir -p /etc/nomad.d",
      "sudo mkdir -p /etc/consul.d",
      "sudo mkdir -p /etc/vault.d",
      "sudo chmod 777 /etc/nomad.d",
      "sudo chmod 777 /etc/consul.d",
      "sudo chmod 777 /etc/vault.d",
    ]
  }

  provisioner "file" {
    source      = "../nomad/servers.hcl"
    destination = "/etc/nomad.d/servers.hcl"
  }
  provisioner "file" {
    source      = "../nomad/nomad.service"
    destination = "/etc/nomad.d/nomad.service"
  }
  provisioner "file" {
    source      = "../consul/servers.json"
    destination = "/etc/consul.d/servers.json"
  }
  provisioner "file" {
    source      = "../consul/consul.service"
    destination = "/etc/consul.d/consul.service"
  }
  provisioner "file" {
    source      = "provision.sh"
    destination = "/home/ubuntu/provision.sh"
  }
  provisioner "file" {
    source      = "../hashi-ui/hashi-ui.service"
    destination = "/tmp/hashi-ui.service"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod a+x /home/ubuntu/provision.sh",
      "sudo /home/ubuntu/provision.sh",
    ]
  }
}

resource "aws_instance" "worker" {
  ami                         = var.ami
  key_name                    = var.key_name
  instance_type               = var.node_instance_type
  associate_public_ip_address = true
  count                       = var.worker_count

  tags = {
    Name = "${var.worker_tags}-${count.index}"
  }

  provisioner "local-exec" {
    command = "echo The server IP address is ${self.private_ip}"
  }

  connection {
    host        = self.public_ip
    user        = "ubuntu"
    type        = "ssh"
    private_key = file(var.private_key_path)
    timeout     = "1m"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkdir -p /etc/nomad.d",
      "sudo mkdir -p /etc/consul.d",
      "sudo mkdir -p /etc/vault.d",
      "sudo chmod 777 /etc/nomad.d",
      "sudo chmod 777 /etc/consul.d",
      "sudo chmod 777 /etc/vault.d",
    ]
  }

  provisioner "file" {
    source      = "../nomad/client.hcl"
    destination = "/etc/nomad.d/client.hcl"
  }
  provisioner "file" {
    source      = "../nomad/nomad.service"
    destination = "/etc/nomad.d/nomad.service"
  }
  provisioner "file" {
    source      = "../consul/client.json"
    destination = "/etc/consul.d/client.json"
  }
  provisioner "file" {
    source      = "../consul/consul.service"
    destination = "/etc/consul.d/consul.service"
  }
  provisioner "file" {
    source      = "../vault/vault.service"
    destination = "/etc/vault.d/vault.service"
  }
  provisioner "file" {
    source      = "../vault/server.hcl"
    destination = "/etc/vault.d/server.hcl"
  }
  provisioner "file" {
    source      = "provision.sh"
    destination = "/home/ubuntu/provision.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod a+x /home/ubuntu/provision.sh",
      "sudo /home/ubuntu/provision.sh",
    ]
  }
}
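Before copying provision.sh to each master, main.tf rewrites the local iplist file with a delete-then-append pattern, so re-runs do not leave stale entries behind. A standalone sketch of that pattern (the file name and values here are assumptions):

```shell
# Start from a stale iplist (contents assumed for the demo)
printf 'count=99\nSERVER_IP0=stale\n' > iplist.demo
# Delete any stale entry for this server and the old count...
sed -i '/SERVER_IP0=.*/d' iplist.demo
sed -i '/count=.*/d' iplist.demo
# ...then append the fresh values, as the local-exec provisioners do
echo 'count=3' >> iplist.demo
echo 'SERVER_IP0=172.31.25.119' >> iplist.demo
cat iplist.demo
```

The resulting file always holds exactly one `count=` line and one `SERVER_IP0=` line, regardless of how many times the steps run.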
4
terraform/output.tf
Normal file
@@ -0,0 +1,4 @@
output "public_ip" {
  description = "Access Hashi UI with port 3000"
  value       = aws_instance.master[0].public_ip
}
64
terraform/provision.sh
Normal file
@@ -0,0 +1,64 @@
privateip=$(hostname -i)
SERVER_IP0=172.31.25.119
count=3
SERVER_IP1=172.31.25.211
SERVER_IP2=172.31.26.12

servers='"'$SERVER_IP0'","'$SERVER_IP1'","'$SERVER_IP2'"'

if [ -f "/etc/nomad.d/servers.hcl" ]; then
  sed -ie "s/PRIVATEIP/$privateip/" /etc/nomad.d/servers.hcl
  sed -ie "s/PRIVATEIP/$privateip/" /etc/consul.d/servers.json
  sed -ie "s/SERVERIP/$privateip/" /etc/nomad.d/servers.hcl
  sed -ie "s/SERVERIP/$privateip/" /etc/consul.d/servers.json
  sed -ie "s/SERVERIP/$SERVER_IP0/" /tmp/hashi-ui.service
  sed -ie "s/count/$count/" /etc/nomad.d/servers.hcl
  sed -ie "s/count/$count/" /etc/consul.d/servers.json
  sed -ie "s/NODENAME/$HOSTNAME/" /etc/nomad.d/servers.hcl
  sed -ie "s/NODENAME/$HOSTNAME/" /etc/consul.d/servers.json

  sed -ie "s/servers/$servers/" /etc/consul.d/servers.json
  sed -ie "s/servers/$servers/" /etc/nomad.d/servers.hcl

  sudo cp -r /etc/nomad.d/nomad.service /etc/systemd/system/nomad.service
  sudo cp -r /etc/consul.d/consul.service /etc/systemd/system/consul.service

  # Start Consul
  systemctl daemon-reload
  systemctl enable consul.service
  systemctl restart consul

  # Start Nomad
  systemctl enable nomad.service
  systemctl restart nomad

  sudo cp -r /tmp/hashi-ui.service /etc/systemd/system/hashi-ui.service
  systemctl daemon-reload
  systemctl enable docker
  systemctl restart docker
  systemctl enable hashi-ui.service
  systemctl restart hashi-ui
else
  sed -ie "s/PRIVATEIP/$privateip/" /etc/nomad.d/client.hcl
  sed -ie "s/PRIVATEIP/$privateip/" /etc/consul.d/client.json
  sed -ie "s/SERVERIP/$SERVER_IP0/" /etc/consul.d/client.json
  sed -ie "s/SERVERIP/$SERVER_IP0/" /etc/nomad.d/client.hcl
  sed -ie "s/servers/$servers/" /etc/consul.d/client.json
  sed -ie "s/NODENAME/$HOSTNAME/" /etc/consul.d/client.json

  sed -ie "s/PRIVATEIP/$privateip/" /etc/vault.d/server.hcl

  sudo cp -r /etc/vault.d/vault.service /etc/systemd/system/vault.service
  sudo cp -r /etc/nomad.d/nomad.service /etc/systemd/system/nomad.service
  sudo cp -r /etc/consul.d/consul.service /etc/systemd/system/consul.service

  systemctl daemon-reload
  systemctl enable consul.service
  systemctl restart consul

  systemctl enable vault.service
  systemctl restart vault
  # Start Nomad
  systemctl enable nomad.service
  systemctl restart nomad
fi
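provision.sh works by sed-substituting the PRIVATEIP, SERVERIP, NODENAME, count, and servers placeholders baked into the config templates. A small self-contained sketch of the `servers` substitution (the IPs here are assumptions):

```shell
# Template line as shipped in consul/servers.json
template='"retry_join": [ servers ],'
# Build the quoted server list the same way provision.sh does
SERVER_IP0=10.0.0.1; SERVER_IP1=10.0.0.2
servers='"'$SERVER_IP0'","'$SERVER_IP1'"'
# Substitute the placeholder to get valid JSON
echo "$template" | sed "s/servers/$servers/"
```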
43
terraform/variables.tf
Normal file
@@ -0,0 +1,43 @@
variable "access_key" {
  default = "KFGH8GAja0JHDgaLJ"
}
variable "secret_key" {
  default = "Hahs5HGDkjah9hdhannsG5jagdj4vgsgGKH"
}
variable "key_name" {
  default = "anurag-aws"
}
variable "worker_count" {
  default = 2
}
variable "master_count" {
  default = 3
}
variable "region" {
  default = "us-west-2"
}
variable "ami" {
  default = "ami-06cb848001176ed5a"
}
variable "node_instance_type" {
  default = "t2.micro"
}

variable "master_instance_type" {
  default = "t2.micro"
}
variable "master_tags" {
  default = "master"
}

variable "worker_tags" {
  default = "worker"
}

variable "private_key_path" {
  default = "~/Downloads/AWS/anurag-aws.pem"
}

variable "state" {
  default = "running"
}
1
vagrant/.vagrant/bundler/global.sol
Normal file
@@ -0,0 +1 @@
{"dependencies":[["vagrant-ignition",["= 0.0.3"]],["ruby_dep",["<= 1.3.1"]],["netrc",["~> 0.8"]],["mime-types-data",["~> 3.2015"]],["mime-types",[">= 1.16","< 4.0"]],["unf_ext",[">= 0"]],["unf",[">= 0.0.5","< 1.0.0"]],["domain_name",["~> 0.5"]],["http-cookie",[">= 1.0.2","< 2.0"]],["rest-client",[">= 1.6.0"]],["vagrant_cloud",["~> 2.0.3"]],["rubyntlm",["~> 0.6.0",">= 0.6.1"]],["nori",["~> 2.0"]],["multi_json",["~> 1.10"]],["little-plugger",["~> 1.1"]],["logging",[">= 1.6.1","< 3.0"]],["httpclient",["~> 2.2",">= 2.2.0.2"]],["builder",[">= 2.1.2"]],["gyoku",["~> 1.0"]],["ffi",[">= 0.5.0"]],["gssapi",["~> 1.2"]],["erubi",["~> 1.8"]],["winrm",[">= 2.3.4","< 3.0"]],["rubyzip",["~> 2.0"]],["winrm-fs",[">= 1.3.4","< 2.0"]],["winrm-elevated",[">= 1.2.1","< 2.0"]],["wdm",["~> 0.1.0"]],["rb-kqueue",["~> 0.2.0"]],["net-ssh",["~> 5.2.0"]],["net-scp",["~> 1.2.0"]],["net-sftp",["~> 2.1"]],["log4r",[">= 0"]],["hashicorp-checkpoint",["~> 0.1.5"]],["rb-inotify",["~> 0.9",">= 0.9.7"]],["rb-fsevent",["~> 0.9",">= 0.9.4"]],["listen",["~> 3.1.5"]],["concurrent-ruby",["~> 1.0"]],["i18n",[">= 0"]],["erubis",["~> 2.7.0"]],["ed25519",["~> 1.2.4"]],["childprocess",["~> 3.0.0"]],["bcrypt_pbkdf",["~> 1.0.0"]],["vagrant",[">= 1.9.2"]],["vagrant-share",["= 1.1.10"]],["micromachine",[">= 2","< 4"]],["vagrant-vbguest",["= 0.24.0"]]],"checksum":"d2980923a94947fbc443d9ce313812c43fafc9c5fde0867a46d81856681c4ce7","vagrant_version":"2.2.8"}
@@ -0,0 +1,3 @@
# Generated by Vagrant

default ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/Users/aguda/Downloads/harshicorp/vagrant/.vagrant/machines/default/virtualbox/private_key'
9
vagrant/.vagrant/rgloader/loader.rb
Normal file
@@ -0,0 +1,9 @@
# This file loads the proper rgloader/loader.rb file that comes packaged
# with Vagrant so that encoded files can properly run with Vagrant.

if ENV["VAGRANT_INSTALLER_EMBEDDED_DIR"]
  require File.expand_path(
    "rgloader/loader", ENV["VAGRANT_INSTALLER_EMBEDDED_DIR"])
else
  raise "Encoded files can't be read outside of the Vagrant installer."
end
74
vagrant/Vagrantfile
vendored
Normal file
74
vagrant/Vagrantfile
vendored
Normal file
@ -0,0 +1,74 @@
#sudo docker run -e NOMAD_ENABLE=1 -e NOMAD_ADDR=http://172.20.20.10:4646 -e CONSUL_ENABLE=1 -e CONSUL_ADDR=http://172.20.20.10:8500 -p 8000:3000 jippi/hashi-ui

SERVER_COUNT = 3
AGENT_COUNT = 2

def serverIP(num)
  return "172.20.20.#{num+10}"
end

def agentIP(num)
  return "172.20.20.#{num+100}"
end

Vagrant.configure("2") do |config|

  config.vm.box = "ubuntu/bionic64"
  config.vm.synced_folder "../", "/vagrant"

  (1..SERVER_COUNT).each do |i|

    config.vm.define vm_agent_name = "server-%d" % i do |server|
      PRIVATE_IP = serverIP(i)

      server.vm.hostname = vm_agent_name
      server.vm.network :private_network, ip: "#{PRIVATE_IP}"

      server.vm.provision :shell, :privileged => true,
        inline: <<-EOF
          echo "#{vm_agent_name}" | tee /tmp/nodename
          echo "NODE_NAME=#{vm_agent_name}" >> /etc/environment
          echo "PRIVATE_IP=#{PRIVATE_IP}" >> /etc/environment
          echo "SERVER_IP=#{serverIP(i)}" >> /etc/environment
          echo "count=#{SERVER_COUNT}" >> /etc/environment
          echo "#{serverIP(1)}" | tee /tmp/server
        EOF

      server.vm.provision :shell, :path => "scripts/setup.sh", :privileged => true
      server.vm.provision :file, :source => "../nomad/jobs", :destination => "/tmp/"
      server.vm.provision :file, :source => "scripts/serverlist.sh", :destination => "/tmp/"
      server.vm.provision :file, :source => "scripts/serverstart.sh", :destination => "/tmp/"
      server.vm.provision :shell, :inline => "/bin/bash /tmp/serverlist.sh", :privileged => true
      server.vm.provision :shell, :inline => "/bin/bash /tmp/serverstart.sh", :privileged => true
    end
  end

  (1..AGENT_COUNT).each do |i|
    config.vm.define vm_agent_name = "agent-%d" % i do |agent|

      agent.vm.hostname = vm_agent_name
      agent.vm.network :private_network, ip: agentIP(i)

      agent.vm.provision :shell, :privileged => true,
        inline: <<-EOF
          echo "NODE_NAME=#{vm_agent_name}" >> /etc/environment
          echo "PRIVATE_IP=#{agentIP(i)}" >> /etc/environment
          echo "SERVER_IP=#{serverIP(1)}" >> /etc/environment
          echo "count=#{SERVER_COUNT}" >> /etc/environment
          echo "#{serverIP(1)}" | tee /tmp/server
        EOF

      agent.vm.provision :shell, :path => "scripts/setup.sh", :privileged => true
      agent.vm.provision :file, :source => "scripts/serverlist.sh", :destination => "/tmp/"
      agent.vm.provision :file, :source => "scripts/clientstart.sh", :destination => "/tmp/"
      agent.vm.provision :file, :source => "scripts/vaultinit.sh", :destination => "/tmp/"
      agent.vm.provision :shell, :inline => "/bin/bash /tmp/serverlist.sh", :privileged => true
      agent.vm.provision :shell, :inline => "/bin/bash /tmp/clientstart.sh", :privileged => true
      agent.vm.provision :shell, :inline => "/bin/bash /tmp/vaultinit.sh", :privileged => true
    end
  end

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "1536"
  end

end
33
vagrant/scripts/clientstart.sh
Normal file
@ -0,0 +1,33 @@
|
|||||||
|
SERVER_IP=$(awk -F= '/SERVER_IP/ {print $2}' /etc/environment)
|
||||||
|
PRIVATE_IP=$(awk -F= '/PRIVATE_IP/ {print $2}' /etc/environment)
|
||||||
|
count=$(awk -F= '/count/ {print $2}' /etc/environment)
|
||||||
|
sudo mv -f /tmp/consul/client.json /etc/consul.d/client.json
|
||||||
|
sudo mv -f /tmp/consul/consul.service /etc/systemd/system/consul.service
|
||||||
|
sudo mv -f /tmp/nomad/nomad.service /etc/systemd/system/nomad.service
|
||||||
|
sudo mv -f /tmp/nomad/client.hcl /etc/nomad.d/client.hcl
|
||||||
|
|
||||||
|
sudo systemctl daemon-reload
|
||||||
|
sudo systemctl restart consul
|
||||||
|
sudo systemctl enable consul
|
||||||
|
sudo systemctl restart nomad
|
||||||
|
sudo systemctl enable nomad
|
||||||
|
|
||||||
|
sleep 10
|
||||||
|
|
||||||
|
#sudo mv -f /tmp/consul/client.json /etc/consul.d/client.json
|
||||||
|
sudo mv -f /tmp/vault/vault.service /etc/systemd/system/vault.service
|
||||||
|
#sudo mv -f /tmp/consul/consul.service /etc/systemd/system/consul.service
|
||||||
|
#sudo mv -f /tmp/nomad/nomad.service /etc/systemd/system/nomad.service
|
||||||
|
#sudo mv -f /tmp/nomad/client.hcl /etc/nomad.d/client.hcl
|
||||||
|
sudo mv -f /tmp/vault/server.hcl /etc/vault.d/server.hcl
|
||||||
|
|
||||||
|
sudo systemctl daemon-reload
|
||||||
|
sudo systemctl restart consul
|
||||||
|
sudo systemctl enable consul
|
||||||
|
sudo systemctl restart nomad
|
||||||
|
sudo systemctl enable nomad
|
||||||
|
sudo systemctl enable vault
|
||||||
|
sudo systemctl restart vault
|
||||||
|
|
||||||
|
echo -e "vagrant ssh server1\n nomad -address=http://$SERVER_IP:4646 job run /tmp/jobs/sample.nomad\n nomad -address=http://$SERVER_IP:4646 job run /tmp/jobs/python-app.nomad"
|
||||||
|
|
70
vagrant/scripts/serverlist.sh
Normal file
@ -0,0 +1,70 @@
|
|||||||
|
PRIVATE_IP=$(awk -F= '/PRIVATE_IP/ {print $2}' /etc/environment)
|
||||||
|
SERVER_IP=$(awk -F= '/SERVER_IP/ {print $2}' /etc/environment)
|
||||||
|
NODE_NAME=$(awk -F= '/NODE_NAME/ {print $2}' /etc/environment)
|
||||||
|
count=$(awk -F= '/count/ {print $2}' /etc/environment)
|
||||||
|
echo $PRIVATE_IP
|
||||||
|
echo $SERVER_IP
|
||||||
|
echo $NODE_NAME
|
||||||
|
echo $count
|
||||||
|
|
||||||
|
SERVER=$(cat /tmp/server)
|
||||||
|
echo "Generating IP list for master server"
|
||||||
|
ip0=$(echo $SERVER | awk -F'.' '{print $4}')
|
||||||
|
ip1=$(echo $SERVER | awk -F'.' '{print $1"."$2"."$3}')
|
||||||
|
i=0
|
||||||
|
ips=$(while [ $count -gt "$i" ]
|
||||||
|
do
|
||||||
|
ip=$(echo "$ip1.$((ip0 + i))")
|
||||||
|
echo $ip
|
||||||
|
let i++
|
||||||
|
done)
|
||||||
|
lists=( $ips )
|
||||||
|
|
||||||
|
declare -a nodeips=()
|
||||||
|
for item in "${lists[@]}"
|
||||||
|
do
|
||||||
|
nodeips+=("'$item'")
|
||||||
|
done
|
||||||
|
servers=$(echo ${nodeips[@]} | sed "s/ /,/g;s/'/\"/g")
|
||||||
|
echo $servers
|
||||||
|
|
||||||
|
sudo cp -r /vagrant/consul /tmp/
|
||||||
|
sudo cp -r /vagrant/nomad /tmp/
|
||||||
|
sudo cp -r /vagrant/vault /tmp/
|
||||||
|
sudo cp -r /vagrant/hashi-ui /tmp/
|
||||||
|
|
||||||
|
sudo mkdir -p /etc/consul.d
|
||||||
|
sudo mkdir -p /etc/nomad.d
|
||||||
|
sudo mkdir -p /etc/vault.d
|
||||||
|
|
||||||
|
sudo chmod 755 /etc/nomad.d
|
||||||
|
sudo chmod 755 /etc/consul.d
|
||||||
|
sudo chmod 755 /etc/vault.d
|
||||||
|
|
||||||
|
sudo ls -lrt /tmp/
|
||||||
|
|
||||||
|
sed -ie "s/servers/$servers/" /tmp/consul/client.json
|
||||||
|
sed -ie "s/servers/$servers/" /tmp/consul/servers.json
|
||||||
|
sed -ie "s/servers/$servers/" /tmp/nomad/servers.hcl
|
||||||
|
|
||||||
|
sed -ie "s/NODENAME/$NODE_NAME/" /tmp/consul/client.json
|
||||||
|
sed -ie "s/NODENAME/$NODE_NAME/" /tmp/consul/server.json
|
||||||
|
sed -ie "s/NODENAME/$NODE_NAME/" /tmp/consul/servers.json
|
||||||
|
sed -ie "s/NODENAME/$NODE_NAME/" /tmp/nomad/server.hcl
|
||||||
|
sed -ie "s/NODENAME/$NODE_NAME/" /tmp/nomad/servers.hcl
|
||||||
|
sed -ie "s/NODENAME/$NODE_NAME/" /tmp/nomad/client.hcl
|
||||||
|
|
||||||
|
sed -ie "s/PRIVATEIP/$PRIVATE_IP/" /tmp/consul/client.json
|
||||||
|
sed -ie "s/PRIVATEIP/$PRIVATE_IP/" /tmp/consul/server.json
|
||||||
|
sed -ie "s/PRIVATEIP/$PRIVATE_IP/" /tmp/consul/servers.json
|
||||||
|
sed -ie "s/PRIVATEIP/$PRIVATE_IP/" /tmp/nomad/server.hcl
|
||||||
|
sed -ie "s/PRIVATEIP/$PRIVATE_IP/" /tmp/nomad/servers.hcl
|
||||||
|
sed -ie "s/PRIVATEIP/$PRIVATE_IP/" /tmp/vault/server.hcl
|
||||||
|
|
||||||
|
sed -ie "s/SERVERIP/$SERVER_IP/" /tmp/nomad/client.hcl
|
||||||
|
sed -ie "s/SERVERIP/$SERVER/" /tmp/nomad/server.hcl
|
||||||
|
sed -ie "s/SERVERIP/$SERVER_IP/" /tmp/nomad/servers.hcl
|
||||||
|
sed -ie "s/SERVERIP/$SERVER/" /tmp/hashi-ui/hashi-ui.service
|
||||||
|
|
||||||
|
sed -ie "s/count/$count/" /tmp/nomad/servers.hcl
|
||||||
|
sed -ie "s/count/$count/" /tmp/consul/servers.json
|
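The quoting in serverlist.sh's server-list construction is easy to misread, so here is a minimal standalone sketch of the same transformation. `SERVER` and `count` are hard-coded stand-ins for the values the script normally reads from `/tmp/server` and `/etc/environment`:

```shell
# Sketch: derive the quoted server list the way serverlist.sh does.
# SERVER and count are hard-coded assumptions for illustration.
SERVER=172.20.20.11
count=3

ip0=$(echo "$SERVER" | awk -F'.' '{print $4}')            # last octet: 11
ip1=$(echo "$SERVER" | awk -F'.' '{print $1"."$2"."$3}')  # prefix: 172.20.20

# Generate $count sequential IPs starting at the first server's address.
i=0
ips=$(while [ "$count" -gt "$i" ]; do
  echo "$ip1.$((ip0 + i))"
  i=$((i + 1))
done)

# Single-quote each IP, join with commas, then swap quotes to double quotes.
nodeips=""
for item in $ips; do
  nodeips="$nodeips '$item'"
done
servers=$(echo $nodeips | sed "s/ /,/g;s/'/\"/g")
echo "$servers"   # "172.20.20.11","172.20.20.12","172.20.20.13"
```

The double-quoted, comma-separated result is what the `sed -ie "s/servers/$servers/"` lines substitute into the Consul JSON and Nomad HCL templates, presumably as their server-join lists, which is why the single-to-double-quote swap matters.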
38
vagrant/scripts/serverstart.sh
Normal file
@ -0,0 +1,38 @@
|
|||||||
|
count=$(awk -F= '/count/ {print $2}' /etc/environment)
|
||||||
|
echo "Recreating Nomad and Consul Services"
|
||||||
|
echo $count
|
||||||
|
sudo cp -r /tmp/hashi-ui/hashi-ui.service /etc/systemd/system/hashi-ui.service
|
||||||
|
sudo cp -r /tmp/consul/consul.service /etc/systemd/system/consul.service
|
||||||
|
sudo cp -r /tmp/consul/server.json /etc/consul.d/server.json
|
||||||
|
sudo cp -r /tmp/nomad/nomad.service /etc/systemd/system/nomad.service
|
||||||
|
sudo cp -r /tmp/nomad/server.hcl /etc/nomad.d/
|
||||||
|
|
||||||
|
sudo cat /tmp/hashi-ui/hashi-ui.service
|
||||||
|
|
||||||
|
sudo systemctl daemon-reload
|
||||||
|
sudo systemctl enable consul
|
||||||
|
sudo systemctl enable hashi-ui
|
||||||
|
sudo systemctl enable nomad
|
||||||
|
|
||||||
|
sudo systemctl restart consul
|
||||||
|
sudo systemctl restart hashi-ui
|
||||||
|
sudo systemctl restart nomad
|
||||||
|
|
||||||
|
|
||||||
|
sudo cat /etc/nomad.d/server.hcl
|
||||||
|
|
||||||
|
sleep 10
|
||||||
|
|
||||||
|
if [ $count -gt "1" ]; then
|
||||||
|
sudo mv -f /tmp/consul/servers.json /etc/consul.d/server.json
|
||||||
|
sudo mv -f /tmp/nomad/servers.hcl /etc/nomad.d/server.hcl
|
||||||
|
sudo systemctl daemon-reload
|
||||||
|
sudo systemctl enable consul
|
||||||
|
sudo systemctl enable hashi-ui
|
||||||
|
sudo systemctl enable nomad
|
||||||
|
|
||||||
|
sudo systemctl restart consul
|
||||||
|
sudo systemctl restart hashi-ui
|
||||||
|
sudo systemctl restart nomad
|
||||||
|
|
||||||
|
fi
|
67
vagrant/scripts/setup.sh
Normal file
@ -0,0 +1,67 @@
|
|||||||
|
set -e
|
||||||
|
|
||||||
|
CONSUL_VERSION=1.7.3
|
||||||
|
NOMAD_VERSION=0.11.1
|
||||||
|
VAULT_VERSION=1.4.1
|
||||||
|
|
||||||
|
echo "System update..."
|
||||||
|
sudo apt update -y
|
||||||
|
echo "Installting tools.."
|
||||||
|
sudo apt install wget -y
|
||||||
|
sudo apt install curl -y
|
||||||
|
sudo apt install vim -y
|
||||||
|
sudo apt install unzip -y
|
||||||
|
sudo apt install jq -y
|
||||||
|
sudo apt-get install -y \
|
||||||
|
apt-transport-https \
|
||||||
|
ca-certificates \
|
||||||
|
curl \
|
||||||
|
gnupg-agent \
|
||||||
|
software-properties-common
|
||||||
|
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
|
||||||
|
sudo apt-key fingerprint 0EBFCD88
|
||||||
|
sudo add-apt-repository \
|
||||||
|
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
|
||||||
|
bionic \
|
||||||
|
stable"
|
||||||
|
sudo apt-get update
|
||||||
|
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
|
||||||
|
sudo systemctl start docker
|
||||||
|
sudo systemctl enable docker
|
||||||
|
|
||||||
|
wget --quiet https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip
|
||||||
|
unzip consul_${CONSUL_VERSION}_linux_amd64.zip
|
||||||
|
sudo mv consul /usr/local/bin/
|
||||||
|
sudo groupadd --system consul
|
||||||
|
sudo useradd -s /sbin/nologin --system -g consul consul
|
||||||
|
sudo mkdir -p /var/lib/consul /etc/consul.d
|
||||||
|
sudo chown -R consul:consul /var/lib/consul /etc/consul.d
|
||||||
|
sudo chmod -R 775 /var/lib/consul /etc/consul.d
|
||||||
|
#sudo rm -rf /etc/systemd/system/consul.service
|
||||||
|
#sudo touch /etc/systemd/system/consul.service
|
||||||
|
|
||||||
|
echo "Installing NOMAD"
|
||||||
|
wget --quiet https://releases.hashicorp.com/nomad/${NOMAD_VERSION}/nomad_${NOMAD_VERSION}_linux_amd64.zip
|
||||||
|
unzip nomad_${NOMAD_VERSION}_linux_amd64.zip
|
||||||
|
sudo ls -lrt
|
||||||
|
sudo mv nomad /usr/local/bin/
|
||||||
|
sudo mkdir -p /etc/nomad.d
|
||||||
|
sudo groupadd --system nomad
|
||||||
|
sudo useradd -s /sbin/nologin --system -g nomad nomad
|
||||||
|
sudo mkdir -p /var/lib/nomad /etc/nomad.d
|
||||||
|
sudo chown -R nomad:nomad /var/lib/nomad /etc/nomad.d
|
||||||
|
sudo chmod -R 775 /var/lib/nomad /etc/nomad.d
|
||||||
|
|
||||||
|
#sudo touch /etc/nomad.d/nomad.hcl
|
||||||
|
echo "Installing Vault"
|
||||||
|
sudo wget --quiet https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip
|
||||||
|
sudo unzip vault_${VAULT_VERSION}_linux_amd64.zip
|
||||||
|
sudo mv vault /usr/local/bin/
|
||||||
|
sudo rm vault_${VAULT_VERSION}_linux_amd64.zip
|
||||||
|
sudo chmod +x /usr/local/bin/vault
|
||||||
|
sudo mkdir --parents /etc/vault.d
|
||||||
|
sudo groupadd --system vault
|
||||||
|
sudo useradd -s /sbin/nologin --system -g vault vault
|
||||||
|
sudo mkdir -p /var/lib/vault /etc/vault.d
|
||||||
|
sudo chown -R vault:vault /var/lib/vault /etc/vault.d
|
||||||
|
sudo chmod -R 775 /var/lib/vault /etc/vault.d
|
9
vagrant/scripts/vaultinit.sh
Normal file
@ -0,0 +1,9 @@
|
|||||||
|
PRIVATE_IP=$(awk -F= '/PRIVATE_IP/ {print $2}' /etc/environment)
|
||||||
|
curl --request PUT -d '{"secret_shares": 3,"secret_threshold": 2}' -vs http://${PRIVATE_IP}:8200/v1/sys/init | jq -r '.' > ~/init.json
|
||||||
|
for item in `cat ~/init.json | jq -r '.keys_base64[]'`
|
||||||
|
do
|
||||||
|
echo $item
|
||||||
|
curl --request PUT --data '{"key":"'$item'"}' -vs http://${PRIVATE_IP}:8200/v1/sys/unseal
|
||||||
|
done
|
||||||
|
|
||||||
|
echo "Login vault http://${PRIVATE_IP}:8200 with token $(cat ~/init.json | jq -r '.root_token')"
|
19
vault/server.hcl
Normal file
@ -0,0 +1,19 @@
|
|||||||
|
# VAULT SERVER CONFIG
|
||||||
|
|
||||||
|
ui = "true"
|
||||||
|
cluster_name = "us-west-1"
|
||||||
|
|
||||||
|
storage "consul" {
|
||||||
|
address = "127.0.0.1:8500"
|
||||||
|
path = "vault/"
|
||||||
|
}
|
||||||
|
|
||||||
|
listener "tcp" {
|
||||||
|
address = "0.0.0.0:8200"
|
||||||
|
cluster_address = "PRIVATEIP:8201"
|
||||||
|
tls_disable = "true"
|
||||||
|
}
|
||||||
|
|
||||||
|
api_addr = "http://PRIVATEIP:8200"
|
||||||
|
cluster_addr = "https://PRIVATEIP:8201"
|
||||||
|
|
16
vault/vault.service
Normal file
@ -0,0 +1,16 @@
|
|||||||
|
[Unit]
|
||||||
|
Description=Vault
|
||||||
|
Documentation=https://vaultproject.io/docs/
|
||||||
|
|
||||||
|
[Service]
|
||||||
|
Restart=on-failure
|
||||||
|
ExecStart=/usr/local/bin/vault server -config /etc/vault.d/server.hcl
|
||||||
|
ExecReload=/bin/kill -HUP $MAINPID
|
||||||
|
LimitMEMLOCK=infinity
|
||||||
|
KillSignal=SIGTERM
|
||||||
|
User=vault
|
||||||
|
Group=vault
|
||||||
|
RestartSec=42s
|
||||||
|
|
||||||
|
[Install]
|
||||||
|
WantedBy=multi-user.target
|