Tuesday, 12 May 2020

Integrating Ansible with VOSS

What they are:

  1. Ansible is an open-source software provisioning, configuration management, and application-deployment tool.
  2. VOSS (VSP Operating System Software) is the network operating system that runs on Extreme Networks VSP switches.

What you need to prepare:

  1. OS for your Ansible control node (I am using Ubuntu 18.04 LTS on Hyper-V with Multipass) --- download multipass here
  2. VOSS image (I am using VOSS 8.1) --- download image here
  3. Hypervisor for your Ansible & VOSS (I am using Hyper-V)
  4. GNS3 all-in-one (download here)
  5. GNS3 VM for Hyper-V (download here) --- or you can download it from the GNS3 all-in-one installation wizard. Note: if you are not using Hyper-V, select another hypervisor image with the same version as GNS3 all-in-one. Follow the instructions at the bottom of this page.
  6. VOSS GNS3 template import file (download here)

Implementation:

  • After installing Multipass, you can launch an Ubuntu LTS instance on Hyper-V.

multipass launch --name ubuntu-lts
multipass list
multipass start ubuntu-lts
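
Before moving on, it is worth confirming the instance is up and noting the IP address it received, since the VOSS switch will need to reach it later (in my case it is the 172.17.176.40 address pinged from VOSS further below). A quick check with the stock Multipass commands:

### show state, assigned IP address and resources of the instance
multipass info ubuntu-lts
### drop into a shell inside the instance
multipass shell ubuntu-lts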

  • Install Ansible on your ubuntu-lts instance as described in my previous blog post (a minimal install sketch also follows this list).
  • Install GNS3 all-in-one and check the "GNS3 VM" option, then click through the wizard until it finishes.
  • Boot your GNS3 VM and make sure it has an IP address assigned and is reachable.
  • Open GNS3 and import the appliance file (*.gns3a) via "File > Import appliance". Set the maximum vCPU count and half of your total RAM. If the import succeeds, you can drag VOSS 8.1 onto your canvas from the left menu.
  • Add a cloud node to your topology as shown below.

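In case you do not have the previous post handy, here is a minimal install sketch for Ansible on Ubuntu 18.04 using the Ansible PPA. Any reasonably recent version should work; the voss_command module used later was added to Ansible around 2.7.

### install Ansible from the PPA (one common way on Ubuntu 18.04)
sudo apt update
sudo apt install -y software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible
### check the installed version
ansible --version
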
  • Assign an IP address to the mgmt port. I am using the 172.17.176.32/28 subnet.
### login using user/pass: rwa/rwa
config t
interface mgmtEthernet mgmt
ip address 172.17.176.36/28
exit
### enable SSH service on VOSS
boot config flags sshd
ssh
save config
### test ping to ubuntu and vice versa
ping 172.17.176.40 vrf mgmtRouter
### test SSH from ubuntu to VOSS
ssh rwa@172.17.176.36
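
One thing that can trip up the Ansible run later is the SSH host key prompt on the very first connection from Ubuntu to the switch. Either complete one interactive ssh login as above, set host_key_checking = False in ansible.cfg, or pre-accept the key with ssh-keyscan (a sketch, assuming the default ~/.ssh/known_hosts location):

### pre-accept the VOSS host key so Ansible does not stall on the prompt
mkdir -p ~/.ssh
ssh-keyscan -H 172.17.176.36 >> ~/.ssh/known_hosts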


  • Set up your Ansible playbook.
mkdir -p ansible-voss/{backup,host_vars}
cd ansible-voss
vim ansible.cfg
###copy-paste below --- Tell Ansible where the inventory file is
[defaults]
inventory = ./hosts
###end of copy-paste
vim hosts
###copy-paste below --- Define group name of destination node(s)
[voss-devices]
voss-1
###end of copy-paste
vim host_vars/voss-1
###copy-paste below --- Destination IP and SSH credentials
ansible_host: 172.17.176.36
ansible_user: rwa
ansible_ssh_pass: rwa
ansible_connection: network_cli
ansible_network_os: voss
ansible_become: yes
ansible_become_method: enable
###end of copy-paste
vim simple_cmd.yml
###copy-paste below --- Playbook for show ip int vrf mgmtrouter
---
- name: voss config
  connection: network_cli
  gather_facts: False
  hosts: voss-1
  tasks:
    - name: retrieve ip mgmtrouter
      voss_command:
        commands:
          - show ip interface vrf mgmtrouter
      register: output
    - name: show output
      debug:
        var: output
###end of copy-paste
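
The backup/ directory created by the mkdir above is not used by simple_cmd.yml, but it is a natural place for config backups. Below is a sketch of a second playbook that pulls the running config with voss_command and writes it to backup/ on the control node; the file name backup_config.yml and the delegate_to approach are my own choices, not part of the original setup.

vim backup_config.yml
###copy-paste below --- Playbook to save the running config into backup/
---
- name: voss backup
  connection: network_cli
  gather_facts: False
  hosts: voss-devices
  tasks:
    - name: grab the running config
      voss_command:
        commands:
          - show running-config
      register: runconf
    - name: write it to backup/ on the control node
      copy:
        content: "{{ runconf.stdout[0] }}"
        dest: "{{ playbook_dir }}/backup/{{ inventory_hostname }}.cfg"
      delegate_to: localhost
###end of copy-paste
ansible-playbook backup_config.yml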

  • Run ansible-playbook.

### Test ansible connectivity
ansible -m ping all
### Run ansible-playbook
ansible-playbook simple_cmd.yml
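
If the playbook errors out, two quick checks are confirming that the inventory and host_vars were actually picked up, and re-running with verbosity to see the raw module output (both are plain ansible CLI options, nothing VOSS-specific):

### confirm voss-1 and its variables were loaded from ./hosts and host_vars/
ansible-inventory --list
### re-run with more detail on the CLI session and command output
ansible-playbook simple_cmd.yml -v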

Side note:
You cannot convert the qcow2 image to a vhdx file with qemu-img and then use it as the virtual disk when creating a VM; it will not boot into VOSS. Also, you cannot add more than eight network adapters to a Hyper-V VM. So GNS3 is the solution. I have never tried it on KVM/QEMU.
