Installation and Features of Cisco VIRL

Cisco has released VIRL, giving Network Engineers another strong tool to use when designing, building, troubleshooting, and even studying for any level of certification. Once past the installation process, it becomes clear that VIRL provides a wide range of functionality, and this powerful tool benefits Network Engineers of any level.

Installation of Cisco VIRL

The installation process for Cisco VIRL takes several key steps and specific configurations for the program to work properly. VIRL can be installed on a variety of platforms; the method chosen here was installing VIRL onto a VMware vSphere (ESXi) host using the vSphere Client with the following:

  • ESXi 5.5u1 (Build 1623387)
  • HP ProLiant DL360-Gen8 Server
  • Intel® Xeon® Processor E5-2430 with VT-x/EPT

Beware! Virtualization Technology (VT-x) and Extended Page Tables (EPT) must be supported by the CPU, so consult Intel’s ARK (Automated Relational Knowledgebase) for complete processor specifications. Also note that ESXi hosts with AMD CPUs are currently not supported.

VT-x provides hardware-assisted virtualization (improved migration, priority, and memory handling), and EPT provides hardware-assisted memory mapping. Check in the BIOS of the Intel-based host that Virtualization Technology is enabled (entering the BIOS is vendor specific; consult the appropriate documentation).
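As a generic Linux-side sanity check (not specific to VIRL or ESXi), the CPU flags reported by the operating system reveal whether VT-x and EPT are visible to it; on the physical host itself, the BIOS and Intel ARK remain the authoritative sources:

```shell
# Generic Linux check: the 'vmx' flag means Intel VT-x is visible to
# this OS, and the 'ept' flag means Extended Page Tables are supported.
if grep -q -w vmx /proc/cpuinfo; then
    echo "VT-x visible to this OS"
else
    echo "VT-x not visible to this OS"
fi

if grep -q -w ept /proc/cpuinfo; then
    echo "EPT supported"
else
    echo "EPT not visible to this OS"
fi
```

Note that inside a virtual machine these flags only appear when the hypervisor passes nested virtualization through to the guest.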

Obtain the appropriate ESXi build and load the ISO as a mountable device onto the server for installation. Follow the steps in the installation to successfully complete the install. Once completed, configure a static IP address (if not using DHCP). Network addressing will be customized based on preference and current network configuration.

Additionally, enable the correct network adapter and run a simple ping test under ‘Test Management Network’ to check for connectivity. This ensures the ESXi host is configured and ready for the next step.

The next step is to download the OVA file from the link in the email provided when purchasing a VIRL license. Verify the OVA with an MD5 checksum to guarantee the file was not corrupted during download.
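The checksum comparison can be done in a couple of lines on any Linux machine; the file name and expected hash below are placeholders to be replaced with the values from your own download email:

```shell
# Verify the downloaded OVA against the MD5 hash from the purchase
# email. File name and expected hash here are placeholders.
ova="virl.ova"
expected="d41d8cd98f00b204e9800998ecf8427e"

if [ -f "$ova" ]; then
    actual=$(md5sum "$ova" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OVA checksum OK"
    else
        echo "OVA checksum MISMATCH - re-download the file" >&2
    fi
fi
```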

Before deploying the VIRL OVA, five unique virtual network port groups must be configured on the VMware ESXi host using the vSphere Client: VM Network (used for management and connectivity to the internet), plus Flat, Flat1, SNAT, and INT (used for Layer 2 and Layer 3 connectivity).

For the Flat and Flat1 port groups, open the edit option and, under the Security tab, ensure that Promiscuous Mode is set to ‘Accept’.
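For reference, the same port groups can be created from the ESXi shell with esxcli instead of the vSphere Client. This is a sketch that assumes a standard vSwitch named vSwitch0; adjust the vSwitch name to match your host:

```shell
# Create the four VIRL-specific port groups on vSwitch0
# (vSwitch name is an assumption; adjust to your host).
for pg in Flat Flat1 SNAT INT; do
    esxcli network vswitch standard portgroup add \
        --portgroup-name="$pg" --vswitch-name=vSwitch0
done

# Allow promiscuous mode on Flat and Flat1, as VIRL requires.
for pg in Flat Flat1; do
    esxcli network vswitch standard portgroup policy security set \
        --portgroup-name="$pg" --allow-promiscuous=true
done
```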

Afterwards, deploy the VIRL OVA via ‘Deploy OVF Template’ in the ‘File’ drop-down menu. When deploying the OVA, select the appropriate storage, confirm the ‘Thick Provision Lazy Zeroed’ option under Disk Format, verify that the network mappings match the pre-configured port groups, and select ‘Finish’ to begin the deployment. Once completed, adjust the Virtual Machine Properties to match your available hardware resources. Cisco highly recommends the following hardware presets:

  • 4 vCPU cores (min. 2 vCPU)
  • 8 GB Memory (min. 4 GB)
  • At least 60 GB free space

If possible, for larger simulated environments, allocate 6 vCPUs and 32 GB of memory. Once preferences are set, highlight Memory under the ‘Resources’ tab, enable ‘Reserve all guest memory (All locked)’, and select ‘OK’ to save the changes.

Launch the VIRL host and login with the username virl and the password VIRL.

Since DHCP will not be used, configure static IP addressing within the ‘xterm’ terminal window. Enter the following command:

$ sudo nano /etc/network/interfaces

Change ‘eth0’ from a ‘dhcp’ to a ‘static’ interface. Set the static IP address, network mask, default gateway address, and DNS server address based on your network environment.

auto eth0
iface eth0 inet static
    address 192.168.1.3
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.252 8.8.4.4

Save the configurations and restart the virtual machine with:

sudo reboot

After configuring static IP addressing, check KVM acceleration by issuing the following command:

sudo kvm-ok

This is an important step that checks whether VT-x is enabled. Even though VT-x was enabled in the BIOS settings earlier, the check still returned the error message shown here:

virl@virl:~$ sudo kvm-ok
INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used
virl@virl:~$

This error message may occur since VIRL has not been configured to use VT-x. Therefore, checking the VM directory in the datastore for the file named “VIRL-.vmx” will ensure this setting is enabled. Download and open the file specified and look for these lines:

virtualHW.version = "9"
vhv.enable = "TRUE"
nvram = "VIRL.0.9.242.nvram"

Check that ‘vhv.enable’ is set to “TRUE”; if it is “FALSE”, change it. Save the edited file back into the datastore so the correct settings are used, then run the KVM check again; it should now succeed.
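If SSH access to the ESXi host is enabled, the same edit can be made in place rather than downloading and re-uploading the file. This is a sketch: the datastore path and file name below are assumptions that will differ per deployment, and the VM should be powered off first because ESXi caches .vmx settings while the VM runs:

```shell
# Path to the VM's .vmx on the ESXi datastore -- an example only;
# adjust to your datastore and VM folder names.
vmx="/vmfs/volumes/datastore1/VIRL/VIRL.vmx"

if [ -f "$vmx" ]; then
    # Flip vhv.enable from FALSE to TRUE so nested VT-x reaches the guest.
    sed -i 's/vhv.enable = "FALSE"/vhv.enable = "TRUE"/' "$vmx"
    grep 'vhv.enable' "$vmx"
fi
```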

Once KVM acceleration is confirmed, configure NTP. Open the NTP configuration file:

sudo nano /etc/ntp.conf

Add the servers:

server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
server 2.ubuntu.pool.ntp.org
server 3.ubuntu.pool.ntp.org

Restart the NTP service:

sudo service ntp stop
sudo ntpd -gq
sudo service ntp start

Afterwards, check NTP peering has been established:

sudo ntpq -p

An asterisk (*) preceding a server indicates that NTP peering has been established.
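That check can also be scripted; this sketch simply greps for the selected-peer asterisk in the first column of the ntpq output:

```shell
# A '*' in the first column of 'ntpq -p' marks the peer the NTP
# daemon has selected for synchronization.
if ntpq -p 2>/dev/null | grep -q '^\*'; then
    echo "NTP peering established"
else
    echo "No NTP peer selected yet - wait a few minutes and re-check" >&2
fi
```

Peer selection can take several minutes after the daemon restarts, so a “no peer yet” result immediately after the restart is normal.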

After these necessary steps have taken place, activate VIRL. Using a web browser, access the VIRL User Workspace Management (UWM) site with the IP address given (double-click the ‘IP Address’ icon within VIRL for specific IP address to use).

Access the UWM with username ‘uwmadmin’ and password ‘password’.

Select ‘Salt status’ and then select ‘Reset Keys and ID’. Copy and paste the license key file name provided with your purchase (excluding .pem) into the ‘Salt ID and Domain’ field. Open the license file, copy its contents, and overwrite the content of the ‘Minion private RSA key’ field. Select ‘Reset’ and then return to the ‘Salt Status’ page. Click ‘Check status now’ and confirm a successful contact.

After activation, validate the installation process with a few key commands:

neutron agent-list

sudo salt-call state.sls openstack-restart

sudo virl_health_status | grep listening

sudo service virl-std restart
sudo service virl-uwm restart

The last step is to install the proper VM Maestro client from the UWM site. Open the VM Maestro client and log in with the default username ‘guest’ and password ‘guest’. Finally, check that all Web Services show ‘Compatible’ in green.

VIRL Features

VIRL has recently increased its node limit from 15 to 20 nodes, and an option exists to raise the limit to 30 nodes for an additional price. Supported node types include IOSv, IOSvL2, IOS XRv, NX-OSv, CSR1000v, and ASAv, among others.

The following are various usernames and passwords provided by a VIRL Specialist:

  • SSH access to the VIRL VM: username=virl, password=VIRL
  • Window login: username=virl, password=VIRL
  • VM Maestro default user: username=guest, password=guest
  • User Workspace management administrator: username=uwmadmin, password=password
  • Linux VM jumphost: username=your project username, typically guest, password=your project password, typically guest

If using ‘Build Initial Configurations’, the following apply:

  • Linux Servers default user: username=cisco, password=cisco
  • Cisco VMs: username=cisco, password=cisco

If you do NOT use ‘Build Initial Configuration’, the following apply:

  • ASAv – no default password, no default enable password, default configuration present
  • CSR1000v – no default password, no default enable password, default configuration present
  • IOSv – no default password, no default enable password, no default configuration present
  • IOSvL2 – no default password, no default enable password, no default configuration present
  • IOS XRv – default username/password = admin/admin, cisco/cisco, lab/lab, no default configuration present
  • NXOSv – default username/password = admin/admin, default configuration present
  • Linux/Cloud-init – requires cloud-init configuration injection, no default username/password present – inaccessible

Other features include AutoNetkit, which builds initial configurations into the devices before the simulation is launched. This efficient feature skips the initial steps otherwise required to build a working network topology.

Another useful feature is live visualization. When a simulation is launched, it provides the ability to view the network through different physical or logical layouts.

Troubleshooting Tips

A few problems encountered involved extracting configurations and having to restart some core services. The ability to extract and import configurations is one of the essential benefits of VIRL, but extraction requires that every node’s console window be closed before ALL configurations can be fully extracted. Otherwise, an error message may appear.

Once all terminal windows are closed, a successful configuration-extraction message will appear.

Another problem encountered was that, at times, the UWM site could not be accessed and some other services stopped functioning. Restarting some core VIRL services (e.g. OpenStack, UWM, salt) can restore full functionality to the VIRL platform. For example, in the ‘xterm’ window, enter this command to restart the UWM:

sudo service virl-uwm restart

Be sure to check out VIRL’s homepage and the community board (http://community.dev-innovate.com/) for specific questions. Also check out VIRL’s YouTube channel for more information and the most up-to-date releases, as more features will soon be available.