In this part of the tutorial, you will:
- Create an input directory for VMBC.
- Prepare the required input files.
- Compile the HTTP example Rhino application.
- Assemble a directory of Deployable Units (DUs).
- Run VMBC to produce a VM CSAR.
Steps
Expand each section below to view the detailed steps.
Unless specified otherwise, run all commands from the resources directory.
Unpack the VMBC archive
Ensure you are in the resources directory.
Unpack the VMBC archive with the command tar -xvf vmbc.tar.gz.
Run ls:
$ ls
vmbc  vmbc-image
You should see two files:
- vmbc, an executable script that you will run to build your VM.
- vmbc-image, the Docker container image for VMBC.
Prepare the VMBC input directory
In the resources directory, create a directory called vmbc-input.
Copy the rhino.license file and the JDK .tar.gz archive into the vmbc-input directory.
At this point, the vmbc-input directory listing should look like this:
$ ls -1 vmbc-input
rhino.license
jdk<version>.tar.gz
Create the node-parameters.yaml file
Unpack the tutorial archive that you downloaded earlier to the resources directory with the command unzip vm-build-container-tutorial.zip.
The resources directory should contain a file named node-parameters.yaml.
If you open the node-parameters.yaml file in a text editor, you should see the following:
node-metadata:
  image-name: "http-example"
  version: "0.1.0"
  username: "rhino"
  default-password: "<insert password>"
  product-name: "HTTP Example Node"
  display-name: "HTTP Example Node"
  signaling-traffic-types:
    - name: http
    - name: internal
  flavors:
    - name: "small"
      ram-mb: 16384
      disk-gb: 30
      vcpus: 4
      azure-sku: "Standard_D4s_v4"
      azure-boot-disk-type: "Premium"
    - name: "medium"
      ram-mb: 16384
      disk-gb: 30
      vcpus: 8
      azure-sku: "Standard_D8s_v4"
      azure-boot-disk-type: "Premium"
rhino-parameters:
  clustered: false
  tuning-parameters:
    heap-size-mb: 3072
    new-size-mb: 256
    max-new-size-mb: 256
  preinstall-license: false
Set a default password by replacing <insert password>, so that you can connect to the VM to diagnose issues if the VM fails to provision the password configured during deployment.
Save the file and close the text editor after you have done this.
Refer to Node parameters file for more information on the node-parameters.yaml file.
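Before moving on, you may wish to check that your edited file is still valid YAML. The snippet below is an optional sketch, not part of the VMBC tooling; it assumes python3 and the PyYAML package are available on your build machine, and is run from the resources directory.
# Optional sanity check: parse node-parameters.yaml and print the image name.
python3 - <<'EOF'
import yaml
params = yaml.safe_load(open("node-parameters.yaml"))
print("Parsed OK; image name:", params["node-metadata"]["image-name"])
EOF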
Compile the example service
To compile the example service, you will need a copy of the Rhino SDK, which you downloaded earlier.
In your development environment, extract the Rhino SDK into a directory named rhino with unzip -d rhino rhino-sdk-install-3.1.0.zip.
Within the rhino directory, create a subdirectory named http-example.
From your resources directory, run the following commands to extract various parts of the HTTP RA download zip file to the right places.
unzip http-3.0.0.3.zip
mkdir rhino/http-example/lib
cp -R http-3.0.0.3/lib/* rhino/http-example/lib
cp -R http-3.0.0.3/examples/* rhino/http-example
The directory listings of the rhino and rhino/http-example directories should now be as follows:
$ ls rhino
RhinoSDK/
http-example/
$ ls rhino/http-example
src/
lib/
README
build.properties
build.xml
Compile the PingSbb into a DU
From the rhino/http-example directory, run the following ant command to compile the PingSbb.java file and assemble the example service into a DU.
ant -Dclient.home=$(pwd)/../RhinoSDK/client -Dlib=$(pwd)/lib clean build-ping
You should see the message BUILD SUCCESSFUL.
The assembled DU will be located at target/jars/http-ping-service.jar.
Ignore the target/jars/sbb.jar file, which is already included within http-ping-service.jar.
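If you want to see exactly what went into the DU before handing it to VMBC, you can list the jar's contents. This is an optional check run from the resources directory; the exact entry names may differ between HTTP RA versions.
# Optional: list the contents of the assembled DU.
unzip -l rhino/http-example/target/jars/http-ping-service.jar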
Copy all DUs into the VMBC input directory
The example application requires the following DUs:
- The HTTP ping example service, which you compiled in the previous step.
- guava, a set of core Java libraries from Google.
- netty, a Java networking library.
- http-ra, the HTTP RA itself.
Create a subdirectory named du in the vmbc-input directory.
Run the following commands from the resources directory to copy the DUs into the du directory:
cp rhino/http-example/target/jars/http-ping-service.jar vmbc-input/du/http-ping-service.du.jar
cp http-3.0.0.3/du/guava*.jar vmbc-input/du
cp http-3.0.0.3/du/netty*.jar vmbc-input/du
cp http-3.0.0.3/du/http-ra*.jar vmbc-input/du
The first of these commands renames http-ping-service.jar to have a name ending in .du.jar so that VMBC recognizes it as a DU.
Do not include the rhino-api-compatibility jar files from the http-3.0.0.3/du directory. Rhino already comes with these components installed.
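After copying, it is worth confirming that the du directory contains only the four expected jars. The listing below is illustrative; the exact guava, netty and http-ra filenames depend on the versions shipped in http-3.0.0.3.
# Optional: confirm the DU directory contents (run from the resources directory).
ls -1 vmbc-input/du
# Expected output (illustrative; version numbers will vary):
# guava-<version>.jar
# http-ping-service.du.jar
# http-ra-<version>.jar
# netty-<version>.jar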
Create the build hook script archive
The resources directory should contain a custom-build directory, unpacked from vm-build-container-tutorial.zip earlier.
The custom-build directory should contain a file named after-rhino-import, an example build hook script that creates the HTTP resource adaptor entity using the Rhino management console.
If you open the after-rhino-import file in a text editor, you should see the following:
#!/opt/tasvmruntime-py/bin/python3
# Build hook script to apply default RA configuration.
from pathlib import Path
import re
import subprocess
# Location of rhino-console.
RHINO_CONSOLE = Path.home() / "rhino" / "client" / "bin" / "rhino-console"
# Name of the RA entity.
HTTP_RA_ENTITY = "http"
def run_rhino_console(cmd: list[str]) -> str:
"""
Runs a rhino-console command.
:param cmd: The command to run and its arguments, as separate list elements.
:return: Output from rhino-console.
"""
return subprocess.check_output([RHINO_CONSOLE] + cmd, text=True, stderr=subprocess.STDOUT)
def main() -> None:
"""
Main routine.
"""
# Determine HTTP RA ID.
# The output will look like:
#
# ResourceAdaptorID[name=HTTP,vendor=OpenCloud,version=2.5]
#
# where the part within the square brackets is the RA ID.
output = run_rhino_console(["listresourceadaptors"])
for line in output.splitlines():
if matches := re.search(r"\[(name=HTTP[^\]]+)\]", line):
http_ra_id = matches.group(1)
break
else:
raise ValueError("Could not determine HTTP RA ID")
# Create an RA entity based on the HTTP RA type.
run_rhino_console(["createraentity", http_ra_id, HTTP_RA_ENTITY])
# Configure the RA entity with some default properties.
run_rhino_console(
[
"updateraentityconfigproperties",
HTTP_RA_ENTITY,
"ListenPort", "8000",
"SecureListenPort", "8002",
]
)
# This script is now done.
# The before-slee-start initconf hook script configures the IP address
# (since that isn't known until runtime),
# creates the HTTPS keystore, and activates the RA entity and service.
if __name__ == "__main__":
    main()
Run the following from the resources/custom-build directory to prepare the hook script for execution and create a custom-build.zip archive:
chmod +x after-rhino-import
zip custom-build.zip after-rhino-import
From your resources directory, copy the custom-build.zip archive into the vmbc-input directory:
cp custom-build/custom-build.zip ./vmbc-input
Refer to build hooks for more information.
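Because the build hook only runs inside the VMBC build, a typo in it will not surface until partway through a roughly 15-minute build. An optional local syntax check such as the one below can catch that earlier; it assumes python3 is available on your build machine and is run from the resources directory.
# Optional: byte-compile the hook script locally to catch syntax errors early.
python3 -m py_compile custom-build/after-rhino-import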
Create the initconf hook script archive
The resources directory should contain a custom-configuration directory, unpacked from vm-build-container-tutorial.zip earlier.
The custom-configuration directory should contain a file named before-slee-start, an example initconf hook script that configures the HTTP resource adaptor entity using the Rhino management console.
This script will run on the VM after Rhino has started, but before the SLEE is started.
No resource adaptors or services will need restarting to pick up the changes, since they are not running at this stage.
If you open the before-slee-start file in a text editor, you should see the following:
#!/opt/tasvmruntime-py/bin/python3
# Initconf hook script to apply RA configuration, and activate the RA and service.
# This script needs to be more careful than the build hook script,
# as it will be run multiple times during a VM's lifecycle,
# and hence has to be prepared for changes it might apply to have already been made.
import os
from pathlib import Path
import re
import subprocess
import sys
import yaml
# Location of rhino-console.
RHINO_CONSOLE = Path.home() / "rhino" / "client" / "bin" / "rhino-console"
# Location of the keystore for HTTPS.
# As per the HTTP RA example documentation, the location is important
# as it must match configured Rhino permissions.
KEYSTORE = Path.home() / "rhino" / "http-ra.ks"
# Password for the keystore.
# Note: hardcoding a password like this is very insecure.
# Prefer instead to make it configurable through your configuration file.
KEYSTORE_PASSWORD = "changeit"
# Name of the RA entity.
HTTP_RA_ENTITY = "http"
def run_rhino_console(cmd: list[str]) -> str:
"""
Runs a rhino-console command.
:param cmd: The command to run and its arguments, as separate list elements.
:return: Output from rhino-console.
"""
return subprocess.check_output([RHINO_CONSOLE] + cmd, text=True, stderr=subprocess.STDOUT)
def main() -> None:
"""
Main routine.
"""
# Load the custom-config-data.yaml file.
# The first CLI argument given to this script will be the directory
# where that file can be found.
config_dir = Path(sys.argv[1])
custom_config_data = config_dir / "custom-config-data.yaml"
config_file_contents = yaml.safe_load(custom_config_data.read_text())
config = config_file_contents["deployment-config:custom-data"]["custom-config"]
listen_port = config.get("listen-port", 8000)
secure_listen_port = config.get("secure-listen-port", 8002)
# Load SDF.
sdf = config_dir / "sdf-rvt.yaml"
sdf_contents = yaml.safe_load(sdf.read_text())
# Determine this VM's signaling IP address.
# Start by locating our site and VNFC.
our_hostname = os.uname().nodename
for vnfc in [
vnfc for site in sdf_contents["msw-deployment:deployment"]["sites"]
for vnfc in site["vnfcs"]
]:
instance_hostnames = [
instance["name"] for instance in vnfc["cluster-configuration"]["instances"]
]
if our_hostname in instance_hostnames:
this_vnfc = vnfc
this_vm_index = instance_hostnames.index(our_hostname)
break
else:
raise ValueError("Couldn't find our VNFC in the SDF")
# Find the signaling network (which carries the HTTP traffic type).
# Within that, the IP address will be at the same index as above in its list of IPs.
for network in this_vnfc["networks"]:
if "http" in network["traffic-types"]:
this_vm_sig_ip = network["ip-addresses"]["ip"][this_vm_index]
break
else:
raise ValueError("Couldn't find the signaling network in the SDF")
# Generate a keystore, if we don't have one already.
if not KEYSTORE.exists():
subprocess.check_call(
[
"keytool",
"-keystore", os.fspath(KEYSTORE),
"-storepass", KEYSTORE_PASSWORD,
"-genkeypair",
"-dname", "O=Metaswitch,OU=Rhino,CN=HTTP Example Server,C=NZ"
]
)
# Configure the RA entity with some properties.
run_rhino_console(
[
"updateraentityconfigproperties",
"http",
"ListenAddress", this_vm_sig_ip,
"ListenPort", str(listen_port),
"SecureListenPort", str(secure_listen_port),
"KeyStore", os.fspath(KEYSTORE),
"KeyStorePassword", KEYSTORE_PASSWORD,
]
)
# A restart of the RA is necessary to pick up configuration changes.
# Deactivate the RA and wait until it has stopped.
# If this is the first time the script runs, the RA entity will be inactive
# and so this code doesn't need to take any action.
if HTTP_RA_ENTITY in run_rhino_console(["listraentitiesbystate", "Active"]):
run_rhino_console(["deactivateraentity", "http"])
run_rhino_console(["waittilraentityisinactive", "http"])
# Now activate the RA again.
run_rhino_console(["activateraentity", "http"])
# Determine the service ID, and whether the service is active.
# The output will look like:
#
# Services in Inactive state on node 101:
# ServiceID[name=HTTP Ping Service,vendor=OpenCloud,version=1.1]
#
# where the part within the square brackets is the service ID.
#
# If the service is not found in the list of Inactive services,
# we can assume that it is already active.
output = run_rhino_console(["listservicesbystate", "Inactive"])
for line in output.splitlines():
if matches := re.search(r"\[(name=HTTP Ping Service[^\]]+)\]", line):
# Activate the service.
service_id = matches.group(1)
run_rhino_console(["activateservice", service_id])
break
if __name__ == "__main__":
    main()
Run the following from the resources/custom-configuration directory to prepare the hook script for execution and to create the archive:
chmod +x before-slee-start
zip custom-configuration.zip before-slee-start
From the resources directory, copy the custom-configuration.zip archive to the vmbc-input directory:
cp custom-configuration/custom-configuration.zip ./vmbc-input
Refer to initconf hooks for more information.
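For orientation, the following sketch shows the kind of custom-config-data.yaml content the before-slee-start script expects, based purely on the keys the script reads; the file itself is supplied at deployment time rather than built into the VM, and the port values are arbitrary examples. The check assumes python3 and PyYAML on your build machine.
# Hypothetical custom-config-data.yaml fragment, using the keys read by before-slee-start.
cat > /tmp/custom-config-data.yaml <<'EOF'
deployment-config:custom-data:
  custom-config:
    listen-port: 8080
    secure-listen-port: 8443
EOF
# Optional local check that the keys resolve the way the script expects.
python3 - <<'EOF'
import yaml
data = yaml.safe_load(open("/tmp/custom-config-data.yaml"))
config = data["deployment-config:custom-data"]["custom-config"]
print(config.get("listen-port", 8000), config.get("secure-listen-port", 8002))
EOF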
Generate a CSAR signing key
CSARs are signed with a private key during the build process.
The SIMPL VM checks this signature before deploying the CSAR to confirm the CSAR’s integrity.
The signing key must be in PEM format and must not be protected by a passphrase.
It must be placed in the vmbc-input directory in a file named csar-signing-key.
From the resources directory, generate a suitable private key using the command
ssh-keygen -t RSA -b 4096 -f vmbc-input/csar-signing-key -N "" -m pem.
Once you have done this, delete the corresponding public key by running the command rm vmbc-input/csar-signing-key.pub, as it is not required.
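If you want to confirm the key meets the stated requirements (PEM format, no passphrase) before starting a build, the following optional checks may help; they assume standard OpenSSH tooling and are run from the resources directory.
# Optional: the first line of a PEM-format key should read "-----BEGIN RSA PRIVATE KEY-----".
head -1 vmbc-input/csar-signing-key
# Optional: this succeeds only if the key has no passphrase.
ssh-keygen -y -P "" -f vmbc-input/csar-signing-key > /dev/null && echo "No passphrase set"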
Run VMBC
In this step, you will use VMBC to produce a CSAR containing a VM image that you can deploy using the SIMPL VM.
First, verify the contents of your vmbc-input directory.
$ ls -1 vmbc-input
csar-signing-key
custom-build.zip
custom-configuration.zip
du
jdk<version>.tar.gz
rhino.license
If the contents of the vmbc-input directory do not match the above listing, refer to the previous steps and check if you have followed them correctly.
You are now ready to run the vmbc executable script and create your CSAR.
From the resources directory, run either ./vmbc vsphere or ./vmbc openstack
according to which VNFI you are using.
The input directory, vmbc-input, will be detected automatically based on the location of the rhino.license file.
VMBC takes around 15 minutes to build a VM.
You may want to start on the Setting up SIMPL VM and CDS
step while it is building.
VMBC will create a directory called target in the resources directory.
If the CSAR was built successfully, the target directory will contain an images subdirectory containing your CSAR with the suffix -csar.zip.
The target directory will also contain a number of log files from various stages of the build process.
If the build process failed and your CSAR was not created in the images directory, check that you have completed all steps up to this point correctly and refer to Troubleshooting for more information.
Retry the ./vmbc vsphere or ./vmbc openstack command once you have resolved any issues.
Result
You created a CSAR containing your VM image. It will be in the target/images directory:
ls target/images
http-example-0.1.0-vsphere-csar.zip
If you are using OpenStack, it will be named -openstack-csar.zip instead of -vsphere-csar.zip.
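If you would like to sanity-check the artifact before moving on, you can list the CSAR's contents, since a CSAR is a standard zip archive. The exact entries vary by version, but you should see the VM image and CSAR metadata.
# Optional: list the contents of the CSAR (substitute the -openstack-csar.zip name if applicable).
unzip -l target/images/http-example-0.1.0-vsphere-csar.zip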
Next step
In the next step Setting up SIMPL VM and CDS, you will deploy the SIMPL VM and prepare for the deployment of the example VMs from the CSAR you created. Click here to progress to the next step.
