[Service Fabric] Using the Azure Files Volume driver with multiple volumes

I was recently working with a customer who has an application running in a Windows container, and that application outputs log files into different folders inside of the container. Their log extraction process was to manually remote desktop into the node, then go into the container to get the logs out.

This blog post, along with its samples and steps, is what I used to help them understand how to use the Service Fabric Azure Files Volume driver.

Prerequisites

To implement the solution, you will need:

1. An existing Azure Container Registry with your container images uploaded.

2. A secure Azure Service Fabric cluster. You will need to modify your current cluster configuration to allow the use of the Service Fabric Azure Files Volume driver; this can be done by modifying your deployment ARM template.

3. A Service Fabric container application. You will need to modify your application's ApplicationManifest.xml file for the volume driver configuration. By Service Fabric container application, I mean one created from the Visual Studio Service Fabric application template using the Container service template.

What is not included in this blog post

There are improvements that can be made to this solution prior to releasing it to production. What has not been added to the solution is:

1. How to secure your Azure Storage keys in the ApplicationManifest.xml file.

2. How to secure your Azure Container Registry password in the ApplicationManifest.xml file.

3. How to secure your Azure Files shared drive endpoints.

Creating a storage account with Azure Files shares

Although your Service Fabric deployment already has at least two storage accounts, it is best to create a separate storage account to use for your Azure Files shares. This way, you can secure this storage account separately.

To create a storage account with Azure Files and then create your file shares, follow the steps at https://docs.microsoft.com/en-us/azure/storage/files/storage-files-quick-create-use-windows.

For this sample application, I created 3 file shares:

· webapperror – Will contain log files that represent any trace information classified as LogError level or worse

· webappinfo – Will contain log files that represent any trace information classified as LogInformation or worse. This log file will also contain the container logs.

· webappwarn – Will contain log files that represent any trace information classified as LogWarning or worse

Configure your Service Fabric cluster

In order to use the Service Fabric Azure Files Volume driver, you will need to modify your current Service Fabric cluster configuration.

The instructions for the full setup are located at https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-containers-volume-logging-drivers, but I will cover them here in more detail.

There are 2 ways to do this:

1. If you have an existing ARM deployment template:

a. Modify the fabricSettings section of your ARM template by adding:

[Image: fabricSettings addition for the Azure Files volume plugin]
You may decide that you want to use a different port number, and that is ok.
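Based on the documentation linked above, the entry to add is a VolumePluginPorts parameter under the Hosting section of fabricSettings. A sketch (using port 19100; substitute your preferred port):

```json
"fabricSettings": [
  {
    "name": "Hosting",
    "parameters": [
      {
        "name": "VolumePluginPorts",
        "value": "AzureFilesVolumePlugin:19100"
      }
    ]
  }
]
```

If your template already has a Hosting section, add only the VolumePluginPorts parameter to it rather than a second Hosting entry.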

b. Redeploy the ARM template to update your cluster. Depending on the size of the cluster, this may take a while.

2. Use the Azure Resource Explorer site.

a. Go to https://resources.azure.com. Log in with your Azure subscription credentials.

b. Click on subscriptions | <your subscription> | resourceGroups | <yourClusterResourceGroupName> | providers | Microsoft.ServiceFabric | clusters | <yourClusterName>.

[Image: resources.azure.com resource tree navigation]

c. Over to the right, click on the Edit button:

[Image: Edit button in Azure Resource Explorer]

d. Find the fabricSettings section in your template and update it with the configuration information. Be careful of where you put your commas!

[Image: updated fabricSettings section]

e. Click on the Put button at the top of the template. This will kick off the cluster update process that may take a while.

[Image: Put button at the top of the template]

Deploying the Service Fabric Azure Files Volume driver

1. Download the PowerShell script to install the Azure Files volume driver from https://sfazfilevd.blob.core.windows.net/sfazfilevd/DeployAzureFilesVolumeDriver.zip.

2. Once you have unzipped the package, open PowerShell ISE in the directory where the DeployAzureFilesVolumeDriver.ps1 file is located. Make sure you change the PowerShell command prompt window to that same directory.

3. Run the following command for Windows:
.\DeployAzureFilesVolumeDriver.ps1 -subscriptionId [subscriptionId] -resourceGroupName [resourceGroupName] -clusterName [clusterName] -windows

Or – this command for Linux

.\DeployAzureFilesVolumeDriver.ps1 -subscriptionId [subscriptionId] -resourceGroupName [resourceGroupName] -clusterName [clusterName] -linux

4. Wait until the deployment completes.

5. Open the Service Fabric Explorer to verify that the Azure Files Volume driver application has been installed:

[Image: Azure Files Volume driver application in Service Fabric Explorer]

Your Service Fabric application (container template) setup

You will need to modify the ApplicationManifest.xml file of your Service Fabric container application, not the actual container image or the application within the container image. This assumes that you have applications running in the containers and that you know where the log folders are. The sample application is located here.

Modify your ApplicationManifest.xml file

In my sample application's ApplicationManifest.xml file, I am mapping 3 volume shares to the 3 directories where my application (inside the container) drops log files:

[Image: Volume elements in the ApplicationManifest.xml file]

You will need to add the <Volume> element in your ApplicationManifest.xml file. Pay close attention to these settings:

· Source – the volume name. You can name it anything you want; it does not have to match anything in your storage account or a folder path name.

· Destination – the path to your log file location(s) inside of the container.

· DriverOption Value – the name of your Azure Files share.
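Pulling this together, here is a minimal sketch of one <ServiceManifestImport> section for the three shares created earlier. The service manifest name, versions, volume names, and container paths are illustrative assumptions from my sample; the shareName values match the shares created above, and the storageAccountName/storageAccountKey driver options (placeholders here) hold your storage account's name and key:

```xml
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="WebAppPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- One <Volume> element per Azure Files share / log folder pair -->
      <Volume Source="errorvolume" Destination="C:\app\logs\error" Driver="sfazurefile">
        <DriverOption Name="shareName" Value="webapperror" />
        <DriverOption Name="storageAccountName" Value="[yourStorageAccountName]" />
        <DriverOption Name="storageAccountKey" Value="[yourStorageAccountKey]" />
      </Volume>
      <Volume Source="infovolume" Destination="C:\app\logs\info" Driver="sfazurefile">
        <DriverOption Name="shareName" Value="webappinfo" />
        <DriverOption Name="storageAccountName" Value="[yourStorageAccountName]" />
        <DriverOption Name="storageAccountKey" Value="[yourStorageAccountKey]" />
      </Volume>
      <Volume Source="warnvolume" Destination="C:\app\logs\warn" Driver="sfazurefile">
        <DriverOption Name="shareName" Value="webappwarn" />
        <DriverOption Name="storageAccountName" Value="[yourStorageAccountName]" />
        <DriverOption Name="storageAccountKey" Value="[yourStorageAccountKey]" />
      </Volume>
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>
```

Storing the account key in plain text here is exactly the security gap called out at the top of this post; secure it before production use.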

Once you modify your ApplicationManifest.xml file, redeploy your Service Fabric container application to your cluster.

Testing the Azure Files share

If everything is working correctly, you should be able to go to your Azure Files share in the Azure portal, and see files in that share:

[Image: log files in the Azure Files share in the Azure portal]

Some notes about the Azure Files share:

· Every file that is dropped into your application's log folder(s) will appear in the share

· You can download the current file from within the Azure portal or simply view its contents

· Your application needs to use a different name for each log file, based on the name of the cluster node it is running on. This way, you’ll know which node the information is coming from. Within a Service Fabric application, you can find sample code showing how to query for this information at https://stackoverflow.com/questions/43959312/how-to-get-name-of-node-on-which-my-code-is-executing-in-azure-fabric-service.

If you have a guest executable running inside of the container, chances are, you are not going to have any Service Fabric framework code; therefore, you can’t use the FabricClient to query that kind of information. In this case, you need to come up with a different naming convention to know which machine the log file came from.

You can undoubtedly get the name of the machine by doing System.Environment.MachineName, but you can’t see the actual machine names in the Azure Portal or Service Fabric Explorer, so using the machine name may be too much of a challenge. Remember, if Service Fabric replaces a node for some reason, you could end up with a different machine name.

My sample application, which is a .NET Core 3.1 application (not a Service Fabric framework app), uses Serilog and grabs the machine name.
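The naming idea itself is simple to sketch. The following is an illustrative Python sketch, not the sample app's actual .NET/Serilog code; the webapp- prefix and the combination of level, host name, and date are assumptions:

```python
import socket
from datetime import datetime, timezone

def log_file_name(level: str) -> str:
    """Build a log file name that identifies the machine it came from.

    When every container writes into the same Azure Files share,
    embedding the host name (and a date) in the file name lets you
    tell which machine each log file came from.
    """
    host = socket.gethostname()          # machine/container host name
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
    return f"webapp-{level}-{host}-{stamp}.log"

print(log_file_name("error"))
```

The same pattern works for the info and warn logs by passing a different level string, so each share still ends up with per-machine file names.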