Upload a file from an Azure Windows Server Core machine to Azure Blob storage

The title for this could be a lot longer, like 'How to upload a file using the Azure CLI to Azure Storage on Windows Server 2016 Core DataCenter', because that's what this blog post is about…but that's a ridiculously long title. The point is, with a Core OS you have no UI and very few of the capabilities and tools you would normally have on a full UI OS. Here's what happened…

I was working on what appeared to be an issue with the new Azure Backup and Restore service on an Azure Service Fabric Cluster. I needed to generate a crash dump from the FabricBRS service and send it to Azure support.

I remote desktopped in to the VM in my VM Scale Set that was hosting the FabricBRS service, ran 'taskmgr' in the command prompt, then right-clicked on the FabricBRS process and selected 'Create dump file'.

[Screenshot: Task Manager with 'Create dump file' selected on the FabricBRS process]

The crash dump file is written to a location like C:\Users\<yourlogin>\AppData\Local\Temp\2\FabricBRS.DMP. And this is where the fun began. How do I get the file from here to my Azure Storage account?

1. To make it easy to find the file, I copied it from the above directory right to the C:\ drive on the machine.

2. I figured the easiest way to do this was to install the Azure CLI and then use the CLI with my storage account to upload the file to blob storage. The MSI is available at https://aka.ms/installazurecliwindows. From your current command prompt window you could type 'PowerShell', which starts PowerShell inside that same command prompt, but what you actually need is a new PowerShell window.

To do this, run the following command:

Invoke-Item C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe

3. Within the new PowerShell command prompt, to download the file, I ran the following commands:
$downloader = New-Object System.Net.WebClient
$downloader.DownloadFile("https://aka.ms/installazurecliwindows", "C:\azure-cli-2.0.54.msi")
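
As an alternative, Invoke-WebRequest (available in the PowerShell that ships with Server 2016) should do the same download in one line; -UseBasicParsing avoids the Internet Explorer dependency that isn't present on Core:

Invoke-WebRequest -Uri "https://aka.ms/installazurecliwindows" -OutFile "C:\azure-cli-2.0.54.msi" -UseBasicParsing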

You can do a 'dir' on your C:\ drive to make sure it's there.

[Screenshot: directory listing showing azure-cli-2.0.54.msi on the C:\ drive]

4. To install the Azure CLI, run the command:

Start-Process C:\azure-cli-2.0.54.msi

You will be taken through a wizard that steps you through the install process. When you are finished, you are SUPPOSED to be able to just type 'az' into the command prompt window and get back some response from the Azure CLI. However, I found that I instead had to go out to the Azure portal, restart my VM Scale Set and then log back in to the machine's command prompt.
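
As a side note, if you would rather skip the wizard entirely on a Core box, an unattended install via standard msiexec switches should also work (this is plain MSI behavior, not anything specific to the Azure CLI installer):

# /quiet suppresses the wizard; -Wait blocks until the install finishes
Start-Process msiexec.exe -ArgumentList '/i C:\azure-cli-2.0.54.msi /quiet' -Wait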

5. At this point the Azure CLI should be available and you know where your crash dump file is. Now you need to upload it to a blob storage container in your Azure Storage account.

6. Make sure you have a storage account and a specific container to upload the file into. Grab the name of your storage account and then the storage account key.
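
If you want to stay in the CLI for this part too, something like the following should do it (assuming you know the resource group name; on a UI-less box, 'az login' falls back to a device-code prompt that you complete from another machine's browser):

az login
az storage account keys list --account-name <your-storage-account-name> --resource-group <your-resource-group> --output table
az storage container create --name <your-container-name> --account-name <your-storage-account-name> --account-key <your-storage-account-key>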

7. Run the following command using the Azure CLI:

az storage blob upload --account-name <your-storage-account-name> --account-key <your-storage-account-key> --container-name <your-container-name> --file 'C:\FabricBRS.DMP' --name 'FabricBRS.DMP'

Note that each parameter is prefixed with a double hyphen ('--') with no space in between; depending on your font it can be hard to distinguish from a single hyphen. Also, the 'name' parameter is the name you want the file to be shown as in your blob storage container.
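
To double-check that the upload landed, you can list the blobs in the container:

az storage blob list --account-name <your-storage-account-name> --account-key <your-storage-account-key> --container-name <your-container-name> --output table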

I hope this blog post saves someone who needs to do the same type of exercise a bit of time!

[Service Fabric] Securing an Azure Service Fabric cluster with Azure Active Directory via the Azure Portal

Appropriate credit first: some of the information for the steps below came from this great article: https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-via-arm. That article focuses on using an ARM template to do the deployment and secure setup, which is the way you would want to set this up in a production environment.

My post is going to be using a more manual approach where you set the cluster up via the portal, for those of you who are just testing this out and want to learn how things are done via the portal. Some of the steps for setting this up in the portal will be short and will point to another article for portal setup of the cluster, but it’s the security setup we want to focus on.

So we will discuss:

  • Setup of your cluster certificate
  • Setup of Azure AD
  • Azure Cluster creation
  • Testing your Admin and Read-only user login

Setting up your cluster certificate

My purpose for using Azure Active Directory (AAD) was to set up an admin user and a read-only user so they could access Service Fabric Explorer with those permissions. You still need a cluster certificate to secure the cluster, though.

  1. Open PowerShell ISE as an Administrator.
  2. In the PowerShell command window, log in to your Azure subscription using 'Login-AzureRmAccount'. When you do this, the command window will display the subscriptionID. Copy the subscriptionID, because you will need it in the PowerShell script that creates the Azure AD application. Also copy the tenantID value.
  3. Run the following PS script. This script creates a new resource group for your key vault, a key vault, a self-signed certificate and a secret. You will need to record the information that appears in the PS command prompt output window after the successful execution of this script. Fill in the variables with your own values. Note that it is usually best to save the script code below to a PS file first.
#-----You have to change the variable values-------#
# This script will:
#  1. Create a new resource group for your key vaults
#  2. Create a new key vault
#  3. Create, export and import (to your certificate store) a self-signed certificate
#  4. Create a new secret and put the cert in key vault
#  5. Output the values you will need to supply to your cluster for cluster cert security.
#     Make sure you copy these values before closing the PowerShell window
#--------------------------------------------------#

#Name of the Key Vault service
$KeyVaultName = "<your-vault-name>" 
#Resource group for the Key-Vault service. 
$ResourceGroup = "<vault-resource-group-name>"
#Set the Subscription
$subscriptionId = "<your-Azure-subscription-ID>" 
#Azure data center location ("East US", "West US" etc.)
$Location = "<region>"
#Password for the certificate
$Password = "<certificate-password>"
#DNS name for the certificate
$CertDNSName = "<name-of-your-certificate>"
#Name of the secret in key vault
$KeyVaultSecretName = "<secret-name-for-cert-in-vault>"
#Path to directory on local disk in which the certificate is stored  
$CertFileFullPath = "C:\<local-directory-to-place-your-exported-cert>\$CertDNSName.pfx"

#If more than one under your account
Select-AzureRmSubscription -SubscriptionId $subscriptionId
#Verify Current Subscription
Get-AzureRmSubscription -SubscriptionId $subscriptionId


#Creates a new resource group and Key Vault
New-AzureRmResourceGroup -Name $ResourceGroup -Location $Location
New-AzureRmKeyVault -VaultName $KeyVaultName -ResourceGroupName $ResourceGroup -Location $Location -sku standard -EnabledForDeployment 

#Converts the plain text password into a secure string
$SecurePassword = ConvertTo-SecureString -String $Password -AsPlainText -Force

#Creates a new selfsigned cert and exports a pfx cert to a directory on disk
$NewCert = New-SelfSignedCertificate -CertStoreLocation Cert:\CurrentUser\My -DnsName $CertDNSName 
Export-PfxCertificate -FilePath $CertFileFullPath -Password $SecurePassword -Cert $NewCert
Import-PfxCertificate -FilePath $CertFileFullPath -Password $SecurePassword -CertStoreLocation Cert:\LocalMachine\My 

#Reads the content of the certificate and converts it into a json format
$Bytes = [System.IO.File]::ReadAllBytes($CertFileFullPath)
$Base64 = [System.Convert]::ToBase64String($Bytes)

$JSONBlob = @{
    data = $Base64
    dataType = 'pfx'
    password = $Password
} | ConvertTo-Json

$ContentBytes = [System.Text.Encoding]::UTF8.GetBytes($JSONBlob)
$Content = [System.Convert]::ToBase64String($ContentBytes)

#Converts the json content into a secure string
$SecretValue = ConvertTo-SecureString -String $Content -AsPlainText -Force

#Creates a new secret in Azure Key Vault
$NewSecret = Set-AzureKeyVaultSecret -VaultName $KeyVaultName -Name $KeyVaultSecretName -SecretValue $SecretValue -Verbose

#Writes out the information you need for creating a secure cluster
Write-Host
Write-Host "Resource Id: "$(Get-AzureRmKeyVault -VaultName $KeyVaultName).ResourceId
Write-Host "Secret URL : "$NewSecret.Id
Write-Host "Thumbprint : "$NewCert.Thumbprint

The information you need to record will appear similar to this:

Resource Id: /subscriptions/<your-subscriptionID>/resourceGroups/<your-resource-group>/providers/Microsoft.KeyVault/vaults/<your-vault-name>

Secret URL : https://<your-vault-name>.vault.azure.net:443/secrets/<secret>/<generated-guid>

Thumbprint : <certificate-thumbprint>
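
Before moving on, you may want to confirm the certificate actually landed in the local machine store. A quick sanity check using the thumbprint the script printed (not a required step):

Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Thumbprint -eq '<certificate-thumbprint>' }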

Setting up Azure Active Directory

  1. To secure the cluster with Azure AD, you will need to decide which AD directory in your subscription you will be using. In this example, we will use the 'default' directory. In step 2 above, you should have recorded the 'tenantID'. This is the ID associated with your default Active Directory. NOTE: If you have more than one directory (or tenant) in your subscription, make sure you get the right tenantID from your AAD administrator. The first piece of script, which you need to save to a file named Common.ps1, is:
<#
.VERSION
1.0.3

.SYNOPSIS
Common script, do not call it directly.
#>

if($headers){
    Exit
}

Try
{
    $FilePath = Join-Path $PSScriptRoot "Microsoft.IdentityModel.Clients.ActiveDirectory.dll"
    Add-Type -Path $FilePath
}
Catch
{
    Write-Warning $_.Exception.Message
}

function GetRESTHeaders()
{
	# Use common client 
    $clientId = "1950a258-227b-4e31-a9cf-717495945fc2"
    $redirectUrl = "urn:ietf:wg:oauth:2.0:oob"
    
    $authenticationContext = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext -ArgumentList $authString, $FALSE

    $accessToken = $authenticationContext.AcquireToken($resourceUrl, $clientId, $redirectUrl, [Microsoft.IdentityModel.Clients.ActiveDirectory.PromptBehavior]::RefreshSession).AccessToken
    $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
    $headers.Add("Authorization", $accessToken)
    return $headers
}

function CallGraphAPI($uri, $headers, $body)
{
    $json = $body | ConvertTo-Json -Depth 4 -Compress
    return (Invoke-RestMethod $uri -Method Post -Headers $headers -Body $json -ContentType "application/json")
}

function AssertNotNull($obj, $msg){
    if($obj -eq $null -or $obj.Length -eq 0){ 
        Write-Warning $msg
        Exit
    }
}

# Regional settings
switch ($Location)
{
    "china"
    {
        $resourceUrl = "https://graph.chinacloudapi.cn"
        $authString = "https://login.partner.microsoftonline.cn/" + $TenantId
    }

    default
    {
        $resourceUrl = "https://graph.windows.net"
        $authString = "https://login.microsoftonline.com/" + $TenantId
    }
}

$headers = GetRESTHeaders

if ($ClusterName)
{
    $WebApplicationName = $ClusterName + "_Cluster"
    $WebApplicationUri = "https://$ClusterName"
    $NativeClientApplicationName = $ClusterName + "_Client"
}

You do not need to execute this script; it will be called by the next script, so make sure Common.ps1 is in the same folder as the next script.

2.  Create a new script file and paste in the code below. Name this file SetupApplications.ps1. Note that you will need to record some of the output from the execution of this script (explained below) for later use.

<#
.VERSION
1.0.3

.SYNOPSIS
Setup applications in a Service Fabric cluster Azure Active Directory tenant.

.PREREQUISITE
1. An Azure Active Directory tenant.
2. A Global Admin user within tenant.

.PARAMETER TenantId
ID of tenant hosting Service Fabric cluster.

.PARAMETER WebApplicationName
Name of web application representing Service Fabric cluster.

.PARAMETER WebApplicationUri
App ID URI of web application.

.PARAMETER WebApplicationReplyUrl
Reply URL of web application. Format: https://<Domain name of cluster>:<Service Fabric Http gateway port>

.PARAMETER NativeClientApplicationName
Name of native client application representing client.

.PARAMETER ClusterName
A friendly Service Fabric cluster name. Application settings generated from cluster name: WebApplicationName = ClusterName + "_Cluster", NativeClientApplicationName = ClusterName + "_Client"

.PARAMETER Location
Used to set metadata for specific region: china. Ignore it in global environment.

.EXAMPLE
. Scripts\SetupApplications.ps1 -TenantId '4f812c74-978b-4b0e-acf5-06ffca635c0e' -ClusterName 'MyCluster' -WebApplicationReplyUrl 'https://mycluster.westus.cloudapp.azure.com:19080'

Setup tenant with default settings generated from a friendly cluster name.

.EXAMPLE
. Scripts\SetupApplications.ps1 -TenantId '4f812c74-978b-4b0e-acf5-06ffca635c0e' -WebApplicationName 'SFWeb' -WebApplicationUri 'https://SFweb' -WebApplicationReplyUrl 'https://mycluster.westus.cloudapp.azure.com:19080' -NativeClientApplicationName 'SFnative'

Setup tenant with explicit application settings.

.EXAMPLE
. $ConfigObj = Scripts\SetupApplications.ps1 -TenantId '4f812c74-978b-4b0e-acf5-06ffca635c0e' -ClusterName 'MyCluster' -WebApplicationReplyUrl 'https://mycluster.westus.cloudapp.azure.com:19080'

Setup and save the setup result into a temporary variable to pass into SetupUser.ps1
#>

Param
(
    [Parameter(ParameterSetName='Customize',Mandatory=$true)]
    [Parameter(ParameterSetName='Prefix',Mandatory=$true)]
    [String]
	$TenantId,

    [Parameter(ParameterSetName='Customize')]	
	[String]
	$WebApplicationName,

    [Parameter(ParameterSetName='Customize')]
	[String]
	$WebApplicationUri,

    [Parameter(ParameterSetName='Customize',Mandatory=$true)]
    [Parameter(ParameterSetName='Prefix',Mandatory=$true)]
	[String]
    $WebApplicationReplyUrl,
	
    [Parameter(ParameterSetName='Customize')]
	[String]
	$NativeClientApplicationName,

    [Parameter(ParameterSetName='Prefix',Mandatory=$true)]
    [String]
    $ClusterName,

    [Parameter(ParameterSetName='Prefix')]
    [Parameter(ParameterSetName='Customize')]
    [ValidateSet('china')]
    [String]
    $Location
)

Write-Host 'TenantId = ' $TenantId

. "$PSScriptRoot\Common.ps1"

$graphAPIFormat = $resourceUrl + "/" + $TenantId + "/{0}?api-version=1.5"
$ConfigObj = @{}
$ConfigObj.TenantId = $TenantId

$appRole = 
@{
    allowedMemberTypes = @("User")
    description = "ReadOnly roles have limited query access"
    displayName = "ReadOnly"
    id = [guid]::NewGuid()
    isEnabled = "true"
    value = "User"
},
@{
    allowedMemberTypes = @("User")
    description = "Admins can manage roles and perform all task actions"
    displayName = "Admin"
    id = [guid]::NewGuid()
    isEnabled = "true"
    value = "Admin"
}

$requiredResourceAccess =
@(@{
    resourceAppId = "00000002-0000-0000-c000-000000000000"
    resourceAccess = @(@{
        id = "311a71cc-e848-46a1-bdf8-97ff7156d8e6"
        type= "Scope"
    })
})

if (!$WebApplicationName)
{
	$WebApplicationName = "ServiceFabricCluster"
}

if (!$WebApplicationUri)
{
	$WebApplicationUri = "https://ServiceFabricCluster"
}

if (!$NativeClientApplicationName)
{
	$NativeClientApplicationName =  "ServiceFabricClusterNativeClient"
}

#Create Web Application
$uri = [string]::Format($graphAPIFormat, "applications")
$webApp = @{
    displayName = $WebApplicationName
    identifierUris = @($WebApplicationUri)
    homepage = $WebApplicationReplyUrl #Not functionally needed. Set by default to avoid AAD portal UI displaying error
    replyUrls = @($WebApplicationReplyUrl)
    appRoles = $appRole
}

switch ($Location)
{
    "china"
    {
        $oauth2Permissions = @(@{
            adminConsentDescription = "Allow the application to access " + $WebApplicationName + " on behalf of the signed-in user."
            adminConsentDisplayName = "Access " + $WebApplicationName
            id = [guid]::NewGuid()
            isEnabled = $true
            type = "User"
            userConsentDescription = "Allow the application to access " + $WebApplicationName + " on your behalf."
            userConsentDisplayName = "Access " + $WebApplicationName
            value = "user_impersonation"
        })
        $webApp.oauth2Permissions = $oauth2Permissions
    }
}

$webApp = CallGraphAPI $uri $headers $webApp
AssertNotNull $webApp 'Web Application Creation Failed'
$ConfigObj.WebAppId = $webApp.appId
Write-Host 'Web Application Created:' $webApp.appId

#Service Principal
$uri = [string]::Format($graphAPIFormat, "servicePrincipals")
$servicePrincipal = @{
    accountEnabled = "true"
    appId = $webApp.appId
    displayName = $webApp.displayName
    appRoleAssignmentRequired = "true"
}
$servicePrincipal = CallGraphAPI $uri $headers $servicePrincipal
$ConfigObj.ServicePrincipalId = $servicePrincipal.objectId

#Create Native Client Application
$uri = [string]::Format($graphAPIFormat, "applications")
$nativeAppResourceAccess = $requiredResourceAccess +=
@{
    resourceAppId = $webApp.appId
    resourceAccess = @(@{
        id = $webApp.oauth2Permissions[0].id
        type= "Scope"
    })
}
$nativeApp = @{
    publicClient = "true"
    displayName = $NativeClientApplicationName
    replyUrls = @("urn:ietf:wg:oauth:2.0:oob")
    requiredResourceAccess = $nativeAppResourceAccess
}
$nativeApp = CallGraphAPI $uri $headers $nativeApp
AssertNotNull $nativeApp 'Native Client Application Creation Failed'
Write-Host 'Native Client Application Created:' $nativeApp.appId
$ConfigObj.NativeClientAppId = $nativeApp.appId

#Service Principal
$uri = [string]::Format($graphAPIFormat, "servicePrincipals")
$servicePrincipal = @{
    accountEnabled = "true"
    appId = $nativeApp.appId
    displayName = $nativeApp.displayName
}
$servicePrincipal = CallGraphAPI $uri $headers $servicePrincipal

#OAuth2PermissionGrant

#AAD service principal
$uri = [string]::Format($graphAPIFormat, "servicePrincipals") + '&$filter=appId eq ''00000002-0000-0000-c000-000000000000'''
$AADServicePrincipalId = (Invoke-RestMethod $uri -Headers $headers).value.objectId

$uri = [string]::Format($graphAPIFormat, "oauth2PermissionGrants")
$oauth2PermissionGrants = @{
    clientId = $servicePrincipal.objectId
    consentType = "AllPrincipals"
    resourceId = $AADServicePrincipalId
    scope = "User.Read"
    startTime = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffffff")
    expiryTime = (Get-Date).AddYears(1800).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffffff")
}
CallGraphAPI $uri $headers $oauth2PermissionGrants | Out-Null
$oauth2PermissionGrants = @{
    clientId = $servicePrincipal.objectId
    consentType = "AllPrincipals"
    resourceId = $ConfigObj.ServicePrincipalId
    scope = "user_impersonation"
    startTime = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffffff")
    expiryTime = (Get-Date).AddYears(1800).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffffff")
}
CallGraphAPI $uri $headers $oauth2PermissionGrants | Out-Null

$ConfigObj

#ARM template
Write-Host
Write-Host '-----ARM template-----'
Write-Host '"azureActiveDirectory": {'
Write-Host ("  `"tenantId`":`"{0}`"," -f $ConfigObj.TenantId)
Write-Host ("  `"clusterApplication`":`"{0}`"," -f $ConfigObj.WebAppId)
Write-Host ("  `"clientApplication`":`"{0}`"" -f $ConfigObj.NativeClientAppId)
Write-Host "},"

3.  Execute the following command from the PS command prompt window:

.\SetupApplications.ps1 -TenantId '<your-tenantID>' -ClusterName '<your-cluster-name>.<region>.cloudapp.azure.com' -WebApplicationReplyUrl 'https://<your-cluster-name>.<region>.cloudapp.azure.com:19080/Explorer/index.html'

The ClusterName is used to prefix the AAD applications created by the script. It does not need to match the actual cluster name exactly as it is only intended to make it easier for you to map AAD artifacts to the Service Fabric cluster that they’re being used with. This can be a bit confusing because you haven’t created your cluster yet. But, if you know what name you plan to give your cluster, you can use it here.

The WebApplicationReplyUrl is the default endpoint that AAD returns to your users after completing the sign-in process. You should set this to the Service Fabric Explorer endpoint for your cluster, which by default is:

https://<cluster_domain>:19080/Explorer

For a full set of AAD helper scripts, see http://servicefabricsdkstorage.blob.core.windows.net/publicrelease/MicrosoftAzureServiceFabric-AADHelpers.zip.

Record the information at the bottom of the command prompt window. You will need this information when you deploy your cluster from the portal. The information will look similar to what you see below.

"azureActiveDirectory": {
  "tenantId":"<Your-AAD-tenantID>",
  "clusterApplication":"1xxxxxxxx-x68e-490a-89c8-2894e4b8686a",
  "clientApplication":"xxxxxxx-7825-4e1e-a586-f0ff8d9e679e"
},

NOTE: You may receive a Warning that you have a missing assembly. You can ignore this warning.

4.  After you run the script in step 3, log in to the classic Azure portal at https://manage.windowsazure.com. For now you need to use the classic portal, because the Azure Active Directory features in the production portal are still in preview.

5.  Find your Azure Active Directory in the list and click on it.

[Screenshot: the directory list in the classic Azure portal]

6.  Add 2 new users to your directory. Name them whatever you want just as long as you know which one is Admin and which one would be the read-only user. Make sure to record the password that is initially generated, because the first time you try to log in to the portal as this user, you will be asked to change the password.

7.  Within your AAD, click on the Applications menu. In the Show drop-down box, pick Applications My Company Owns and then click on the check button over to the right to do a search.

[Screenshot: the Applications search in the classic portal]

8.  You should see two applications listed. One will be for Native client applications and the other for Web Applications. Click on the application name for the web application type. Since we will be doing our connectivity test connecting to the Service Fabric Explorer web UI, this is the application we need to set the user permissions on.

[Screenshot: the two applications created by the script]

9.  Click on the Users menu.

10.  Click on the user name that should be the administrator and then select the Assign button at the bottom of the portal window.

11.  In the Assign Users dialog box, pick Admin from the dropdown box and select the check button.

[Screenshot: the Assign Users dialog with the Admin role selected]

12.  Repeat steps 10 and 11, but this time select ReadOnly from the Assign Users drop-down for the read-only user. This completes what you need to do in the classic portal, so you can close the classic portal window.

13.  You now have all the information you need to create your cluster in the portal. Log in to the Azure Portal at https://portal.azure.com.

Creating your Service Fabric cluster

  1. Create a new resource group and then, within the resource group, start the process of adding a new Service Fabric Cluster. As you step through creating the cluster, you will find four core blades with information you need to provide:
    • Basic – unique name for the cluster, operating system, username/password for RDP access etc.
    • Cluster configuration – node type count, node configuration, diagnostics etc.
    • Security – this is where we want to focus in the next step…
    • Summary – validation and creation

NOTE: If you want more details about creating your Service Fabric cluster via the portal, go to https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-via-portal. Note that the procedures at that link use certificates for Admin and Read-only user setup, not AAD.

2.  In the cluster Security blade, make sure the Security mode is set to Secure. It is by default.

3.  The output from the first PS script you executed contains the values you need for the Primary certificate section. After you enter your recorded information, make sure to select the Configure advanced settings checkbox.

[Screenshot: the Security blade with the primary certificate fields]

4.  By selecting the Configure advanced settings checkbox, the blade will expand (vertically) and you can scroll down in the blade to find the area where you need to enter the Active Directory information. The information you recorded when you executed SetupApplications.ps1 will be used here.

[Screenshot: the expanded Security blade with the Active Directory fields]

5.  Select the OK button in the Security blade.

6.  Complete the Summary blade after the portal validates your settings.

Testing your Admin and Read-only user access

  1. Once the cluster has completed the creation process, make sure you log out of the Azure portal. This ensures that when you attempt to log in as the Admin or Read-only user, you will not accidentally log in to the portal as the subscription administrator.
  2. Log in to the portal as either the Admin or Read-only user. You will need to change the temporary password you were given earlier, and then the login will complete.
  3. Open a new browser window and browse to https://<yourfullclustername>:19080/Explorer/. Test the Explorer functionality. You can also test from PowerShell, as sketched below.
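
For the PowerShell test, the following sketch should work once the Service Fabric SDK is installed on your machine; the -AzureActiveDirectory switch pops up the AAD sign-in for whichever user you are testing:

Connect-ServiceFabricCluster -ConnectionEndpoint '<yourfullclustername>:19000' `
    -AzureActiveDirectory `
    -ServerCertThumbprint '<your-cluster-certificate-thumbprint>'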

 

Hope this helps you in your work with Azure Service Fabric!

[Service Fabric] Azure Service Fabric with Azure Diagnostics and NLog log retrieval

I was recently working with a customer where we were migrating some of their Azure Cloud Services over to Azure Service Fabric.

Within their current Cloud Services, there is a process where NLog will drop a log file to a Local Resource folder. Azure Diagnostics is then used to pick up this log file and place it in a container in Azure storage for another process to pick up for processing.

With Service Fabric, we do not have the same concept of a Local Resource folder. Also, with Service Fabric, or more precisely with Azure Resource Manager deployments, the only way to change Azure Diagnostics settings is through an ARM template.

In order to get the NLog process of dropping the file to a folder to work, the following JSON code has to be added to the ARM template:

[Screenshot: the JSON diagnostics settings added to the ARM template]
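
Since the screenshot is hard to read here, the general shape of the addition is a 'Directories' data source inside the WadCfg's DiagnosticMonitorConfiguration. Treat the snippet below as a sketch: the container name and folder path are placeholders, and you should verify the field names against the current Azure Diagnostics schema for your extension version:

"Directories": {
  "scheduledTransferPeriod": "PT1M",
  "DataSources": [
    {
      "containerName": "<your-log-container>",
      "Absolute": {
        "path": "C:\\<node-folder>\\<nlog-output-folder>",
        "expandEnvironment": false
      }
    }
  ]
}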

In order to find where to put the above code, you need to find the Service Fabric VM Scale set cluster (shown below in Visual Studio), select it, then in the JSON code, scroll down to where you see "publisher": "Microsoft.Azure.Diagnostics":

[Screenshot: the VM scale set resource JSON in Visual Studio, showing "publisher": "Microsoft.Azure.Diagnostics"]

The blob storage account, blob container name, node folder and VM Scale set node name can be parameterized for more flexibility.

At this point in time, what cannot be parameterized or modified is how Azure Diagnostics places the file into the blob container. Azure Diagnostics sets up a specific folder structure under the container that looks like this:

[Screenshot: the folder structure Azure Diagnostics creates under the container]

For more information on some of the differences between Cloud Services and Service Fabric, check out:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cloud-services-migration-differences

 

Hope this helps you in your work with Azure Service Fabric!

[Service Fabric] Connecting to a remote Azure Service Fabric cluster using PowerShell

If you have struggled with getting the syntax and parameters correct on the Connect-ServiceFabricCluster PowerShell cmdlet like I have, here is the command with associated parameters that I know works every time.

This command is typically used to test connectivity from your personal machine to an X.509-secured cluster in Azure.

$ClusterName = "<your-full-cluster-name>:19000"
$CertThumbprint = "<your-certificate-thumbprint>"

Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName -KeepAliveIntervalInSec 10 `
    -X509Credential `
    -ServerCertThumbprint $CertThumbprint `
    -FindType FindByThumbprint `
    -FindValue $CertThumbprint `
    -StoreLocation CurrentUser `
    -StoreName My

Upon successful connection, you should see something similar to this:

[Screenshot: output of a successful Connect-ServiceFabricCluster call]

Hope this helps you in your work with Azure Service Fabric!

[Service Fabric] Unable to determine whether the application is installed on the cluster or not

 

I was working with a customer that was attempting to deploy a simple Service Fabric application from within Visual Studio 2015 to a secure cluster in Azure.

The cluster itself was secured via an X.509 certificate stored in Azure Key Vault. There were no applications deployed to the Azure cluster yet.

When we opened the publish dialog in Visual Studio, the first thing we noticed was the red circular 'x' icon to the right of the cluster endpoint address. The error message shown here is actually a pretty common one that can be caused by connectivity issues with the remote cluster: 'Failed to contact the server. Please try again later or get help from "How to configure secure connections".'

[Screenshot: the publish dialog showing the red error icon next to the connection endpoint]

 

We decided first to confirm that the thumbprint was correct and that it was installed on the machine from where the deployment was being attempted.

We then decided to go ahead and attempt to publish the app to the Azure cluster just to see what happens and see what error message appears. In the output window, we saw this:

 

1>------ Build started: Project: HealthApp, Configuration: Debug x64 ------
2>------ Publish started: Project: HealthApp, Configuration: Debug x64 ------
2>Unable to determine whether the application is installed on the cluster or not
========== Build: 1 succeeded, 0 failed, 3 up-to-date, 0 skipped ==========
========== Publish: 0 succeeded, 1 failed, 0 skipped ==========

 

Strange error. Was Visual Studio actually connecting to the cluster and then failing to confirm that the app was there, or was Visual Studio NOT connecting to the cluster at all, in which case of course it could not determine whether the app was there?

To make sure that from the deployment machine we could actually connect to the cluster, we opened PowerShell ISE (as an administrator) and ran the following command:

 

$ClusterName = "myclustername.eastus.cloudapp.azure.com:19000"
$CertThumbprint = "<my-certificate-thumbprint>"

Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName -KeepAliveIntervalInSec 10 `
    -X509Credential `
    -ServerCertThumbprint $CertThumbprint `
    -FindType FindByThumbprint `
    -FindValue $CertThumbprint `
    -StoreLocation CurrentUser `
    -StoreName My

 

We confirmed cluster connectivity:

[Screenshot: successful cluster connection output in PowerShell]

What ensued over the next few hours were redeployments of certificates, testing other apps for deployment, testing from other machines.

 

Eventually we discovered the problem: there was a hidden whitespace character in front of the thumbprint value in the publish dialog:

[Screenshot: the publish dialog with the hidden whitespace in front of the thumbprint]

And when I say hidden, I mean there was no indication at all that anything was there. Until we manually re-entered the certificate thumbprint and tried a new deployment, it still appeared that Visual Studio did not have connectivity to the cluster. Once we deployed successfully, the red icon went away.
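
If you keep thumbprints in scripts as well, a cheap way to guard against this class of problem is to strip whitespace from the value before using it (an illustrative habit, not something Service Fabric requires):

$CertThumbprint = '<paste-thumbprint-here>'
# strip any leading/trailing or embedded whitespace that came along with the paste
$CertThumbprint = ($CertThumbprint -replace '\s', '')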

 

Hope this helps you in your work with Azure Service Fabric!

[Service Fabric] Default service descriptions must not be modified as part of an upgrade

Recently (Service Fabric SDK v 2.4.145) I had deployed a Service Fabric application to my local cluster from Visual Studio. As part of an upgrade process after I changed something in the Settings.xml file for my service, I was executing the following PowerShell code:

 

Connect-ServiceFabricCluster localhost:19000

Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.' `
    -ImageStoreConnectionString 'file:C:\SfDevCluster\Data\ImageStoreShare' `
    -ApplicationPackagePathInImageStore 'SFConfigModified'

Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'SFConfigModified' -TimeoutSec 300

Start-ServiceFabricApplicationUpgrade `
    -ApplicationName 'fabric:/SFConfigModified' `
    -ApplicationTypeVersion '1.0.17' `
    -Monitored `
    -ForceRestart `
    -FailureAction Rollback

This is when I received the following error in the PowerShell command window:

 

Registering application type...
Register application type succeeded
Start-ServiceFabricApplicationUpgrade : Default service descriptions must not be modified as part of upgrade. Modified default service: fabric:/SFConfigModified/StatelessWebApi
At C:\Workshops\AzureSF\HelperFiles\testconfigchanged.ps1:13 char:1
+ Start-ServiceFabricApplicationUpgrade `
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Microsoft.Servi...usterConnection:ClusterConnection) [Start-ServiceFabricApplicationUpgrade], FabricException
    + FullyQualifiedErrorId : UpgradeApplicationErrorId,Microsoft.ServiceFabric.Powershell.StartApplicationUpgrade

 

For some reason, the upgrade thinks I have modified the <DefaultServices> section of my ApplicationManifest.xml file, although I know I did not modify it.

 

I have discovered that this issue is something that will be corrected in the near future, but how do you get this working again to be able to do updates from PowerShell?

 

Here is what I had to do to get things working again:

  1. Open Service Fabric Explorer and remove the application from the cluster (or do this from PowerShell, as sketched after this list).
  2. In my PS script, after the Register-ServiceFabricApplicationType command, execute the command to do a new deployment:
    New-ServiceFabricApplication -ApplicationName 'fabric:/SFConfigModified' -ApplicationTypeName 'SFConfigModifiedType' -ApplicationTypeVersion 1.0.15
  3. Comment out the New-ServiceFabricApplication line of code.
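
If you'd rather handle step 1 from PowerShell instead of Service Fabric Explorer, the following should work (assuming you are already connected to the cluster; the type name and version match the ones used above):

Remove-ServiceFabricApplication -ApplicationName 'fabric:/SFConfigModified' -Force
# optionally also unregister the old type/version before re-registering
Unregister-ServiceFabricApplicationType -ApplicationTypeName 'SFConfigModifiedType' -ApplicationTypeVersion '1.0.15' -Force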

 

From this point on, I could do upgrades to my application.

Handy Links for HPC and Azure

Lately, I've been working with customers who are doing both Windows HPC Server 2008 R2 SP1 and Windows Azure, and it seems that the experience level of customers runs the full range from just getting started in both areas to being experienced in one technology or the other.

I decided to put together a non-exhaustive set of links that could prove useful to those of all experience levels. I state that this list is non-exhaustive because there are always more links, more blogs, more books and so on to be found on the subject, but I have found that these links seem to be the most commonly used. And if I find other very useful links and information along the way, I'll make sure to update the list.

HPC Server

What is HPC Server?

What’s New in Windows HPC Server 2008 R2 SP1

Release Notes for Microsoft HPC Pack 2008 R2 Service Pack 1

New Feature Evaluation Guide for Windows HPC Server 2008 R2 SP1

Windows HPC Server 2008 R2 SP1 Download

Windows HPC Server 2008 R2 Resource Kit

HPC Server Administrator Course

Windows HPC Server R2 Suite

Windows HPC Tech Center

Microsoft HPC Pack 2008 SDK and HPC Pack 2008 R2 SDK

MSDN Forums

Windows HPC Server Developers – General

Windows HPC Server Deployment, Management, and Administration

Windows HPC Server Academia and Research – General
Windows HPC Server Message Passing Interface (MPI)

Windows HPC Server Job Submission and Scheduling
Windows HPC Server UNIX – Interoperability

 

Virtual Labs

TechNet Virtual Lab: Scheduling Jobs with Windows HPC Server 2008 R2

TechNet Virtual Lab: Managing and Monitoring HPC clusters Using Windows HPC Server 2008 R2

MSDN Virtual Lab: HPCv2: Introducing the Visual Studio 2010 Parallel Debugger

MSDN Virtual Lab: HPCv2: Introducing the Visual Studio 2010 Parallel Performance Analyzer

MSDN Virtual Lab: HPCv2: Introducing the C++ Concurrency Runtime

Scaling out VBA Enabled Excel Workbooks with  Windows HPC Server 2008 R2

Introducing .Net 4.0 Parallel Extensions

Introduction to Parallel LINQ

 

Training

HPC Developer Training

 

Blogs

Windows HPC Team Blog

Wenming Ye (Technical Evangelist)

 

Books

Windows HPC Server 2008 R2: Step by Step (free)

Parallel Programming with MPI

 

Windows Azure Platform

Windows Azure Platform

Windows Azure Developer Center

Training and Events

 

MSDN Forums

Windows Azure Platform Development

Windows Azure Platform Troubleshooting, Diagnostics & Logging

Windows Azure Storage

Connectivity for the Windows Azure Platform

Managing Services on the Windows Azure Platform

Security for the Windows Azure Platform

Windows Azure Platform Purchasing, Pricing & Billing

SQL Azure

 

Virtual Labs

MSDN Virtual Lab: Windows Azure Native Code

MSDN Virtual Lab: Building Windows Azure Services with PHP

MSDN Virtual Lab: Getting Started with Windows Azure Storage

MSDN Virtual Lab: Building Windows Azure Services

MSDN Virtual Lab: Using Windows Azure Tables

 

Blogs

Windows Azure Team Blog

 

Books

Windows Azure Platform (SDK v1.2)

Programming Windows Azure (SDK v1.2)

 

Training

Windows Azure Platform Training Course (free)

PluralSight Azure Training

 

HPC and Azure

Deploying Azure Worker Nodes in Windows HPC Server 2008 R2 SP1 Step-by-Step Guide

 

Whitepaper

HPC Server and Windows Azure

Creating a bootable VHD image

When I first started trying to figure out how to create a bootable VHD image and boot into it, I received plenty of links from people, but it seemed that no matter which link or advice I followed, there was always something that just wouldn't quite work with my specific hardware. Of course, if anyone is ever going to have hardware issues, it always seems to be me!

What I finally came upon was a list of steps, a combination of instructions from various other sources, that seems to work every time, no matter what (at least for my hardware). A couple of customers requested that I post these steps, so here we go. If you already have a procedure that works for you, that's fine; this is just for those that need a concise list of steps.

The first thing to know is that to boot into an OS on a bootable VHD image, the OS needs to be either Windows 7 or Windows Server 2008 R2. And of course you need to set up your hardware BIOS to allow for hardware virtualization. From there…

1. Starting from scratch, you will need to build a .vhd image that has an OS installed on it but has not yet had the Hardware Abstraction Layer (HAL) associated with it for the machine you are building it on. I will use a command line tool called Wim2Vhd, which extracts image information from the OS image on disk and adds it to the vhd. You can find more information about this tool here: http://code.msdn.microsoft.com/wim2vhd.

2. Since I have an MSDN subscription, I decided to pull down an image (.iso) from there. Make sure you get your product key, because you will need it to activate Windows once you boot into the new OS.

3.  I have a C:\Wim2Vhd directory on my machine with the necessary tools inside of it, so I will reference everything from this path. In my case, I extracted a Windows Server 2008 R2 Enterprise .iso onto my F: drive at F:\W2008ISOs\Extracted. You can extract the ISO or burn it onto a DVD. (A special note here: you need to put your bootable VHD onto a drive that is not BitLocker-encrypted to be able to boot into it.)

4. Open a command prompt and change the directory to C:\Wim2vhd.

5.  In order to use the Wim2Vhd exe appropriately on the command line, we need to get the 'Name' of the OS we are going to build our image with. To get the name out of the extracted iso files, we use an exe named Imagex, which is located in the Wim2Vhd directory. You can get all these tools from the Windows Automated Installation Kit (AIK) for Windows 7: http://www.microsoft.com/downloads/details.aspx?familyid=696DD665-9F76-4177-A811-39C26D3B3B34&displaylang=en. Type the following command at the command prompt:

Imagex /info F:\W2008ISOs\Extracted\Sources\install.wim

Imagex will look out on my F: drive where I have the extracted OS install files.

Notice that I point to a directory named Sources and the file that has the information in it is Install.wim. When I execute this command I get back something that looks like this:

================================================================================

ImageX Tool for Windows
Copyright (C) Microsoft Corp. All rights reserved.
Version: 6.1.7600.16385

WIM Information:
----------------
Path:        F:\W2008ISOs\Extracted\Sources\install.wim
GUID:
Image Count: 1
Compression: LZX
Part Number: 1/1
Attributes:  0x8
             Relative path junction

Available Image Choices:
------------------------
<WIM>
  <TOTALBYTES>2434613007</TOTALBYTES>
  <IMAGE INDEX="1">
    <DIRCOUNT>13394</DIRCOUNT>
    <FILECOUNT>59593</FILECOUNT>
    <TOTALBYTES>10132413502</TOTALBYTES>
    <CREATIONTIME>
      <HIGHPART>…</HIGHPART>
      <LOWPART>…</LOWPART>
    </CREATIONTIME>
    <LASTMODIFICATIONTIME>
      <HIGHPART>…</HIGHPART>
      <LOWPART>…</LOWPART>
    </LASTMODIFICATIONTIME>
    <WINDOWS>
      <ARCH>9</ARCH>
      <PRODUCTNAME>Microsoft® Windows® Operating System</PRODUCTNAME>
      <EDITIONID>ServerEnterprise</EDITIONID>
      <INSTALLATIONTYPE>Server</INSTALLATIONTYPE>
      <HAL>acpiapic</HAL>
      <PRODUCTTYPE>ServerNT</PRODUCTTYPE>
      <PRODUCTSUITE>Terminal Server</PRODUCTSUITE>
      <LANGUAGES>
        <LANGUAGE>en-US</LANGUAGE>
        <DEFAULT>en-US</DEFAULT>
      </LANGUAGES>
      <VERSION>
        <MAJOR>6</MAJOR>
        <MINOR>1</MINOR>
        <BUILD>7600</BUILD>
        <SPBUILD>16385</SPBUILD>
        <SPLEVEL>0</SPLEVEL>
      </VERSION>
      <SYSTEMROOT>WINDOWS</SYSTEMROOT>
    </WINDOWS>
    <NAME>Windows Server 2008 R2 SERVERENTERPRISE</NAME>
    <DESCRIPTION>Windows Server 2008 R2 SERVERENTERPRISE</DESCRIPTION>
    <FLAGS>ServerEnterprise</FLAGS>
    <HARDLINKBYTES>3549409854</HARDLINKBYTES>
    <DISPLAYNAME>Windows Server 2008 R2 Enterprise (Full Installation)</DISPLAYNAME>
    <DISPLAYDESCRIPTION>This option installs the complete installation of Windows Server. This installation includes the entire user interface, and it supports all of the server roles.</DISPLAYDESCRIPTION>
  </IMAGE>
</WIM>

================================================================================

What you are looking for is this line:

<NAME>Windows Server 2008 R2 SERVERENTERPRISE</NAME>

This name is important, because if you do not have a product key that matches this SKU, you will not be able to activate your OS.

6.  Now that you have the 'Name' of the software you will be using, we are going to use Wim2Vhd to create our VHD image. Again, make sure that the drive where you put your VHD does not have BitLocker encryption turned on. Use this command:

CSCRIPT WIM2VHD.WSF /WIM:F:\W2008ISOs\Extracted\sources\install.wim /SKU:ServerEnterprise /SIZE:51200 /DISKTYPE:Dynamic /VHD:D:\W2008ISOs\W2008R2EntBase.vhd

This command will create a sysprepped Windows Server 2008 R2 Enterprise VHD that has a maximum, dynamically expanding size of 50GB. I chose 50GB because all I really intend to do from this image is run Hyper-V images. If you intend to use this OS to install a lot of software on, you may want to make it larger.

7. To create your boot menu entry, run the following on the command line (the menu item string will be what you see between the double quotes):

bcdedit /copy {current} /d "Win2008 R2 Ent"

8. A GUID will be output on the command line; copy it by right-clicking in the command window and selecting Mark. Use the following commands to create your menu item and set it up so it knows where to find the .vhd. Enter each command and press Enter after each line:

bcdedit /set {paste_guid_here_including_braces} device vhd="[D:]\W2008ISOs\W2008R2EntBase.vhd"

bcdedit /set {paste_guid_here_including_braces} osdevice vhd="[D:]\W2008ISOs\W2008R2EntBase.vhd"

bcdedit /set {paste_guid_here_including_braces} detecthal on

bcdedit /v (this is used to confirm that what you just did actually modified the boot record)
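
If you ever need to back one of these entries out (for example, you pointed it at the wrong .vhd), bcdedit can delete it by the same GUID:

bcdedit /delete {paste_guid_here_including_braces} /cleanup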

 

The first time you boot into this image, the image will start reading the HAL and setting things up as if you were installing a new OS, except that those files have already been installed.

Hope this helps someone out there achieve their bootable VHD goals!

Getting Started: ASPX Association and Initiation Forms for SharePoint 2007 Workflows

I realize that this topic may be somewhat of a bore for some of you out there that have already ventured down this road, but I have found that there are still many customers just now getting into SharePoint, and Workflow and then needing to create their own ASP.Net forms for association and initiation data.

The hardest thing for me a lot of times is trying to figure out where to go to find the information about this, and for those who are new to the topic, where do you first begin?

Luckily, SharePoint 2007 has been out for a while and the world is now looking toward SharePoint 2010 but, let’s not forget the mass of humanity that may not move there for a while…or those that will be moving eventually, but that need to get started right now with their workflows.

I have put together a short list of links that have really helped me out a lot getting started understanding the mechanics of creating these form types and their integration with SharePoint 2007 workflows.

First, you can start with a few articles on MSDN:

How to: Implement a SharePoint Workflow with ASP.NET Forms

http://msdn.microsoft.com/en-us/library/dd206915.aspx

Workflow Forms Overview

http://msdn.microsoft.com/en-us/library/ms457061.aspx

Creating an Application Page in WSS 3.0

http://msdn.microsoft.com/en-us/library/bb418732.aspx

Those articles begin to build a lot of information in your mind about the things you will need to do to your page.  However, the best step-by-step guide I’ve seen is at Robert Shelton’s blog:

http://rshelton.com/archive/2007/10/05/sharepoint-document-workflow-with-visual-studio-workshop-documents-download.aspx

Robert actually has an entire series of articles on SharePoint tutorials here that are very good.

As I went through this document, I stumbled upon something else quite interesting. As you go through Robert's tutorial you might notice that there is code in there (such as for page-to-workflow serialization etc.) that would be pretty common code for just about any type of ASP.Net form. I then discovered Serge Lucas' submission to CodePlex titled 'Generic Framework for SharePoint Workflow aspx forms' at http://aspxsharepointwf.codeplex.com/.

On this site, you will find sample code, the framework and a link to a screencast that demonstrates how to use the framework.

InfoPath : Retrieve and Send Data from/to a SharePoint 2007 Workflow

Yes, I know this title sounds a bit odd, and the real title should be ‘When you are using an InfoPath task edit form, how do you use InfoPath data connection bindings to both retrieve and send data to a SharePoint 2007 workflow?’

So one day I was building an InfoPath 2007 task edit form and I wanted to send data to this form from inside of my workflow by using something like:

taskProperties.ExtendedProperties["txtProductDescription"] = "Product ABC";

I implemented an ItemMetadata.xml file with an 'ows_txtProductDescription' field, added a secondary data connection for it in my InfoPath form and followed all the normal steps, and when I opened the form after my task was created, the data was there as expected.

However, what I didn't realize was that when I closed the form, completed the task and then re-opened the form just to look at my previous settings, the data was gone! The form appeared as if it were a new form with default settings. To make matters worse, inside of the workflow I discovered that I could not use the task AfterProperties to get the data out of the txtProductDescription field; it was always null. And this is where my multi-day odyssey of searching for the answer began.

To make things simple here, I’m just going to start with a simple form that has no binding and show you how to set things up.  I hate to admit it took me several days and talking to several people to figure this out, so I hope it helps you shorten your path to a solution also.

Technologies Used: 

  • Microsoft Office SharePoint Server 2007 Enterprise Edition, SP1
  • InfoPath 2007 – Browser enabled forms
  • Visual Studio 2008 SP1 / C#

To begin with, here is my simple form entitled OrderRequestTask.xsn

 

[Screenshot: the OrderRequestTask.xsn form in InfoPath]

I have three InfoPath controls here: a Textbox named 'txtProductDescription', a Drop-Down List named 'drpStatus' and a button for submitting the data. This is a browser enabled form.

Create the ItemMetadata.xml file

Before we create our data connections we will need to create the ItemMetadata.xml file.  This is what the file will look like:

[Screenshot: the contents of the ItemMetadata.xml file]
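
For reference, ItemMetadata.xml files follow a standard single-element shape: one z:row element in the #RowsetSchema namespace, with an empty ows_-prefixed attribute per field. Reconstructed from that convention for the two fields on this form (double-check it against your own field names), it would be:

<?xml version="1.0" encoding="utf-8"?>
<z:row xmlns:z="#RowsetSchema"
       ows_txtProductDescription=""
       ows_drpStatus="" />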

Note the 'ows_' prefix in front of the field names. This is a required prefix. Make sure you save this file as an XML file.

Setting Up the Data Connections

I will be using two data connections:

Update – this is a data connection that is used to submit the control(s) data into the workflow. Follow these steps to set up this data connection:

1.  On the Design Task link (the Design Tasks toolbox should be on the right hand side of the form in InfoPath), select Data Source.

2. Select the Manage Data Connections… link.

3.  In the Data Connections dialog box, select Add.

4.  Select the Create a New Connection to and then Submit Data radio button.

5.  Select Next.

6.  Select the 'To the hosting environment, such as an ASP.Net page or a hosting application' radio button. Basically, at this point InfoPath has no idea we are dealing with SharePoint; what we are saying is that we are going to submit the data back to whatever host opened the form, which will be the SharePoint task list.

7.  Change the name of the data connection from Submit to Update and select Finish.  This will complete your ‘main’ data connection.  You can only have one main data connection and this is the only connection that can submit data to a form or other entity.

ItemMetadata – this data connection will read from an XML file named ItemMetadata.xml that contains a reference to the controls on the form that can receive data from the workflow. It is very important that you name this xml file EXACTLY ItemMetadata.xml; spelling is important, case is important, format is important. If any of these are incorrect, the data connection will not work. Another point: each form (whatever type of form it may be) that receives data from the workflow will have its own file, named ItemMetadata.xml, so what I do is keep a separate directory for each form setup. Now technically, you could have a single ItemMetadata.xml file with several fields and only use the ones you need, but that could get a bit confusing down the road.

8.  The Data Connection dialog box should already be open, if not, select Manage Data Connections again to open it up.

9.  In the Data Connections dialog box, select Add.

10.  Select the Create a New Connection to and then Receive Data radio button.

11.  Select Next.

12.  Select ‘XML Document’.

13.  Select Next.

14.  Browse to where your ItemMetadata.xml file is and select it.

15.  Select Next.

16.  Make sure the ‘Include the data as a resource file in the form template or template part’ radio button is selected.  What this means is that the ItemMetadata.xml file is actually going to become a part of the InfoPath form itself.  Therefore, if you were to move this file after you get your form published, it would not matter.  If you ever update your ItemMetadata.xml file you will also need to go through the steps above to refresh your data connection anyway.

17.  Select Next then Finish.  Now your Data Connections dialog box should look like this:

[Screenshot: the Data Connections dialog listing the Update and ItemMetadata connections]

 

Binding the Controls to the Data Connections

As it stands right now, the way I dragged and dropped the fields onto the form and set up my main data connection means that both of these fields will submit their data into the workflow without any other modification from me. However, I want to be able to both receive data from the workflow AND submit data to the workflow. This is where I spent so much time trying to figure out, essentially, how to bind the controls to both data sources.

If you right click on the txtProductDescription field and select ‘Change Binding’ you will see the following dialog box:

[Screenshot: the Text Box Binding dialog]

You can see here that this field is bound to the Main data source. You could of course choose the data source dropdown and then choose the ItemMetadata (secondary) data source, but then the control will only receive data from the workflow and will not submit data to the workflow. Perplexing, right?

To correct this situation, do the following:

1.  Make sure the Text Box Binding dialog box above is closed.

2.  With the txtProductDescription box still selected, move over to the Design Tasks toolbox and make sure the Data Connections window is open like below:

[Screenshot: the Data Source window in the Design Tasks toolbox]

3.  Right-click on txtProductDescription.

4.  Select Properties:

[Screenshot: the field context menu with Properties selected]

5.  In the Field or Group Properties dialog box, select the Fx (function) button:

[Screenshot: the Field or Group Properties dialog with the Fx (function) button]

6.  In the Insert Formula dialog box, select the Insert Field or Group button:

[Screenshot: the Insert Formula dialog with the Insert Field or Group button]

7.  From the Select Field or Group dialog box, select the Data Source drop-down and select ItemMetadata (Secondary) menu item.

8.  Select the ows_txtProductDescription field and then select OK.

[Screenshot: the Select Field or Group dialog with ows_txtProductDescription selected]

9.  Select OK in the Insert Formula dialog box and then OK in the Field or Group Properties dialog box.

10.  Repeat steps 2 – 9, except choose the drpStatus drop-down field in the form.  Note that if you don’t want or need to set this value from the workflow, you can just leave the binding as is.

11. Now you can save and publish your form. Since this is a task edit form and the workflow is going to determine where/how to pick up this file via its configuration files (feature.xml and workflow.xml), we will be publishing this form to a network location.

Publishing the Form

1.  Select the File –> Publish menu item.

2.  Select ‘To a network location’ then Next.

[Screenshot: the publishing wizard's network location option]

3.  Browse to where you want to publish the file and select Next.

4.  On this page, make absolutely sure you clear out the text field in this window. If you do not, SharePoint will more than likely not let anyone open the form due to permissions.

[Screenshot: the publishing wizard page with the text field to clear]

5.  Select Next.  You will see a dialog box warning you of possible user access problems. Just select OK.

6.  Select Publish.

7.  Select Close.

At this point, I will defer to the experience of the reader as to where to put the form so that the workflow can access it. More than likely you will have a sub-directory under your workflow project's feature directory to put the form in. You will then need to re-deploy your workflow project.

 

What about the code in the workflow itself?

Inside of the workflow in my CreateTask handler, I will have code that looks like this:

this.taskProperties.ExtendedProperties["ows_txtProductDescription"] = this.productDescription;
this.taskProperties.ExtendedProperties["ows_drpStatus"] = "Completed";

In my OnTaskChanged handler, I would have code that looks something like this:

string strProdDesc = this.taskAfterProperties.ExtendedProperties["txtProductDescription"].ToString();
bool statusComplete = this.taskAfterProperties.ExtendedProperties.ContainsValue("Completed");

A couple of things to note in the OnTaskChanged event: I do not include the 'ows_' prefix in the field name, and I am able to use 'ContainsValue' to get to the boolean status value.

 

In conclusion, I hope this helps someone out there prevent the time loss I had when attempting to figure this out.  And as I’ve said before, if you know a better way of doing it, let me know!