I’ve got a bunch of pet websites, some hosted in Azure, others with an old-school hosting provider. I don’t think the non-Azure sites will be transferred any time soon either, largely because you typically get an email server with your web hosting package, something sadly lacking from Azure.

Most of the source code for these sites is in GitLab. Now, I like both Azure DevOps and GitHub, but at the time these sites were created both of them cost money for private repos, and despite that changing recently I don’t really see a compelling argument to shift the code.

What I don’t have for some of the sites (and what would be useful) is continuous integration and/or deployment to build, publish and deploy them.

I’ve split this post into two parts. This first part will describe the situation where the source code is in GitLab, the website hosted with a shared hosting provider and the deployment process itself is handled using Azure DevOps.

GitLab Setup

Not much to do here, all you need is an Access Token. To get one, log in to your GitLab.com account and go to your User Settings. In the left-hand menu select Access Tokens.

Create GitLab Access Token

Give the access token a name and then select the following permissions:

  • api
  • read_user
  • read_repository
  • read_registry

Click the “Create personal access token” button. It’s important to take a note of the token’s value here as it’s NOT shown again when you come back to this page. If you do lose it then you can revoke the existing token and generate a new one.
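If you want to check the token works before going any further, you can call the GitLab API with it. Here’s a quick C# sketch (my own addition, not part of the setup) that hits the /user endpoint, which returns the authenticated account when the token is valid:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class GitLabTokenCheck
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Paste in the token value you noted down when it was created.
            client.DefaultRequestHeaders.Add("PRIVATE-TOKEN", "your-access-token");

            // A 200 response with your account details means the token is good.
            HttpResponseMessage response = await client.GetAsync("https://gitlab.com/api/v4/user");
            Console.WriteLine($"{(int)response.StatusCode} {response.StatusCode}");
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}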

That’s all you need to do within GitLab.

Hosting Provider Setup

Depending on your hosting provider you may have a few options for deploying a website. In my case I’ve got the choice of Microsoft’s Web Deploy or good old FTP.

I’ve used Web Deploy in the past and it works well when you’re doing a quick and dirty deployment straight from Visual Studio to the web server. I’ve got the slight complication of having CloudFlare sitting in front of the website, which is great for performance and also gives you the option of HTTPS for free even when your hosting provider may not support it. The reason it’s complicated is that Web Deploy operates over port 8172 by default and this isn’t a port that CloudFlare forwards traffic for. So traffic for, let’s say, https://www.example.com would be forwarded (on port 443) but traffic for https://www.example.com:8172/msdeploy.axd?site=example.com wouldn’t.

Now it might be possible to talk nicely to your hosting provider and ask them to offer the Web Deploy service on either a different port or on a sub-domain – and then ensure that CloudFlare doesn’t proxy traffic for that specific sub-domain. Neither of these was an option for me, so we’re left with FTP. All you need from your hosting provider are some credentials for connecting to your website via FTP.
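If you want to sanity-check those credentials before wiring anything up in DevOps, a few lines of C# will do it (a throwaway sketch, with placeholder host and credentials):

using System;
using System.IO;
using System.Net;

class FtpCredentialCheck
{
    static void Main()
    {
        // Placeholder host and credentials -- substitute the details from your provider.
        var request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/");
        request.Method = WebRequestMethods.Ftp.ListDirectory;
        request.Credentials = new NetworkCredential("username", "password");

        using (var response = (FtpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // A directory listing here means the credentials work.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}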

Azure DevOps Setup

Service Connections

Now that we’ve got a way of connecting to both our source control provider and our hosting provider we can configure DevOps to pull the source code, build and publish, then copy the published files to the website’s location.

Log in to your Azure DevOps organisation site. If you don’t have one you can sign up for free. Open (or create) your project and then open the Project Settings page – the link is in the bottom left of the page at time of writing.

On the Service connections screen click New service connection and select Other Git.

Other Git Service Connection

Click Next.
Enter the URL to your GitLab repository. In the Authentication section paste the access token you saved earlier into the Password/Token field. The username isn’t required.
Give the connection a name and Save.

Git Service Connection Details

Now we want to create another connection, this time to the FTP site. Click New service connection again, this time choosing Generic.

Generic Service Connection

Enter the URL of your FTP site, the credentials required to connect, give the connection a name and hit Save.

Generic Service Connection Details

Now on your Service connections screen you should see both connections.

Service Connections

Now we’ve got our service connections configured we can set up our pipelines – one for the build/test/publish process and one for the release process.

Build Pipeline

Open your Pipelines screen in DevOps and click the New pipeline button.
Select the Other Git option.

New Pipeline Connection

On the following screen the default options should be correct. The source should be Other Git and DevOps should have picked up your previously created GitLab connection. Ensure the correct branch name is selected and click Continue.

New Pipeline Repository

Next, we need to select the template for the pipeline. As I’m working with a .NET Core web app I’m selecting the ASP.NET Core template. Hover over your chosen template and then click the Apply button.

New Pipeline Template

Here’s what that template (at time of writing) looks like.

New Pipeline ASP.NET Core Template

I’ve given it a name of CI Build. Feel free to configure the rest of the pipeline and job steps as necessary for your project. The default setup works just fine for this project.
The final step in the process, Publish Artifact, picks up the published files from the previous Publish step and copies them to a drop folder that we can access in the release pipeline.

Click Save & queue to save your pipeline and kick off a build.

While that’s running open the Releases page in DevOps and click New release pipeline. On the template selection screen click the Empty job link at the top.

New Release Pipeline Template

In the resulting screen you can change the name of the stage if you wish. Stage 1 works for me though.

New Release Pipeline Stage

On the canvas click the + Add link next to Artifacts.

Your project should be pre-selected. Choose the build pipeline we’ve just created and feel free to change the source alias.

New Release Pipeline Add Artifact

Click the Add button.

Now back on the canvas, hover over Stage 1 and click the “1 job, 0 task” link beneath it.

New Release Pipeline Stage Tasks

On the stage screen click the + button in the Agent job box.

New Release Pipeline Add FTP Upload Task

Find the FTP Upload task and click the Add button to add it to the stage tasks.

The task will be added to the list – click on it to configure.

FTP Upload Task

There are a few things to set here:

  • The name if you want to change it.
  • Select the FTP service connection created earlier in the FTP Server Connection drop down.
  • The Root Folder should be set to the drop folder, or a folder within it that contains the files you want to copy to the FTP site.
  • Update the File patterns if necessary – I’m just copying everything.
  • Update the Remote directory if you’re deploying into a sub-folder on the FTP site.
  • Set the advanced options as required – I’ve set Preserve file paths as otherwise the folder structure gets flattened on copying.

Once you’re happy with the settings click the Save button in the top toolbar, choosing the folder where you want to save it (the default root folder works in most cases) and adding a comment.

Now that you’ve done that you can perform the deployment by clicking the Create release button in the toolbar and then the Create button in the resulting popup.

You can click the Release-1 link at the top or navigate back through the Releases screen to view the progress.

Release Succeeded

To view logs of the full process, click on the Succeeded link in the Stage 1 box.

Release Succeeded Logs

Job done!

You’ve successfully pulled your source code from GitLab into Azure DevOps, built and published it, then copied the published site to your FTP server.

Now given that this is just plain old FTP, there’s a chance that if the site has a fair amount of traffic some of the files might be in use and the upload may fail. There are probably a few ways to get round this. Depending on your hosting company you may be able to connect to an API to take the site offline. Alternatively you could have a multi-stage release pipeline that uploads an app_offline.htm file to the server first, then performs the copy and finally deletes the app_offline.htm file.
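For what it’s worth, here’s an untested sketch of that app_offline.htm idea using plain FtpWebRequest (the URI and page content are placeholders):

using System;
using System.Net;
using System.Text;

class AppOfflineToggle
{
    // Upload app_offline.htm to take the site down; delete it to bring the site back.
    static void SetOffline(string siteRootUri, NetworkCredential credentials, bool offline)
    {
        string fileUri = siteRootUri.TrimEnd('/') + "/app_offline.htm";
        var request = (FtpWebRequest)WebRequest.Create(fileUri);
        request.Credentials = credentials;

        if (offline)
        {
            request.Method = WebRequestMethods.Ftp.UploadFile;
            byte[] body = Encoding.UTF8.GetBytes("<html><body>Back soon.</body></html>");
            using (var stream = request.GetRequestStream())
            {
                stream.Write(body, 0, body.Length);
            }
        }
        else
        {
            request.Method = WebRequestMethods.Ftp.DeleteFile;
        }

        using (var response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusDescription);
        }
    }
}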

I’ve not tried any of these approaches – I’ve only seen errors a couple of times so far and I’m happy enough to re-deploy manually when I get the email from Azure DevOps telling me the pipeline has failed.

In Part 2 I plan to do something similar where the source code is also in GitLab but I make use of the GitLab.com CI/CD pipelines to build and subsequently deploy to an Azure App Service.

Introduction

There are a few posts already out there dealing with this but I couldn’t find one that managed to cover all the steps in enough detail for me. So this post will attempt to rectify that.

What do we need?

  • A web site to test. I’ve got a pretty bare-bones ASP.NET Core web application (just using the Visual Studio 2019 template) that I’ve been using to play around with recent versions of Entity Framework Core. I’ll write the tests against the home page of that site.
  • A test project. For convenience I’ll be adding the test project to the same solution as the web application.
  • A release pipeline. For unit and integration tests you’d normally run these as a step in a build pipeline, preventing the deployment if the tests fail. For UI tests though I need something to be deployed in order to test it. I’ll be adding the test step to a release pipeline.

The web application

As stated, I’m using the template ASP.NET Core web application in Visual Studio 2019. Here’s the project structure. I’ve added some data context related stuff but you don’t need any of that for the purpose of these tests.

Visual Studio project structure

Debug this project and you should see something resembling the following.

Web application home page

And that’s it. Now I need to publish this somewhere. I’m lucky enough to have an Azure subscription at my disposal currently so that’s where this will live. Of course the site can live anywhere publicly accessible.

Let’s move on to writing some tests for this web application.

The test project

First, I’ll create a new project in our existing solution. Right click on the solution in the Solution Explorer pane and select Add > New Project….

In the first dialogue select Class Library (.NET Core) then click Next.

New project type

In the next dialogue give your project a name and a home. Click Create.

New project name

Visual Studio will open up the newly created Class1 class. Rename this to something sensible (I’ve chosen HomePageTests) and let’s start adding the dependencies.

I’ve added the following NuGet packages:

  • FluentAssertions (Version="5.6.0")
  • Microsoft.AspNetCore.TestHost (Version="2.2.0")
  • Microsoft.NET.Test.Sdk (Version="16.1.1")
  • MSTest.TestAdapter (Version="2.0.0")
  • MSTest.TestFramework (Version="2.0.0")
  • Selenium.Support (Version="3.141.0")
  • Selenium.WebDriver (Version="3.141.0")
  • Selenium.WebDriver.IEDriver (Version="3.141.59")

Visual Studio tests project

Before I write the actual test method I’m going to add some setup and tear down methods in the class. Not strictly necessary for a class with a single test method but as the test project grows it’s useful to have this stuff in a single place.

First up, the using statements – I hate it when you see code posted for a class or method without these included.

using FluentAssertions;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.IE;
using System;
using System.IO;

Next, decorate the class with the [TestClass] attribute.

namespace EfCoreSandbox.Tests
{
    [TestClass]
    public class HomePageTests
    {
    }
}

Now I can add some variables. The webAppBaseUrl is the URL for the publicly accessible website. You wouldn’t normally hard-code this but rather keep it in some kind of configuration store; it will do for now though.

private const string webAppBaseUrl = "https://efcoresandbox.azurewebsites.net/";
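If you did want to avoid the hard-coding, one simple option (a sketch of my own; note the const has to become static readonly) is to fall back to an environment variable:

// "WEB_APP_BASE_URL" is a name I've made up -- set it in your test environment.
private static readonly string webAppBaseUrl =
    Environment.GetEnvironmentVariable("WEB_APP_BASE_URL")
    ?? "https://efcoresandbox.azurewebsites.net/";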

The IWebDriver is the interface that all the Selenium web drivers (the things that open up and actually drive the browsers) implement.

private static IWebDriver driver;

Next up, ClassInitialise. Provided it’s decorated with the [ClassInitialize] attribute, this runs once before any of the tests in the class. In this method I call a method to set up the web driver and then navigate to the web application’s URL.

[ClassInitialize]
public static void ClassInitialise(TestContext testContext)
{
    SetupDriver();
    driver.Url = webAppBaseUrl;
    driver.Navigate();
}

Then, on the flip side, the ClassCleanup method. As you’d imagine, this runs once after all the test methods have been run.

[ClassCleanup]
public static void ClassCleanup()
{
    TeardownDriver();
}

Here’s the SetupDriver method called in the ClassInitialise method. I won’t go into too much detail here as this isn’t a post about Selenium testing, but this method configures some options for the InternetExplorerDriver (if you have to test IE, automate it – otherwise you’ll have to drive IE by hand and nobody wants that). It then gets the path to the web driver executable – this differs between environments: an environment variable on Azure, the local directory on the local PC. Lastly it sets the driver variable to a new InternetExplorerDriver. If any of this fails, the TeardownDriver method is called to clean up.

private static void SetupDriver()
{
    try
    {
        InternetExplorerOptions ieOptions = new InternetExplorerOptions
        {
            EnableNativeEvents = false,
            UnhandledPromptBehavior = UnhandledPromptBehavior.Accept,
            EnablePersistentHover = true,
            IntroduceInstabilityByIgnoringProtectedModeSettings = true,
            IgnoreZoomLevel = true,
            EnsureCleanSession = true,
        };

        // Attempt to read the IEWebDriver environment variable that exists on the Azure
        // platform and then fall back to the local directory.
        string ieWebDriverPath = Environment.GetEnvironmentVariable("IEWebDriver");
        if (string.IsNullOrEmpty(ieWebDriverPath))
        {
            ieWebDriverPath = Path.GetDirectoryName(AppDomain.CurrentDomain.BaseDirectory);
        }

        driver = new InternetExplorerDriver(ieWebDriverPath, ieOptions)
        {
            Url = webAppBaseUrl
        };
    }
    catch (Exception ex)
    {
        TeardownDriver();
        throw new ApplicationException("Could not setup IWebDriver.", ex);
    }
}

And the TeardownDriver method. Pretty simple – just clean up the resources of the driver.

private static void TeardownDriver()
{
    if (driver != null)
    {
        driver.Close();
        driver.Quit();
        driver.Dispose();
        driver = null;
    }
}

Time to write a test. I’m keeping it simple and am just going to test that the home page has an <h1> heading tag that contains the string “Welcome”.

[TestMethod]
public void HomePageHeadingContainsWelcome()
{
    // Arrange/Act/Assert
    driver.FindElement(By.TagName("h1")).Text.Should().Contain("Welcome");
}
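If the page is slow to render, FindElement can fire before the heading exists. A variant using WebDriverWait (it lives in the Selenium.Support package we’ve already referenced; add using OpenQA.Selenium.Support.UI;) would look something like this sketch:

[TestMethod]
public void HomePageHeadingContainsWelcome_WithWait()
{
    // Poll for up to 10 seconds for the <h1> to appear before asserting.
    var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
    wait.IgnoreExceptionTypes(typeof(NoSuchElementException));
    IWebElement heading = wait.Until(d => d.FindElement(By.TagName("h1")));
    heading.Text.Should().Contain("Welcome");
}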

And that’s it – put all this together and you should be able to run the test. If you don’t already have the Test Explorer window open in Visual Studio, open it from Test > Windows > Test Explorer.

Visual Studio Test Explorer

In the Test Explorer pane click the Run All button – the left-most button in the toolbar. The test engine will chug away for a bit, opening Internet Explorer and loading the web application, and all else being well the test will pass.

Visual Studio Test Explorer Passed Tests

Next, let’s set up the release pipeline to run these automatically.

The release pipeline

While the UI tests will run as part of a release pipeline, that pipeline will pick up and deploy the output of a previous build pipeline. So let’s get that set up first of all.

In your Azure DevOps portal, open the project and then select Pipelines in the left hand menu.

Pipelines

Click the Create Pipeline button. I’m not yet a fan of YAML so click the “Use the classic editor” link.

Choose the correct settings for your source repository, for me this is an Azure Repos Git repository. Team project and Repository are both EF Core Sandbox and I’m basing it on the master branch.

New Pipeline

Click on the Continue button. I’ve chosen the built-in ASP.NET Core template but feel free to choose something more appropriate to your needs.

ASP.NET Core template

Click Apply to continue. Now I need to do some tweaking to the pipeline.

At the pipeline level I’ve renamed the pipeline to EF Core Sandbox CI and changed the Agent Specification to windows-2019. Also, given we have no unit tests, I’ve cleared the Project(s) to test field and removed the Test step in the pipeline. The pipeline should look like this for the moment.

Initial build pipeline

Now I’m going to split the existing Publish task in two. Currently it will publish both the web application and the test project as zip files. That’s not going to work for the test project, so split them up. I’ve renamed the Publish task to Publish Web App and added a new Publish Tests task, configured as follows.

Publish Tests task

Importantly, uncheck the Zip Published Projects and Publish Web Projects checkboxes, then set the Path to Project(s) field to the path of your tests project only. Mine is set to "**/EfCoreSandbox.Tests.csproj".

Set the Arguments field to "--configuration $(BuildConfiguration) --output $(build.artifactstagingdirectory)".

Click Save & queue to run the build. After a while your Pipelines screen should resemble the following.

Pipeline success

The last major part of this process is to set up the release pipeline. Click on Releases in the left hand menu and then on the New pipeline button. In the “Select a template” panel click the “Empty job” link.

I’ve renamed the stage to Deploy web app and closed the Stage panel.

New release pipeline

In the Artifacts panel click + Add. Choose Build as the source type and then select the appropriate project and build pipeline previously configured. Click the Add button.

Add build artifact

In the Stage panel click the Deploy web app stage to configure it. Under Agent job click the + (plus) button and add an Azure App Service deploy task. Set this up to publish to your Azure app service. I won’t go into too much detail here as your configuration will inevitably differ from mine.

Azure App Service deploy task

Click the + (plus) button again and this time add the “Visual Studio Test Platform Installer” task. This task should need no additional configuration.

Visual Studio Test Platform Installer task

Click the + (plus) button again and the last task to add is a “Visual Studio Test” task. For the Test files field I’ve specifically chosen the EfCoreSandbox.Tests.dll to run and I’ve specified that the Test mix contains UI tests. For the Search folder I need to be more specific with the location – changing it to $(System.DefaultWorkingDirectory)/_EF Core Sandbox CI/drop.

Visual Studio Test task

Give the release pipeline a name (I’ve chosen EF Core Sandbox Release) and save it.

In the real world I’d consider running this pipeline in response to a trigger but for now just click Create release in the toolbar (top right) and then on the Create button.

Release created

Click on the Release-1 link that appears at the top of the screen. Mine is Release-3 because I’ve been mucking about with the pipelines, generally getting things wrong, and have created a few more test releases.

Deploy Succeeded

Hover over the Deploy web app box and then click on the Logs button when it is shown. In the Logs screen click the VsTest – testAssemblies item in the list. Scroll down a bit and you should see that the HomePageHeadingContainsWelcome did indeed pass.

Passed UI Test

So there you have it, Selenium UI tests running as part of your deployment pipeline.

Epilogue

This is all great, but what about when the tests fail? Let’s face it, if you knew all the tests were going to pass every time, would you bother writing them?

Let’s make the test fail. In the web application project change the content of the <h1> element. I’ve opted for “EF Core Sandbox“. If you’ve got your UI tests project configured to run against the local web application, you can run the UI tests locally to confirm that they fail.

Next, I’ll update the HomePageTests class to provide me with some additional information when a test fails. In the class add the following variable declaration.

private static TestContext testContextInstance;

Now update the ClassInitialise method to set the variable.

[ClassInitialize]
public static void ClassInitialise(TestContext testContext)
{
    testContextInstance = testContext;
    SetupDriver();
    driver.Url = webAppBaseUrl;
    driver.Navigate();
}

And add the following method to the class.

private static void TakeScreenshot(string fileName)
{
    Screenshot ss = ((ITakesScreenshot)driver).GetScreenshot();
    string path = Path.Combine(Directory.GetCurrentDirectory(), fileName);
    ss.SaveAsFile(path);

    testContextInstance.AddResultFile(path);
}

Finally, for the test class, add the following TestCleanup method that will be executed after every test method.

[TestCleanup]
public void TestCleanup()
{
    if (testContextInstance.CurrentTestOutcome != UnitTestOutcome.Passed)
    {
        TakeScreenshot($"{testContextInstance.TestName}.png");
    }
}

As you’ve probably guessed these code additions will take a screenshot of the web page if the previous test has failed and then add the file as a result file to the test context instance.

Go ahead and push the changes into source control then kick off a new build – unless your build pipeline is already CI triggered. Once the build has run (and succeeded) create a new release. Once the release run has completed you should see something slightly different, though not unexpected.

Failed deployment

Drill down into the logs as before and you’ll be able to see what went wrong.

Failed test logs

As you’d expect we can see that the test failed with the message: Expected string “EF Core Sandbox” to contain “Welcome”.

What about the screenshot, though? If you click Test Plans in the left hand menu and then Runs you’ll see the list of completed test runs – the latest, failed one should be at the top.

Tests runs

Double-click (yep, I know) on the latest run with the warning icon to see the run summary. From there click the Test results link just above the toolbar and under the Run number. This will list all the tests that have failed. Since we only have one, double-click the HomePageHeadingContainsWelcome test. Here you’ll get the error message and stack trace for the failed test along with, about halfway down the page, an Attachments section that should have one file.

Test result

Clicking on the attachment name will download the file. Open the download to view a screenshot of the web page at the point the test failed.

Web application screenshot

That’s it. If you want the code but don’t want to copy and paste all of the above sections individually, here’s a link to the complete HomePageTests class: https://gist.github.com/stuartwhiteford/bc21df9e1b98785beef0a6ed66b8c4f8

Happy testing!

Introduction

A slight change in tack for this post. I’ve been getting more involved in continuous integration and testing recently (which I’ve decided is a good thing) and two of the tools we’ve been using are TeamCity (which I’ve also decided is a good thing) and FitNesse, more specifically dbFit (which I’m still undecided on). The relative pros and cons of each are, thankfully, outwith the scope of this post.

What I will be showing is how you can run a FitNesse test suite from a TeamCity build configuration, with the individual test results available in the Tests tab of the build and the FitNesse test report in HTML as a build artifact. I’d like to show more screenshots than I’m about to, but our TeamCity installation contains a fair number of client names, and I can’t be bothered to blur them all out in Fireworks. Also there’s more red than there should be on the Overview page.

Assumptions and Pre-requisites

  • You’ve been working with both TeamCity and FitNesse or at least know how they work.
  • You have an existing TeamCity installation with a project and a build configuration that you can modify.
  • You have an existing FitNesse installation that’s callable from your TeamCity server.
  • You have the TestRunner.exe for FitNesse (found in the dotnet folder of your installation).
  • You’re using TeamCity 6.5.6 or can figure out the equivalent steps for your version.

Instructions

First up, download the junit.xslt file and place it in your FitNesse dotnet folder.

Open up your TeamCity home page. Click on the relevant project, click the Edit Project Settings link and then select the Parameters tab. We’re going to add some environment variables specific to the FitNesse environment. Add four Environment Variables with the following names and values.

  • env.fitnesse.lib – the path to the directory containing TestRunner.exe, e.g. C:\Fitnesse\lib\dotnet2
  • env.fitnesse.port – the port number that your FitNesse server is on, e.g. 8085
  • env.fitnesse.server – the name of your FitNesse server, e.g. fitvm01
  • env.fitnesse.suite – the full name of the test suite you want to execute, e.g. FitNesse.StuartWhiteford.TestSuite

Now, select the General tab and then click the Edit link next to your chosen build configuration. On the General Settings page add the following line to the Artifacts path field.

%system.teamcity.build.checkoutDir%\dbfit.results.html

Our build step will ensure that the FitNesse results are saved as dbfit.results.html to the checkout directory.

Next, click on the Build Step(s) menu item for the build configuration and click the Add build step link. In the New build step page select Command Line as the runner type and enter a sensible name for the step (Run FitNesse Tests). In the Working directory field enter

%env.fitnesse.lib%

then select Custom script in the Run field and finally enter the following lines in the Custom script field.

TestRunner.exe -results %system.teamcity.build.checkoutDir%\dbfit.results %env.fitnesse.server% %env.fitnesse.port% %env.fitnesse.suite%
java -cp ..\fitnesse.jar fitnesse.runner.FormattingOption %system.teamcity.build.checkoutDir%\dbfit.results xml %system.teamcity.build.checkoutDir%\dbfit.results.xml %env.fitnesse.server% %env.fitnesse.port% %env.fitnesse.suite%
java -cp ..\fitnesse.jar fitnesse.runner.FormattingOption %system.teamcity.build.checkoutDir%\dbfit.results html %system.teamcity.build.checkoutDir%\dbfit.results.html %env.fitnesse.server% %env.fitnesse.port% %env.fitnesse.suite%
java com.sun.org.apache.xalan.internal.xsltc.cmdline.Compile %env.fitnesse.lib%\junit.xslt
java com.sun.org.apache.xalan.internal.xsltc.cmdline.Transform %system.teamcity.build.checkoutDir%\dbfit.results.xml junit > %system.teamcity.build.checkoutDir%\dbfit.results.junit.xml

Line by line, this script will perform the following actions:

  1. Call the TestRunner executable outputting the results to a file called dbfit.results in the checkout directory, passing in the FitNesse server, port and test suite to run. This is the command that actually runs the tests.
  2. Format the dbfit.results output as XML. This will allow us to see the status of each of the tests in the build.
  3. Format the dbfit.results output as HTML. This will become the FitNesse test report artifact for the build.
  4. Compile an XSLT file that we will use to transform the FitNesse XML to JUnit XML (a format that TeamCity understands).
  5. Perform the transform, saving the results as dbfit.results.junit.xml in the checkout folder.

Your build step page should look something like the following.

Build Step

Click the Save button to save the new build step. Back on the build steps page click the Add build feature link. In the dialogue select XML report processing as the feature type and Ant JUnit as the report type. In the Monitoring rules field enter

%system.teamcity.build.checkoutDir%\dbfit.results.junit.xml

and lastly check the Verbose output field. Click the Save button.

If your build is one that can be run manually, go ahead and run it, then watch the progress on the Overview page.

Running Build

Once the build has completed, click on the Tests passed (hopefully) link and then on the Tests tab to view the status of the individual tests within the FitNesse suite.

Tests

To view the FitNesse test output page, click on the Artifacts tab. You should then see a link to dbfit.results.html. You can also get there from the Artifacts context menu on the Overview page.

Artifacts
FitNesse

That’s all there is to it. Note that you’re not limited to a single build step and test suite with this method. You could run a single test with one build step, or have multiple build steps each running a single test or an entire suite.

Conclusion

I’m still unconvinced by FitNesse and dbFit as a testing framework, perhaps because I’ve also spent much time recently with WatiN and Selenium, but at least now it’s part of our big happy CI family. Just.

It’s been about 15 months since my last post; the main reason for the delay would be my 14-month-old son :-).

I’ve been working primarily with event and feature receivers in MOSS 2007 for the last few weeks, and this post will describe some of the issues I encountered and their solutions (or workarounds) where they exist. It’s mainly for my own benefit, as my memory doesn’t seem to be what it used to be, but I wouldn’t be unhappy if it helps someone else out.

VSeWSS Project Template Grumps

These are generally fairly good. I use them as much as possible as it cuts down on the amount of boilerplate code you need to type and on the amount of faffing getting the right GUIDs in the feature.xml, element manifest, class attributes, etc. However, they’re not perfect by any means, and for me that’s because of the following reasons:

  • You get item and list event receivers but not an e-mail receiver. Now I don’t suppose they’re all that common, but the option to add one in after the project has been created would be nice; we can at least do that with feature receivers, albeit with a small amount of faffing (see the next point).
  • To me the point of adding a feature receiver to a list definition project would be to have its code run when you activate/deactivate the list or, just as likely, an instance of the list, but when you add one you need to manually add the ReceiverClass and ReceiverAssembly attributes to the feature.xml of the feature you want the code to run against (including getting the public key token of the assembly).
  • In the WSP View, after you’ve made some changes to the project the reference to the Receiver class file goes AWOL and never comes back. You can obviously still get to it in the Solution Explorer but it’s just a bit annoying, particularly when it used to be there.
  • In the list definition schema.xml file you add your custom fields, set the ShowInNewForm and ShowInEditForm attributes to TRUE, package and deploy the solution, and when you create a new item from the list your field doesn’t display. The way round this is to remove the element from the schema (as the built-in Item content type doesn’t have your custom fields). This one isn’t that big a deal but I still spent about half an hour on Google finding out why the fields were not displaying.

E-mail Event Receiver and the Windows SharePoint Services Timer Service

After implementing the e-mail event receiver, packaging and deploying, you notice there’s a problem with the execution, so you deactivate and uninstall the feature, retract and delete the solution, modify the code, re-package and re-deploy. Make the receiver fire again and the same problem exists. The reason is that the receiver code is executed by the OWSTIMER process (the Windows SharePoint Services Timer Service), which doesn’t use the shiny new DLL you just deployed to the GAC; it creates its own local copy and won’t use the updated assembly until the service is restarted. So, at the command line:

net stop "Windows SharePoint Services Timer"
net start "Windows SharePoint Services Timer"

and you should see that the receiver uses the updated DLL.
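If you find yourself doing that a lot it can be scripted too; a small C# sketch using ServiceController (add a reference to System.ServiceProcess):

using System;
using System.ServiceProcess;

class TimerServiceRestart
{
    static void Main()
    {
        // The constructor accepts the display name used by net stop/start above.
        using (var service = new ServiceController("Windows SharePoint Services Timer"))
        {
            service.Stop();
            service.WaitForStatus(ServiceControllerStatus.Stopped);
            service.Start();
            service.WaitForStatus(ServiceControllerStatus.Running);
            Console.WriteLine("Timer service restarted.");
        }
    }
}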

Enable Incoming E-mail on Document Library on Feature Activation

I’ve saved the worst for last; this one caused me no end of pain. I have a list definition project that sets up a library based on a document library; the project has the list definition, list instance, and item and list event receivers. I subsequently added a feature receiver for the purpose of enabling incoming e-mail on the library and setting the e-mail alias. Simple enough looking code to do this (if you need it you can find it here). However, when I deployed the solution (using STSADM) I noticed an error message in the command window: “Error in the application”. Wow Microsoft, you’ve really excelled yourself with the verbosity of that one! Looking at the log file there was an exception:
{xxxx.xxxx.SharePoint.Lists.xxxxxxxxxxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.FeatureActivated} : Exception Message : Error in the application., Exception Stack Trace : at Microsoft.SharePoint.SPList.UpdateDirectoryManagementService(String oldAlias, String newAlias) at Microsoft.SharePoint.SPList.Update(Boolean bFromMigration) at Microsoft.SharePoint.SPList.Update()….

UpdateDirectoryManagementService? So it’s failing when trying to set the e-mail alias. OK, a bit of Googling and I found this thread, which says that unless you run the code using an Application Pool account it plain won’t work. So we need to ask clients to log in using an Application Pool account to be able to install the feature. I think not…

Enter the RunAsSystem method.

If you use the SharePoint object model a lot you’ll be familiar with the RunWithElevatedPrivileges method; RunAsSystem uses the same principle but, in my opinion, in a cleaner way. It’s been about 4 or 5 months since I originally started this admittedly short post so I have forgotten where I found this class, but here it is, and if I find out from whence it came I’ll get round to editing this post. Anyway, to use this, create a method that takes an SPSite object as a parameter containing the code you want to run under an elevated account, and call it thusly: site.RunAsSystem(MyMethod);

 
    public static class SPSiteExtensions
    {
 
        public static SPUserToken GetSystemToken(this SPSite site)
        {
            SPUserToken token = null;
            bool tempCADE = site.CatchAccessDeniedException;
            try
            {
                site.CatchAccessDeniedException = false;
                token = site.SystemAccount.UserToken;
            }
            catch (UnauthorizedAccessException)
            {
                SPSecurity.RunWithElevatedPrivileges(() =>
                {
                    using (SPSite elevSite = new SPSite(site.ID))
                        token = elevSite.SystemAccount.UserToken;
                });
            }
            finally
            {
                site.CatchAccessDeniedException = tempCADE;
            }
            return token;
        }
 
        public static void RunAsSystem(this SPSite site, Action<SPSite> action)
        {
            using (SPSite elevSite = new SPSite(site.ID, site.GetSystemToken()))
                action(elevSite);
        }
 
        public static T SelectAsSystem<T>(this SPSite site, Func<SPSite, T> selector)
        {
            using (SPSite elevSite = new SPSite(site.ID, site.GetSystemToken()))
                return selector(elevSite);
        }
 
        public static void RunAsSystem(this SPSite site, Guid webId, Action<SPWeb> action)
        {
            site.RunAsSystem(s => action(s.OpenWeb(webId)));
        }
 
        public static void RunAsSystem(this SPSite site, string url, Action<SPWeb> action)
        {
            site.RunAsSystem(s => action(s.OpenWeb(url)));
        }
 
        public static void RunAsSystem(this SPWeb web, Action<SPWeb> action)
        {
            web.Site.RunAsSystem(web.ID, action);
        }
 
        public static T SelectAsSystem<T>(this SPSite site, Guid webId, Func<SPWeb, T> selector)
        {
            return site.SelectAsSystem(s => selector(s.OpenWeb(webId)));
        }
 
        public static T SelectAsSystem<T>(this SPSite site, string url, Func<SPWeb, T> selector)
        {
            return site.SelectAsSystem(s => selector(s.OpenWeb(url)));
        }
 
        public static T SelectAsSystem<T>(this SPWeb web, Func<SPWeb, T> selector)
        {
            return web.Site.SelectAsSystem(web.ID, selector);
        }
 
    }
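To tie that back to the original problem, here’s roughly how a feature receiver might use RunAsSystem to enable incoming e-mail on the library. This is a hypothetical sketch (it assumes a web-scoped feature; the library name and alias are placeholders, and my actual code differed):

using Microsoft.SharePoint;

public class EmailEnablingFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPWeb web = properties.Feature.Parent as SPWeb;
        if (web == null)
            return;

        web.RunAsSystem(elevatedWeb =>
        {
            SPList library = elevatedWeb.Lists["Documents"];
            library.EmailAlias = "documents"; // the part before the @ in the incoming address
            library.Update(); // the call that blows up without an elevated account
        });
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
    public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
}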

Introduction

In Part 1 we created a Silverlight control that enabled us to add pushpins to a Bing Map using JavaScript. In this part we’ll use that ability to create a web part that will connect to a SharePoint list of location data and render that data as pushpins on the map.

The SharePoint Web Part

I developed the Silverlight control on my local machine so will be switching to a VM with SharePoint and the VSeWSS 1.3 extensions installed for this part. Open up Visual Studio and create a new SharePoint Web Part project. I’ve called mine BingMapsWebPart because it’s late at night and I can’t think of anything else. I’ve also renamed the web part class to BingMap.

To render the Silverlight control within the web part we need to emit the same markup that we had in the Test Page created for us in Part 1. For cleanliness I’ve stored most of the HTML tags, attribute strings and their values in a Constants class (you don’t have to, of course). The Constants class has been added to the project and looks like the following: –

public class Constants
{
 
    public const string Id = "id";
    public const string Data = "data";
    public const string Type = "type";
    public const string Name = "name";
    public const string None = "none";
    public const string Value = "value";
    public const string Width = "width";
    public const string Height = "height";
    public const string Border = "border";
    public const string Hidden = "hidden";
    public const string Visibility = "visibility";
 
    public const string HtmlDiv = "div";
    public const string HtmlObject = "object";
    public const string HtmlParam = "param";
    public const string HtmlIFrame = "iframe";
 
    public const string DivId = "silverlightControlHost";
 
    public const string ObjectData = "data:application/x-silverlight-2,";
    public const string ObjectType = "application/x-silverlight-2";
 
    public const string IFrameId = "_sl_historyFrame";
 
    public const string ParamSourceName = "source";
    public const string ParamSourceValue = "_layouts/BingMaps/BingMapWebPart.xap";
    public const string ParamOnErrorName = "onError";
    public const string ParamOnErrorValue = "onSilverlightError";
    public const string ParamOnLoadName = "onLoad";
    public const string ParamOnLoadValue = "onSilverlightLoad";
    public const string ParamBackgroundName = "background";
    public const string ParamBackgroundValue = "white";
    public const string ParamWindowlessName = "windowless";
    public const string ParamWindowlessValue = "true";
    public const string ParamMinRuntimeName = "minRuntimeVersion";
    public const string ParamMinRuntimeValue = "3.0.40624.0";
    public const string ParamAutoUpgradeName = "autoUpgrade";
    public const string ParamAutoUpgradeValue = "true";
 
    public const string GetSilverlightLink = "http://go.microsoft.com/fwlink/?LinkID=149156&v=3.0.40624.0";
    public const string GetSilverlightImage = "http://go.microsoft.com/fwlink/?LinkId=108181";
    public const string GetSilverlightAltText = "Get Microsoft Silverlight";
 
    public const string CssTextDecoration = "text-decoration";
    public const string CssBorderStyle = "border-style";
 
    public const string ScriptKeySilverlight = "silverlight_js";
    public const string ScriptKeySilverlightOnLoad = "silverlightOnLoad_js";
 
    public const string SilverlightScriptSource = "_layouts/BingMaps/Silverlight.js";
 
    public const string HtmlKeyMapLocation = "MapLocation";
 
}

The only two you really need to care about in here are ParamSourceValue and SilverlightScriptSource. The former is the relative path to the Silverlight .xap file we created in Part 1 while the latter is the path to the standard Silverlight.js file. In this example I’ve copied both files to the LAYOUTS folder under the 12 hive on the SharePoint box. If you’re planning to put the files somewhere else be sure to update the values in the Constants class.

Back in our BingMap class set up some private and public properties: –

private Panel _controlHost;
private string _latitudeColumn = "Latitude";
private string _longitudeColumn = "Longitude";
private string _titleColumn = "LinkTitle";
private IWebPartTable _provider;
private ICollection _tableData;
 
[WebDisplayName("Latitude Column"),
 WebBrowsable(true),
 Personalizable(PersonalizationScope.Shared),
 WebDescription("The column from the list that stores the Latitude."),
 Category("Map Settings")]
public string LatitudeColumn
{
    get { return _latitudeColumn; }
    set { _latitudeColumn = value; }
}
 
[WebDisplayName("Longitude Column"),
 WebBrowsable(true),
 Personalizable(PersonalizationScope.Shared),
 WebDescription("The column from the list that stores the Longitude."),
 Category("Map Settings")]
public string LongitudeColumn
{
    get { return _longitudeColumn; }
    set { _longitudeColumn = value; }
}
 
[WebDisplayName("Title Column"),
 WebBrowsable(true),
 Personalizable(PersonalizationScope.Shared),
 WebDescription("The column from the list that stores the title for the information window."),
 Category("Map Settings")]
public string TitleColumn
{
    get { return _titleColumn; }
    set { _titleColumn = value; }
}

Above we have: a private field for an ASP.NET Panel control (we’ll have this run at the server side to enable us to check whether the control has already been created); three private fields and their public accessors for setting the columns in the SharePoint list that we want to use for the Latitude, Longitude and Title information; and two private fields (_provider and _tableData) that allow us to consume the data from the SharePoint list.

The next stage is to create the markup for the Silverlight control. We’ll do this inside a CreateMapControl method as follows: –

private void CreateMapControl()
{
    if (_controlHost == null)
    {
        if (!this.Page.ClientScript.IsClientScriptIncludeRegistered(Constants.ScriptKeySilverlight))
        {
            this.Page.ClientScript.RegisterClientScriptInclude(this.GetType(), Constants.ScriptKeySilverlight, Constants.SilverlightScriptSource);
        }
        _controlHost = new Panel();
        _controlHost.ID = Constants.DivId;
        HtmlGenericControl obj = new HtmlGenericControl(Constants.HtmlObject);
        obj.Attributes.Add(Constants.Data, Constants.ObjectData);
        obj.Attributes.Add(Constants.Type, Constants.ObjectType);
        obj.Attributes.Add(Constants.Width, Unit.Percentage(100).ToString());
        obj.Attributes.Add(Constants.Height, Unit.Percentage(100).ToString());
        HtmlGenericControl paramSource = new HtmlGenericControl(Constants.HtmlParam);
        paramSource.Attributes.Add(Constants.Name, Constants.ParamSourceName);
        paramSource.Attributes.Add(Constants.Value, Constants.ParamSourceValue);
        HtmlGenericControl paramOnError = new HtmlGenericControl(Constants.HtmlParam);
        paramOnError.Attributes.Add(Constants.Name, Constants.ParamOnErrorName);
        paramOnError.Attributes.Add(Constants.Value, Constants.ParamOnErrorValue);
        HtmlGenericControl paramOnLoad = new HtmlGenericControl(Constants.HtmlParam);
        paramOnLoad.Attributes.Add(Constants.Name, Constants.ParamOnLoadName);
        paramOnLoad.Attributes.Add(Constants.Value, Constants.ParamOnLoadValue);
        HtmlGenericControl paramBackground = new HtmlGenericControl(Constants.HtmlParam);
        paramBackground.Attributes.Add(Constants.Name, Constants.ParamBackgroundName);
        paramBackground.Attributes.Add(Constants.Value, Constants.ParamBackgroundValue);
        HtmlGenericControl paramWindowless = new HtmlGenericControl(Constants.HtmlParam);
        paramWindowless.Attributes.Add(Constants.Name, Constants.ParamWindowlessName);
        paramWindowless.Attributes.Add(Constants.Value, Constants.ParamWindowlessValue);
        HtmlGenericControl paramMinRuntime = new HtmlGenericControl(Constants.HtmlParam);
        paramMinRuntime.Attributes.Add(Constants.Name, Constants.ParamMinRuntimeName);
        paramMinRuntime.Attributes.Add(Constants.Value, Constants.ParamMinRuntimeValue);
        HtmlGenericControl paramAutoUpgrade = new HtmlGenericControl(Constants.HtmlParam);
        paramAutoUpgrade.Attributes.Add(Constants.Name, Constants.ParamAutoUpgradeName);
        paramAutoUpgrade.Attributes.Add(Constants.Value, Constants.ParamAutoUpgradeValue);
        HtmlAnchor a = new HtmlAnchor();
        a.HRef = Constants.GetSilverlightLink;
        a.Style.Add(Constants.CssTextDecoration, Constants.None);
        HtmlImage img = new HtmlImage();
        img.Src = Constants.GetSilverlightImage;
        img.Alt = Constants.GetSilverlightAltText;
        img.Style.Add(Constants.CssBorderStyle, Constants.None);
        HtmlGenericControl iframe = new HtmlGenericControl(Constants.HtmlIFrame);
        iframe.Attributes.Add(Constants.Id, Constants.IFrameId);
        iframe.Style.Add(Constants.Visibility, Constants.Hidden);
        iframe.Style.Add(Constants.Height, Unit.Pixel(0).ToString());
        iframe.Style.Add(Constants.Width, Unit.Pixel(0).ToString());
        iframe.Style.Add(Constants.Border, Unit.Pixel(0).ToString());
        a.Controls.Add(img);
        obj.Controls.Add(paramSource);
        obj.Controls.Add(paramOnError);
        obj.Controls.Add(paramOnLoad);
        obj.Controls.Add(paramBackground);
        obj.Controls.Add(paramWindowless);
        obj.Controls.Add(paramMinRuntime);
        obj.Controls.Add(paramAutoUpgrade);
        obj.Controls.Add(a);
        _controlHost.Controls.Add(obj);
        _controlHost.Controls.Add(iframe);
        this.Controls.Add(_controlHost);
    }
}

Now that’s a fair amount of code, but essentially all it does is build a control tree with the same markup that we had in our Test Page. Next we need to craft the JavaScript function that we want to call when the Silverlight control has loaded: –

private void RegisterSilverlightOnLoadFunction()
{
    try
    {
        CreateMapControl();
        if (_tableData != null)
        {
            if (!this.Page.ClientScript.IsClientScriptBlockRegistered(Constants.ScriptKeySilverlightOnLoad))
            {
                StringBuilder sb = new StringBuilder();
                sb.Append("function ");
                sb.Append(Constants.ParamOnLoadValue);
                sb.Append("(sender, args) {");
                sb.Append("\r\n\t");
                sb.Append("var bingMapsControl = sender.getHost();");
                sb.Append("\r\n\t");
                foreach (DataRowView rowView in _tableData)
                {
                    string title = rowView.Row[this._titleColumn].ToString();
                    double latitude = double.Parse(rowView.Row[_latitudeColumn].ToString());
                    double longitude = double.Parse(rowView.Row[_longitudeColumn].ToString());
                    sb.Append("var l = bingMapsControl.content.services.createObject('");
                    sb.Append(Constants.HtmlKeyMapLocation);
                    sb.Append("');");
                    sb.Append("\r\n\t");
                    sb.Append("l.Title = '");
                    sb.Append(title);
                    sb.Append("';");
                    sb.Append("\r\n\t");
                    sb.Append("l.Latitude = ");
                    sb.Append(latitude);
                    sb.Append(";");
                    sb.Append("\r\n\t");
                    sb.Append("l.Longitude = ");
                    sb.Append(longitude);
                    sb.Append(";");
                    sb.Append("\r\n\t");
                    sb.Append("bingMapsControl.content.Communicator.AddLocation(l);");
                    sb.Append("\r\n\t");
                }
                sb.Append("\r\n");
                sb.Append("}");
                sb.Append("\r\n");
                this.Page.ClientScript.RegisterClientScriptBlock(this.GetType(), Constants.ScriptKeySilverlightOnLoad, sb.ToString(), true);
            }
        }
    }
    catch
    {
        // Swallowed for brevity -- add proper error handling before putting this
        // anywhere near a non-development environment.
    }
}

This will emit the same JavaScript that we had in our test page. The major difference here is that we’re looping through each row in our SharePoint list and calling the AddLocation() method. In the overridden CreateChildControls() method add a call to our CreateMapControl function: –

protected override void CreateChildControls()
{
    base.CreateChildControls();
    CreateMapControl();
}
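An aside before the last piece: RegisterSilverlightOnLoadFunction writes the Title value straight into a single-quoted JavaScript string, so an apostrophe in a list item will break the emitted script. A small escaping helper (my own addition, not in the original code) run over title before it’s appended would guard against that:

// Hypothetical helper: escape a value before embedding it in the emitted script.
private static string EscapeForJsString(string value)
{
    return value.Replace("\\", "\\\\").Replace("'", "\\'");
}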

The only thing left to code now is the web part connection so add the following two methods to the web part: –

[ConnectionConsumer("Location Map Data")]
public void GetConnectionInterface(IWebPartTable provider)
{
    TableCallback callback = new TableCallback(ReceiveTable);
    _provider = provider;
    provider.GetTableData(callback);
}
 
public void ReceiveTable(object providerTable)
{
    _tableData = providerTable as ICollection;
    RegisterSilverlightOnLoadFunction();
}

Because the data connection works asynchronously our GetConnectionInterface method defines a callback method that will be executed once the data has been returned from the list. Once we have the data the callback function, ReceiveTable, can write the required JavaScript.

Now that’s all we need to do to get the web part to function. This version (for brevity’s sake) has zero error handling so before you put this thing anywhere near a non-development SharePoint machine make sure that you include some exception handling.

If you have the URL of a SharePoint site in the “Start browser with URL” field in Project Properties > Debug then you can hopefully just right-click the project in Visual Studio and select Deploy. Again, if you’re going to a production environment you’ll want to package the web part code (plus the .xap and .js files from the Silverlight project) as a .wsp file.

What we need now is a SharePoint site (hopefully you’ve already got one of these lying around). If you’ve already got a list with location data then great. If not, then just create a new custom list with columns for Latitude and Longitude (both numbers).

Once you’ve got your Locations list add your web part and a list view web part to a page (new or otherwise). You’ll need to set the height of the Bing Map web part in pixels to get it to show. What you should have now is a blank map of the world plus your empty Locations list. Go ahead and add some data into your list.

Bing Map Web Part Connections

Once you’ve done that, change the settings on the Bing Map web part by clicking the menu arrow at the top right-hand corner of the control and selecting Modify Shared Web Part. Once the page reloads in edit mode, check that the column names defined in the map web part match those in the Locations list: in the right-hand panel expand the Map Settings section and ensure the values in the Latitude, Longitude and Title Column fields are valid column names in your list. Then click the edit menu arrow at the top right of the control and select Connections > Get Location Map Data From > Locations (or whatever your list happens to be called). The page will reload again and this time you should see pushpins on the map at the latitudes and longitudes specified in your list. Click Exit Edit Mode in the top right to view the page.

The final result should resemble something like the following (I’ve zoomed in so that we can distinguish between the Glasgow and Edinburgh pushpins).

Bing Map Web Part