Bundling Angular for production in Visual Studio 2015

There are many ways to bundle an Angular SPA. One approach I followed is from a Code Project article, which recommends a gulp task script and SystemJS-Builder.

One thing I couldn’t make work, even though I followed the article, was bundling the Angular app for a production environment. This is because I started my Angular app from the official Angular tutorial, so my app is slightly different from the one in the Code Project article, and I probably used different versions of the dependencies.

I had a few issues while I couldn’t produce a production-ready bundle.

  1. As I kept using SystemJS in production, the Angular app had to fetch many JavaScript files while loading the page, which made page loads slow.
  2. The browser cache prevented users from loading the updated JavaScript of the Angular app.

The solution

Use SystemJS-Builder. The great thing about using SystemJS-Builder for a production environment is that it bundles all the Angular dependency files and my own Angular code into a single minified JavaScript file, which addresses both of my issues. It’s like magic.
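Bundling fixes the load-time issue directly; for the browser-cache issue, a common complementary technique (not from the Code Project article — the names below are hypothetical) is to fingerprint the bundle filename with a content hash, so every new build gets a new URL that the browser cannot serve from a stale cache. A minimal sketch:

```python
import hashlib

def fingerprint(filename: str, content: bytes) -> str:
    """Insert a short content hash into the name, e.g. bundle.min.js -> bundle.90015098.min.js."""
    digest = hashlib.md5(content).hexdigest()[:8]
    base, rest = filename.split(".", 1)
    return f"{base}.{digest}.{rest}"

# A changed bundle gets a new name, so users always fetch the updated file.
new_name = fingerprint("bundle.min.js", b"console.log('v2');")
```

Any build step (gulp, a script, CI) can apply this rename and update the script tag accordingly.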

Actually, I was already using SystemJS-Builder, as suggested in the Code Project article, to bundle the Angular dependency files into a single angular4-dev-min.js. However, it wasn’t working for the buildForProduction stage, which uses the buildStatic function.

The main errors I experienced were “Error on fetch for app.module” and “not found app.XXX.html”, the latter referring to the templates of my Angular app. Clearly, it was related to path configuration. As my app was working well when loaded by SystemJS, it looked to me like a SystemJS bug.

The first error was resolved by the workaround described in the bug’s comments, as below.

Use a .js extension in all imports of my own Angular app files. As the fetch error kept occurring for other JS files, I had to add .js to every such import in all of my Angular .ts files.

import { AppModule } from './app.module.js';
import { AppRoutingModule, routableComponents } from './app-routing.module.js';
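Since every relative import needed the extension, a small throwaway script can do the rewriting in bulk. This is a hypothetical helper of my own, not from the article, and it is deliberately naive, so review the diff before committing:

```python
import re

# Append ".js" to extensionless relative import specifiers in TypeScript source.
# Naive on purpose: it does not parse TypeScript, and it would also touch
# non-JS specifiers (e.g. imported CSS paths), so check the result by hand.
IMPORT_RE = re.compile(r"(from\s+['\"])(\.{1,2}/[^'\"]+?)(?<!\.js)(['\"])")

def add_js_extensions(source: str) -> str:
    return IMPORT_RE.sub(r"\1\2.js\3", source)
```

Run it over each .ts file's contents; specifiers already ending in .js are left untouched.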

The second error seems to have happened only to me. The solution was adding an absolute path (/App) in the @Component section. The original Code Project article used ‘./App‘, which didn’t work for me, because it added an extra ‘/App’ segment, producing paths like ‘/App/App’ when running the app in debug mode.

 moduleId: module.id,
 selector: 'my-app',
 templateUrl: '/App/app.component.html',
 styleUrls: ['/App/app.component.css']

Just to share the dependencies I used, the relevant part of package.json is below.

"@angular/common": "^4.0.0",
"systemjs": "0.20.18",
"systemjs-builder": "^0.16.6",

The one remaining issue with the solution above is that an absolute path for styleUrls doesn’t work in Angular 2 or 4. This is an Angular bug, and the workaround for now is to put the CSS in the HTML file, keeping the URL as it is.

Recursive mocking using Moq


Mocking a method on an interface that is exposed as a property of another interface can get tricky.

I’ll take a look at an easy way to mock recursively using Moq.


Moq mocking library (NuGet package)


There is a class with a dependency on the IFoo interface; IFoo contains an IBar interface as a property, and IBar has a method, which is what I want to mock.

Let’s think about a class like below.

public class MoqTest
{
    private IFoo _foo;

    public MoqTest(IFoo foo)
    {
        this._foo = foo;
    }

    public string Run(string id, string pass)
    {
        var result = _foo.Bar.DoBar(id, pass);

        // do something with result

        return result;
    }

    public string RunSimple()
    {
        var result = _foo.DoFoo();
        // do something with result

        return result;
    }
}

The MoqTest class has a dependency on IFoo, IFoo has a Bar property, and Bar has a DoBar method. What I need to test is the logic inside the Run method of the MoqTest class, so I need to mock Foo, Bar, and DoBar.

Now, let’s quickly have a look at IFoo and IBar.

public interface IFoo
{
    IBar Bar { get; }
    string DoFoo();
}

public interface IBar
{
    string DoBar(string id, string pass);
}

Now the Foo and Bar classes look like below.

public class Foo : IFoo
{
    private IBar _bar;

    public IBar Bar => _bar ?? (_bar = new Bar());

    public string DoFoo()
    {
        return "Foo done";
    }
}

public class Bar : IBar
{
    public string DoBar(string id, string pass)
    {
        return "Bar done";
    }
}

Find and install the Moq NuGet package if you haven’t installed Moq yet. I’ll skip the explanation of how to use a NuGet package; there are plenty of guides on this already.

Now, let’s test simple one first.

using Moq;

public void Test_SimpleMock()
{
    var mockFoo = new Mock<IFoo>();
    // without this Setup, the loose mock would return null from DoFoo()
    mockFoo.Setup(f => f.DoFoo()).Returns("Foo done");

    var test = new MoqTest.MoqTest(mockFoo.Object);
    var expected = "Foo done";
    var actual = test.RunSimple();
    Assert.AreEqual(expected, actual);
}

This is straightforward mocking with Moq, if you know how to use Moq.

See: Moq Quick Start

Next, let’s mock recursively, i.e. mock an interface inside an interface.

I initially expected Moq to create recursive mocks with a setup like below.

var mockBar = new Mock<IBar>();

Or, create the Bar mock first, set up mockBar, and then create the Foo mock, hoping Moq understands what I want and wires up the recursive mock like below.

 var mockBar = new Mock<IBar>();
 var mockFoo = new Mock<IFoo>();

My instinct was wrong; the correct solution is as below.

public void Test_RecursiveMock()
{
    var mockFoo = new Mock<IFoo>() { DefaultValue = DefaultValue.Mock };
    var bar = mockFoo.Object.Bar;

    var barMock = Mock.Get(bar);
    barMock.Setup(a => a.DoBar("id", "pass")).Returns("mocked");

    var test = new MoqTest.MoqTest(mockFoo.Object);
    var expected = "mocked";
    var actual = test.Run("id", "pass");

    Assert.AreEqual(expected, actual);
}

A few noticeable points.

Use the DefaultValue.Mock option, which makes the mock automatically return mocks for all mockable properties and methods. Without it, only members configured via Setup return useful values.

{ DefaultValue = DefaultValue.Mock }

Use Mock.Get() to get hold of the mock behind the auto-created Bar instance, and then Setup a method on it with a return value.

Actually, recursive mocking using Moq is well explained in Moq Quick Start.
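For comparison only (this is not Moq), the same auto-mocking idea exists in other ecosystems: Python’s unittest.mock creates nested members on first access, which is roughly what DefaultValue.Mock opts into. A sketch of the analogous flow:

```python
from unittest.mock import MagicMock

# Accessing foo.Bar auto-creates a nested mock, much like DefaultValue.Mock;
# setting return_value plays the role of barMock.Setup(...).Returns(...).
foo = MagicMock()
foo.Bar.DoBar.return_value = "mocked"

assert foo.Bar.DoBar("id", "pass") == "mocked"
```

The difference is that Moq makes this behavior opt-in per mock, while MagicMock does it by default.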

However, if your application is complex in reality, mocking can be tricky. For instance, IFoo could be an authentication API or a third-party library.

All test code above can be found on GitHub

Moq is awesome!

How to migrate WordPress from a server to a PC

There are many ways to migrate WordPress from one server to another. I’ll go through the easiest way to migrate WordPress from a server to a PC.

One possible reason to keep a copy of WordPress on a local machine is that it’s useful when testing a new version or new plugins. When changing content, we normally don’t need to test it on a PC first; we can just update it on the server. As WordPress provides version control for content, we can roll back to a previous revision if we need to.

Let’s migrate.

If you haven’t used WordPress on a PC, the easiest way to host it on a Windows PC is to install it using the Bitnami installation package, which includes the PHP and MySQL needed to run WordPress on a Windows machine. There are various ways to do this, but the Bitnami package seems the easiest solution.

Part 1. Install WordPress on the PC

If you have already installed WordPress on your PC, you can skip Part 1.

Download the installer from https://bitnami.com/stack/wordpress/installer. You may need to register as a free member if you haven’t yet.

Run the downloaded installer.

Installation is straightforward. In most cases, just select the default options.

You can change the installation folder if you want.

Choose the login and password wisely, as they will be used to log in to WordPress, and the same password will be set for MySQL’s ‘root’ account too.

This is a nice touch: the installer shows a default port that’s available. As I’m currently using 80 and 81, the installer suggested 82.

As I’m already using 443 for SSL, the installer suggested 444 as the SSL port for the newly installed WordPress. You can accept the suggestion.

The ‘Deploy WordPress to the client’ option is selected by default. I unchecked this option as I don’t plan to use it for now.

Once installed, the ‘Bitnami WordPress Stack Manage Tool’ is available, which looks like below. This tool makes it easy to access the app (WordPress) and to start/stop PHP/MySQL. It’s very convenient, as you may need to change the PHP configuration or access MySQL while playing with WordPress in the future.

Now the WordPress installation is complete. Browse to localhost:82 and you can see that WordPress is running.

As WordPress is not installed at the root URL but under “/wordpress”, some configuration is required if your WordPress on the server runs at the root URL.

You can access the admin page by logging in with the id and password you chose when running the Bitnami installer.

You can see that phpMyAdmin is also installed. This will be used to migrate the WordPress DB.

Check the database name shown below to use later.

Part 2. Migration.

1. Copy the WordPress files from the server to the PC.

This can be done by copying all files using FTP, which is easy but takes a long time. Alternatively, you can zip the files on the server and unzip them on the PC after transferring the single archive. But for that you need SSH access, which is normally not enabled by default for security reasons; if you can set up an SSH connection, log in to the server and compress the files there.
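The zip-and-transfer step can be sketched with Python’s standard library (the paths in the comment are examples for my setup, so adjust them to yours):

```python
import shutil

def pack(wordpress_dir: str, archive_base: str) -> str:
    """On the server: pack the WordPress files into one zip archive; returns its path."""
    return shutil.make_archive(archive_base, "zip", wordpress_dir)

def unpack(archive_path: str, htdocs_dir: str) -> None:
    """On the PC, after transferring the archive: unpack into the Bitnami htdocs folder."""
    shutil.unpack_archive(archive_path, htdocs_dir)

# e.g. pack("/var/www/wordpress", "wordpress-files") on the server, then
# unpack("wordpress-files.zip", r"C:\Bitnami\wordpress-4.6.1-5\apps\wordpress\htdocs")
```

Any archiving tool (tar, 7-Zip, etc.) does the same job; the point is to transfer one file instead of thousands.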

WordPress on the PC is installed at C:\Bitnami\wordpress-4.6.1-5\apps\wordpress\htdocs. You just need to replace all the files there with the ones from the server.

Once the files are replaced with the ones from the server, WordPress on the PC cannot connect to the database.

We need to change a few configuration settings to bridge the differences between the server and the PC.

Modify wp-config.php on the PC to change the DB name and access credentials as below.

/** The name of the database for WordPress */
define('DB_NAME', 'bitnami_wordpress');

/** MySQL database username */
define('DB_USER', 'root');

/** MySQL database password */
define('DB_PASSWORD', 'YourPassword');

As the URL of WordPress on the PC includes ‘/wordpress’, we need to make the same change in wp-config.php.

define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST'] . '/wordpress');
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST'] . '/wordpress');

With this change, WordPress on the PC should work again, as below.

Now, it’s time to change the theme to the same one being used on the server. As we copied all the files, the theme to select should be listed in the Appearance > Themes menu of the admin page. Choose the theme and click Activate.

If there are any recommended plugins bundled with the theme, install and activate them as well from the Plugins menu.


2. Migrate database

Export the whole database from the server using phpMyAdmin and import it into WordPress on the PC.

Go to phpMyAdmin on the server, select the WordPress database, and then go to the Export menu. Just click the Go button with all options at their defaults.

Save the exported database on the local machine.

To import the database, let’s create a new database on the PC with a new name.

Select the newly created database and import the file with all options at their defaults. It’s that easy.

Once the database import is complete, change the database name in wp-config.php again to the newly created one.

define('DB_NAME', 'bitnami_wordpress1');

You should now see the fully migrated WordPress site on your PC.


Use Hangfire with SQLite in ASP.Net 4.5.

I’ll set up Hangfire with SQLite, which can be a quick and easy solution for small projects that don’t need a full-blown DB server.

The versions of the frameworks and libraries are as follows.

ASP.Net MVC 4.5.2
Hangfire SQLite extension 1.1.1

Confusion selecting the correct SQLite library

Going to the http://hangfire.io/extensions.html page, we can see the extension list as below.


I will use Hangfire SQLite version 1.1.1, but if you click the project name, Hangfire SQLite, it goes to https://github.com/vip32/Hangfire.SQLite, which is not the source code of the Hangfire SQLite 1.1.1 NuGet package. That GitHub repository is not a working version.

Actually, https://github.com/wanlitao/HangfireExtension is the source code of the 1.1.1 NuGet package, and it is the working version. So, if you want to look into the source code, be careful about this.


Just for demonstration purposes, create an ASP.Net MVC 4.5.2 project using the default project template in VS2015 and install SQLite using NuGet.


This should install System.Data.SQLite, System.Data.SQLite.EF6 and System.Data.SQLite.Linq.


Install Hangfire using NuGet.


This will install Hangfire.Core and Hangfire.SqlServer


Install the Hangfire SQLite extension using NuGet.


This will install Hangfire.SQLite


Do the configuration and add a recurring job in Startup.cs.


When configuring Hangfire to work with SQLite, use UseSQLiteStorage() with the connection string name SQLiteHangfire, which can be anything; we will declare it later in Web.config.

As SQLite cannot handle concurrent requests, set WorkerCount = 1.

Just for testing, I add a recurring job that prints the current time to the VS output window every minute.

public void Configuration(IAppBuilder app)
{
    // Hangfire configuration
    var options = new SQLiteStorageOptions();
    GlobalConfiguration.Configuration.UseSQLiteStorage("SQLiteHangfire", options);

    // without these two calls the dashboard and the background server won't start
    var serverOptions = new BackgroundJobServerOptions { WorkerCount = 1 };
    app.UseHangfireDashboard();
    app.UseHangfireServer(serverOptions);

    // Add scheduled jobs
    RecurringJob.AddOrUpdate(() => Run(), Cron.Minutely);
}

public void Run()
{
    Debug.WriteLine($"Run at {DateTime.Now}");
}

Add connection string as below so that SQLite can be accessed.

 <add name="SQLiteHangfire" connectionString="Data Source=E:\OneDrive\SourceCodes\Study\ASPNetHangfireSQLite\ASPNetHangfireSQLite\App_Data\Hangfire.sqlite" providerName="System.Data.SQLite.EF6" />

All done. Now let’s run the app and browse to “/hangfire”.


Cool, Hangfire launched and appears to be working. One recurring job has been registered.


Jobs are running every minute.


Checking the folder where the database should exist, the SQLite database was created automatically without any extra work, which is a really nice feature.


In the VS output window, we can see a timestamp written every minute; however, the timestamps are not exactly one minute apart. I’m not sure why there is a few seconds’ difference between recurring runs.



In summary, we set up ASP.Net MVC with Hangfire and SQLite, which was straightforward. Hangfire was easy to set up, the dashboard was convenient for monitoring and managing jobs, and the automatically created database schema was pretty impressive.

After running this for a few days, I saw a few issues.

  1. The Hangfire server count fluctuated between 2 servers and 0 servers, rather than staying at 1. That doesn’t inspire confidence in Hangfire with SQLite, although the recurring job worked as expected during those periods.
  2. The SQLite database size keeps increasing, which is a concern for performance once it reaches a certain point. I think we may need some regular maintenance to keep the size down and maintain performance.
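On the second point, the maintenance could be as simple as periodically running SQLite’s VACUUM to reclaim the pages left behind by deleted job rows. This is not a Hangfire feature, just a sketch using any SQLite client (here Python’s built-in sqlite3; the path in the comment is an example):

```python
import sqlite3

def compact(db_path: str) -> None:
    """Rebuild the SQLite file so space from deleted rows is actually reclaimed."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("VACUUM")
    finally:
        conn.close()

# e.g. compact(r"App_Data\Hangfire.sqlite") from a scheduled maintenance task
```

Since SQLite doesn’t handle concurrency well, it seems safest to run this while the Hangfire server is stopped or idle.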



Jenkins – Build set up for .Net project

Continuous integration and continuous deployment are the holy grail of modern DevOps.

Jenkins is the most popular continuous integration tool out there, but it requires some extra knowledge to set up, because Jenkins does not provide full out-of-the-box support for .Net projects.

Installing Jenkins and setting it up to use a source control system such as Git is relatively easy.

Tip – Set up to use BitBucket

When using GitHub, it’s OK to use id/password as normal.
However, when it comes to Bitbucket, the repository URL needs to look like https://bitbucket.org/rocker8942/marketwatchconsole.git, without a username at the start of the URL. I spent some time pulling my hair out over this.

Setting up the build process can be a bit challenging and requires several external tools. So, skipping past the source control setup, I’ll focus on the build and release/deployment automation process in Jenkins.

It would be easier if you used TeamCity and Octopus Deploy, but if you have to go with Jenkins due to cost or whatever, the rest of the process will be helpful.

The external tools below will be used to set up the Jenkins build steps.

  • OctoPack – a NuGet package that packages the project as a NuGet package so that it’s easy to handle.
  • Nuget.exe
  • MSBuild.exe
  • MSTest.exe
  • Powershell Script
  • Octo.exe – Octopus Deploy console app (if you use Octopus Deploy)

It sounds like a lot when you list it all, but most of these should be familiar to a .Net developer, even if they are not often used as console applications.

There are a few steps to set this up.


#1 Nuget restore

NuGet packages don’t need to be included in the source control system; they can be restored just before the build.

Just use the NuGet command line.


"C:\Program Files (x86)\NuGet\nuget.exe" restore yourSolutionName.sln


#2 Build

#2-1 Just build in Jenkins

The build is basically done by MSBuild using the arguments below.

/t:Rebuild /p:Configuration=Release;RunOctoPack=true /p:OctoPackPackageVersion=1.0.${BUILD_NUMBER}

By running OctoPack, the compiled files are packaged as a NuGet package, which is basically a zip file.

Even though we build the app with the configuration parameter ‘Release’, which compiles the libraries in the ‘Release’ configuration, Web.config is not transformed to the Release version. So, we will transform it to the Release version later using PowerShell.

#2-2 Build and publish to Octopus Deploy

Or, we can publish the compiled package to the Octopus Deploy server, if you use Octopus Deploy, by using the arguments below. After this, Octopus Deploy can release the package to the servers.

/t:Rebuild /p:Configuration=Release;RunOctoPack=true;OctoPackPublishApiKey=UseYourOwnAPIKey;OctoPackPublishPackageToHttp=http://localhost:8201/nuget/packages

It’s similar to the previous arguments; OctoPackPublishApiKey and OctoPackPublishPackageToHttp are just added.


#3 Run Octopus to create release

Using the Octopus console tool, we can even drive Octopus Deploy from Jenkins (or a custom build automation script) to create a release.

"C:\Octopus\Tools\Octo.exe" create-release --project MarketWatchConsole --version 1.1.%BUILD_NUMBER% --packageversion 1.1.%BUILD_NUMBER% --server http://localhost:8110/ --apiKey %OctopusApiKey% --releaseNotes "Jenkins build [%BUILD_NUMBER%](http://localhost:8054/job/MarketWatchConsole/%BUILD_NUMBER%)/"


#4 Run Test

Unit tests can be run by calling MSTest.exe from Jenkins.

"C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\MSTest.exe" /resultsfile:"%WORKSPACE%\AutomationTestAssistantResults.%BUILD_NUMBER%.trx" /testcontainer:"%WORKSPACE%\TestProject\TestProjectTests\bin\Release\TestProjectTests.dll" /nologo


#5 Release

To get the benefit of powerful PowerShell, the release script can be written in PowerShell and loaded into Jenkins as a build step.

# Demo.ps1

$destination = "D:\DemoProject"

function XmlDocTransform($xml, $xdt)
{
    if (!$xml -or !(Test-Path -path $xml -PathType Leaf)) {
        throw "File not found. $xml";
    }
    if (!$xdt -or !(Test-Path -path $xdt -PathType Leaf)) {
        throw "File not found. $xdt";
    }

    $scriptPath = "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\Web"
    Add-Type -LiteralPath "$scriptPath\Microsoft.Web.XmlTransform.dll"

    # load the source config, apply the XDT transform, and save it back in place
    $xmldoc = New-Object Microsoft.Web.XmlTransform.XmlTransformableDocument;
    $xmldoc.PreserveWhitespace = $true
    $xmldoc.Load($xml);

    $transf = New-Object Microsoft.Web.XmlTransform.XmlTransformation($xdt);
    if ($transf.Apply($xmldoc) -eq $false)
    {
        throw "Transformation failed."
    }
    $xmldoc.Save($xml);
}

# Clean up - Pre
remove-item $destination\bin\*
remove-item $destination\Content\*
remove-item $destination\fonts\*
remove-item $destination\Scripts\*
remove-item "$destination\Service References\*"
remove-item $destination\Views\*

# unzip package
& 'C:\Program Files (x86)\NuGet\nuget.exe' install DemoProject -Source $ENV:WORKSPACE\DemoProject\bin -OutputDirectory "D:\" -ExcludeVersion

# transform configs
XmlDocTransform $destination\Web.config $destination\Web.Release.config

# Clean up - Post
remove-item $destination\*.$ENV:BUILD_NUMBER.zip
remove-item $destination\Web.Debug.config
remove-item $destination\Web.Release.config
remove-item $destination\*.xml
remove-item $destination\*.nuspec
remove-item $destination\bin\*.pdb
remove-item $destination\DemoProject.nupkg

As most of the automation is done by calling console applications, Jenkins itself doesn’t do much here; its role is to host and manage all these steps in a central place.

A good thing about setting up the automation in Jenkins using various external console tools is that the same automation can be achieved without Jenkins, because the build/test/release process can be reproduced with a custom script.