SSL Certificate Jungle – Multiple Domains, Wildcards, and Both

by Oliver 10. September 2014 22:05

Recently, we've decided to add https:// support to Camping.Info. Since we've been running our application servers behind an NGINX reverse proxy for a while now, the natural choice in our setup was to terminate the secured connections at the NGINX server, which has CPU usage values somewhere between 1% and 5%. This is also called SSL offloading and will allow us to keep all the SSL setup and potential runtime overhead off of our application servers.

Certificate Options

On Camping.Info, we serve almost all static content from dedicated subdomains. Since we want to secure the whole site, we need to have valid SSL certificates for camping.info and all of its subdomains.

Using Separate Certificates

The first solution to the problem would be using one SSL certificate for www.camping.info and one (wildcard) certificate for *.camping.info that would secure all subdomains of camping.info. If we wanted to secure camping.info itself as well, we would need a third certificate just for that one domain, because wildcard certificates do not cover the parent domain without a subdomain.

Using a SAN (or UCC) Certificate

Subject Alternative Names (SAN) can help in this situation. The subjectAltName field of an SSL certificate can contain many domain names that will be secured by that certificate. In our scenario, we could have put camping.info, www.camping.info, and each of the more than 25 other subdomains in there. Unfortunately, that would complicate things for the use of new subdomains in the future, which would be missing from the list. A wildcard certificate really seems like the natural choice when you have more than 5 subdomains to secure or are expecting to have more of them in the near future.
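If you want to check which names a given certificate actually covers, OpenSSL will print the SAN list for you. Here's a sketch (it assumes OpenSSL 1.1.1+ for the -addext/-ext options, and it creates a throwaway self-signed certificate just to have something to inspect – the file names and domain names are only illustrative):

```shell
# create a throwaway self-signed certificate with a wildcard SAN
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/san-test.key -out /tmp/san-test.pem \
  -subj "/CN=camping.info" \
  -addext "subjectAltName=DNS:camping.info,DNS:*.camping.info"

# print the names this certificate actually secures
openssl x509 -in /tmp/san-test.pem -noout -ext subjectAltName
```

The same x509 inspection command works on any certificate file a CA hands you, so you can verify the SAN list before installing it.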

Using a Wildcard Certificate with SANs

It turns out that wildcard certificates can well be combined with the usage of the subjectAltName field. Most CAs make you pay quite a lot for this combination, but we've also found quite an affordable offer.

Multiple SSL Certificates on a Single NGINX Instance – Beware

Had we chosen the first certificate option, i.e. using at least two certificates, we would now need to install both of them on our NGINX reverse proxy server.

This blog post on how to install multiple SSL certificates on NGINX is a very good read – but be sure to read the comments as well. It turns out that the Server Name Indication (SNI) extension to the TLS protocol that allows you to do so will lock out clients that don't support SNI. The most prominent example of such a client is any version of Internet Explorer running on Windows XP, and even though Microsoft has ended support of XP almost half a year ago, we're still seeing 11% of our Windows users running XP accounting for 6% of our total traffic – a number we cannot ignore.

Wanting to use separate SSL certificates on one NGINX instance without relying on SNI, we would need two different IP addresses pointing to that same server so that each certificate could respond to requests on one of those addresses. This would both complicate our setup and incur higher monthly infrastructure costs that we'd gladly avoid.

Installing a Single SSL Certificate on NGINX

The option we finally chose is to use a wildcard SAN certificate where we'd enter camping.info, www.camping.info, and *.camping.info as the different subject alternative names. Installing that into NGINX is straightforward once you know how.
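For illustration, a minimal server block might look something like this (a sketch – the file paths and the app_servers upstream are placeholders, not our actual configuration; note that the certificate file should contain the server certificate concatenated with the intermediate chain):

```nginx
server {
    listen 443 ssl;
    # one wildcard SAN certificate covers camping.info and *.camping.info
    server_name camping.info *.camping.info;

    ssl_certificate     /etc/nginx/ssl/camping.info.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/camping.info.key;

    location / {
        # SSL offloading: traffic to the application servers stays plain HTTP
        proxy_pass http://app_servers;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Since there is only one certificate, every client gets it regardless of SNI support, so even those old XP browsers are served correctly.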

Happy SSL'ing!

Turn Off / Remove Ads in Skype's Chat Window On a Windows OS

by Oliver 19. August 2014 21:37

I've got quite annoyed by seeing the same ad in my Skype chat window, and today I simply had enough of it. A Google search led me to this YouTube video which shows how to block ads in Skype by denying Internet Explorer access to the domain the ads are served from. Just follow the steps below and you're done:

  1. Open Internet Options
  2. Switch to the Security tab
  3. Select Restricted Sites
  4. Click the Sites button
  5. Type the address of Skype's ad-serving domain (shown in the video) into the text box
  6. Click Add
  7. Click Close
  8. Click OK

For those changes to take effect in Skype you need to leave your chat window e.g. by clicking on your profile in the upper left corner. Now, when you open a chat window no ads will show.


There are other ways to stop Skype from showing ads, and here's a really thorough post on that topic.

Happy Skype'ing!

Pieces of C# – long and short

by Oliver 9. August 2014 12:52

Today, I found this dusty piece of code in our code base:

Stone-age version
public string GetIframeIds()
{
    var result = new StringBuilder();
    var first = true;
    foreach (var iframe in _iframes)
    {
        if (!first) result.Append(',');
        else first = false;
        result.Append("'" + iframe.ClientID + "'");
    }
    return result.ToString();
}

… and just had to rewrite it to this:

Updated version
public string GetIframeIds()
{
    return string.Join(",", _iframes.Select(ifr => "'" + ifr.ClientID + "'"));
}

I couldn't help but run some micro-performance tests on these code snippets, since StringBuilder is usually quite fast. I ran each of the snippets with an _iframes length of 30 in a loop of 10,000 iterations, and yes, the first version is faster with 215 ms vs. 360 ms. But then, in production I run that code block only once per request, not 10,000 times as in the test. Spending 21.5 µs or 36 µs in that method won't make any significant difference, especially when looking at request execution times beyond 100 ms.
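For reference, the micro-test was nothing fancy – a sketch of the kind of harness I used (FakeIframe and the ID values are stand-ins for our real server controls):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class FakeIframe { public string ClientID { get; set; } }

static class Benchmark
{
    static void Main()
    {
        // mimic the production setup: 30 iframes per page
        var iframes = Enumerable.Range(0, 30)
            .Select(i => new FakeIframe { ClientID = "iframe" + i })
            .ToList();

        var watch = Stopwatch.StartNew();
        for (int i = 0; i < 10000; i++)
        {
            // the snippet under test
            var ids = string.Join(",", iframes.Select(ifr => "'" + ifr.ClientID + "'"));
        }
        watch.Stop();
        Console.WriteLine(watch.ElapsedMilliseconds + " ms");
    }
}
```

Swap the loop body for the StringBuilder version to compare the two.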

Why should you or I care?

The second code block is arguably easier to read, quicker to write, and harder to get wrong.

Happy coding!

Remove The Padding From The Google Map API v3 fitBounds() Method

by Oliver 4. July 2014 21:56

In our customizable web portal platform discoverize we offer searching for results using a Google Map. In a recent iteration, we were trying to improve the overall map usage experience, and one thing we wanted to do was to zoom into the map as far as possible for all results to still appear on the map.

map.fitBounds(map.getBounds()); – does not do what you would like it to

The natural choice to achieve that would be to use the fitBounds method on the Map object with the bounding rectangle for the coordinates of all results. Unfortunately, though, Google chose to add a non-configurable 45px margin around that bounding box so that in a lot of cases the map appears to be zoomed out too far for what would be possible. That's why map.fitBounds(map.getBounds()); will zoom the map out!

Zoom in if you can

After a bit of searching I found a workaround on this Google Groups thread: map.fitBounds(map.getBounds()) zooms out map. The idea behind the solution provided there is to check whether the bounds to fit on the map wouldn't still fit using a higher zoom level and if yes, apply that zoom level. Since I had some problems with the code from the thread I reworked it slightly and now have this:

function myFitBounds(myMap, bounds) {
    myMap.fitBounds(bounds); // calling fitBounds() here to center the map for the bounds

    var overlayHelper = new google.maps.OverlayView();
    overlayHelper.draw = function () {
        if (!this.ready) {
            var extraZoom = getExtraZoom(this.getProjection(), bounds, myMap.getBounds());
            if (extraZoom > 0) {
                myMap.setZoom(myMap.getZoom() + extraZoom);
            }
            this.ready = true;
            google.maps.event.trigger(this, 'ready');
        }
    };
    overlayHelper.setMap(myMap); // draw() fires once the overlay's projection is ready

function getExtraZoom(projection, expectedBounds, actualBounds) {

    // in: LatLngBounds bounds -> out: height and width as a Point
    function getSizeInPixels(bounds) {
        var sw = projection.fromLatLngToContainerPixel(bounds.getSouthWest());
        var ne = projection.fromLatLngToContainerPixel(bounds.getNorthEast());
        return new google.maps.Point(Math.abs(sw.y - ne.y), Math.abs(sw.x - ne.x));
    }

    var expectedSize = getSizeInPixels(expectedBounds),
        actualSize = getSizeInPixels(actualBounds);

    if (Math.floor(expectedSize.x) == 0 || Math.floor(expectedSize.y) == 0) {
        return 0;
    }

    var qx = actualSize.x / expectedSize.x;
    var qy = actualSize.y / expectedSize.y;
    var min = Math.min(qx, qy);

    if (min < 1) {
        return 0;
    }

    return Math.floor(Math.log(min) / Math.LN2 /* = log2(min) */);
}

Replace map.fitBounds(bounds) with myFitBounds(map, bounds)

That's all you have to do to zoom in as far as possible while keeping your bounds on the map.
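To see the zoom arithmetic in isolation, here's a small standalone sketch of the same math – plain width/height pairs stand in for the pixel sizes computed from the projection, and the names are mine, not part of the Maps API:

```javascript
// Standalone sketch of the extra-zoom math: how many zoom levels can we
// add while the expected box still fits inside the actual (visible) box?
function extraZoom(expected, actual) {
    // degenerate bounds: nothing meaningful to fit
    if (Math.floor(expected.w) === 0 || Math.floor(expected.h) === 0) return 0;

    // how many times the expected box fits into the visible box
    var q = Math.min(actual.w / expected.w, actual.h / expected.h);
    if (q < 1) return 0;

    // each zoom level doubles the scale, so take log2 and round down
    return Math.floor(Math.log(q) / Math.LN2);
}

console.log(extraZoom({ w: 100, h: 80 }, { w: 500, h: 400 })); // both ratios are 5, log2(5) ≈ 2.32, so 2
```

So a bounding box five times smaller than the viewport in both dimensions allows zooming in two more levels.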

Happy coding!

Learning PowerShell

by Oliver 30. May 2014 21:34

Today, I finally decided that I want to get to grips with PowerShell and have it available in my toolbox for those everyday developer tasks. For a fresh start, I wanted to make sure I'm running the latest and greatest of PowerShell, but how do I find out which version I have installed?

What version am I running?

Just fire up a PowerShell instance and type $psversiontable or $host.version:

PS C:\Windows\system32> $psversiontable

Name                           Value
----                           -----
PSVersion                      4.0
WSManStackVersion              3.0
CLRVersion                     4.0.30319.18444
BuildVersion                   6.3.9600.16406
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0}
PSRemotingProtocolVersion      2.2

PS C:\Windows\system32> $host.version

Major  Minor  Build  Revision
-----  -----  -----  --------
4      0      -1     -1

Actually, when I ran this I didn't have the 4.0 version installed yet. So where did I get it?

How to install PowerShell 4.0 (the newest version as of mid-2014)?

Go here and choose the right link for you: How to Install Windows PowerShell 4.0. That's it.

Make use of great tooling: use the ISE

Last but not least, especially for those of you who like me are just getting started, make sure you're using the great Integrated Scripting Environment (ISE) that comes bundled with PowerShell:


Now, get scripting!

Delete a Large Number of Rows from a Table in SQL Server

by Oliver 28. May 2014 12:09

Recently, we had to make some space available in one of our SQL Express instances that was getting close to its 10 GB limit of stored data, so I set out to delete some old data from two of our largest tables. One contained about half a million rows, the other a bit over 21 million.

Simple Deletion Would Take… Forever

The simplest SQL statement to delete all rows that were created before 2012 would be the following:

DELETE FROM [dbo].[Message] WHERE DateCreated < '20120101'

I can't even tell you how long this took because at 14 minutes I just cancelled the query execution (which took another 7 minutes to finish). This was the table with less than 500,000 rows where we wanted to delete a bit more than 200,000 rows.

Break Delete Operation Into Chunks

Searching for a solution to the problem, I came across this blog post on breaking large delete operations into chunks. It shows in good detail how the simple version above behaves against running a loop of a few tens of thousand deletes per iteration. An interesting aspect I hadn't thought of at that point was the transaction log growth that can become a problem with large delete operations. Running a loop allows you to do a log backup (in full recovery mode) or a checkpoint (in simple mode) at the end of each iteration so that the log will grow much more slowly.
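For illustration, such a loop might look like this (a sketch – the batch size of 50,000 is a knob to tune, and [dbo].[Message] is our table from above):

```sql
-- delete in batches until nothing is left to delete
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM [dbo].[Message]
    WHERE DateCreated < '20120101';

    SET @rows = @@ROWCOUNT;

    CHECKPOINT; -- simple recovery mode; take a log backup here instead in full recovery mode
END
```

Each iteration commits on its own, so the transaction log can be truncated between batches instead of growing for the whole delete.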

Unfortunately, though, this didn't help with the execution time of the delete itself, as you can also see from the graphs presented in above post.

Disable Those Indexes!

It turns out our [Message] table had six non-clustered indexes on it, all of which had to be written to for every row that was deleted. Even if those operations are fast, their processing time adds up over a few hundred thousand iterations. So let's turn them off! In fact, let's turn off only those that won't be used during our delete query. [We have one index on the DateCreated column which will be helpful during execution.]

This stackoverflow answer shows how to create some dynamic SQL to disable all non-clustered indexes in a database. I've modified it slightly to disable only the indexes of a given table:

Disable/Enable Table Indexes
DECLARE @table AS VARCHAR(MAX) = 'Message';
DECLARE @sqlDisable AS VARCHAR(MAX) = '';
DECLARE @sqlEnable AS VARCHAR(MAX) = '';

SELECT
    @sqlDisable = @sqlDisable + 'ALTER INDEX ' + idx.name + ' ON '
                    + obj.name + ' DISABLE;' + CHAR(13) + CHAR(10),
    @sqlEnable = @sqlEnable + 'ALTER INDEX ' + idx.name + ' ON '
                    + obj.name + ' REBUILD;' + CHAR(13) + CHAR(10)
FROM sys.indexes idx
JOIN sys.objects obj
    ON idx.object_id = obj.object_id
WHERE idx.type_desc = 'NONCLUSTERED'
    AND obj.type_desc = 'USER_TABLE'
    AND obj.name = @table;

RAISERROR(@sqlDisable, 0, 1) WITH NOWAIT;
RAISERROR(@sqlEnable, 0, 1) WITH NOWAIT;
--EXEC(@sqlDisable);
--EXEC(@sqlEnable);

Now, with those indexes disabled, the simple delete operation took a lot less than a minute! Even in the case of our 21 million rows table, deleting 7 million rows took only 1:02 on my machine. Of course, after deleting the unwanted rows, you need to re-enable the indexes again which took another minute, but all in all I'm happy with the result.

Copy Data to New Table and Drop Old Table

One other way of deleting rows that I've used in combination with changing the table schema at the same time is the following:

  • use a temporary table into which you copy all the rows you want to keep (the schema of which I modified to meet our new needs)
  • delete the original table
  • rename the temporary table to the original table's name
  • recreate all indexes you had defined before

This is basically what SSMS generates for you when you change the schema of a table, except for the indexes – you have to recreate them yourself.

As you can imagine, this approach becomes faster and leaves a smaller transaction log footprint as the amount of data to delete grows. It won't have any benefit if you delete less than half of the table's rows.

Choose the right tool for the job

There are quite a few other approaches and tips out there on how to speed up your deletion process. Which of those will actually help you get your deletion job done faster depends a lot on your concrete situation. I had to experiment quite a bit to find the sweet spot, but now that I've seen a few approaches I'm able to make a better decision in the future.

Encrypting Passwords and Keys in web.config

by Anton 19. April 2014 19:18

We wanted to encrypt the passwords that we store in the web.config of our web application. Most of the World Wide Web pointed to the use of aspnet_regiis.exe. We want to use the encrypted web.config on a few machines, so we need to import the decryption keys on those machines.

I pretty much used the walkthrough provided by Microsoft.

  1. Create a custom RSA key container: aspnet_regiis -pc "CampingInfo" -exp
  2. Grant the application access to the keys: aspnet_regiis -pa "CampingInfo" "NT AUTHORITY\NETWORK SERVICE". The ASP.NET identity can be found by creating and calling a page containing "Response.Write(System.Security.Principal.WindowsIdentity.GetCurrent().Name);"
  3. Add a configuration provider to the web.config:

     <configuration>
       <configProtectedData>
         <providers>
           <add name="CampingInfoProvider"
                type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                keyContainerName="CampingInfo"
                useMachineContainer="true" />
         </providers>
       </configProtectedData>
     </configuration>

  4. Put the settings to be encrypted in a custom section in the web.config:

     <configSections>
       <section name="secureAppSettings" type="System.Configuration.NameValueSectionHandler" />
     </configSections>
     <secureAppSettings>
       <add key="somepassword" value="xyz" />
     </secureAppSettings>

  5. Encrypt the custom section: aspnet_regiis -pef "secureAppSettings" "C:\<path to directory where web.config resides>" -prov "CampingInfo"
  6. Export the RSA key container: aspnet_regiis -px "CampingInfo" "c:\keys.xml" -pri
  7. Copy the xml file to a second server which runs the same application (with the same, now partially encrypted, web.config).
  8. Import the RSA key container on the second server: aspnet_regiis -pi "CampingInfo" "c:\keys.xml"
  9. Grant the application on the second server access to the keys as in step 2. (The identity may be different.)
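By the way, reading the encrypted section from code doesn't change at all – decryption happens transparently once the key container is in place. A sketch in C# (assuming the secureAppSettings section from above):

```csharp
using System.Collections.Specialized;
using System.Configuration;

// NameValueSectionHandler hands the section back as a NameValueCollection;
// the runtime decrypts it transparently using the RSA key container.
var secureSettings = (NameValueCollection)ConfigurationManager.GetSection("secureAppSettings");
string password = secureSettings["somepassword"];
```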



Retrieving random content items (rows) from a SQL database in Orchard with HQL queries

by Oliver 22. February 2014 12:37

We're adding some Premium functionality to discoverize right now, and part of that is the so-called Premium block which is a showcase of six Premium entries. Now, choosing the right entries for that block is the interesting part: as long as we don't have six Premium entries to show, we want to fill up the left over space with some random entries that haven't booked our Premium feature, yet.

Get random rows from SQL database

There are plenty of articles and stackoverflow discussions on the topic of how to (quickly) retrieve some random rows from a SQL database. I wanted to get something to work simply and quickly, not necessarily high performance. Incorporating any kind of hand-crafted SQL query was really the last option since it would mean to get hold of an ISessionLocator instance to get at the underlying NHibernate ISession to then create a custom SQL query and execute it. Not my favorite path, really. Luckily, the IContentManager interface contains the method HqlQuery which returns an IHqlQuery containing these interesting details:

/// <summary>
/// Adds a join to a specific relationship.
/// </summary>
/// <param name="alias">An expression pointing to the joined relationship.</param>
/// <param name="order">An order expression.</param>
IHqlQuery OrderBy(Action<IAliasFactory> alias, Action<IHqlSortFactory> order);

…and IHqlSortFactory contains a Random() method. This finally got me going!

HQL queries in Orchard

HQL queries are a great feature in (N)Hibernate that allow you to write almost-SQL queries against your domain models. I won't go into further detail here, but be sure to digest that!

Orchard's IContentManager interface contains the method HqlQuery() to generate a new HQL query. Unfortunately, there's almost no usage of this feature throughout the whole Orchard solution. So let me document here how I used the HqlQuery to retrieve some random entries from our DB:

// retrieve count items of type "Entry" sorted randomly
return contentManager.HqlQuery()
    .OrderBy(alias => alias.ContentItem(), sort => sort.Random())
    .Slice(0, count)
    .Select(item => item.Id);

And one more:

// retrieve <count> older items filtered by some restrictions, sorted randomly
return contentManager.HqlQuery()
    .Where(alias => alias.ContentPartRecord<PremiumPartRecord>(),
           expr => expr.Eq("Active", true))
    .Where(alias => alias.ContentPartRecord<PremiumPartRecord>(),
           expr => expr.Lt("BookingDateTime", recentDateTime))
    .OrderBy(alias => alias.ContentItem(), sort => sort.Random())
    .Slice(0, count)
    .Select(item => item.Id);

Even with the source code at hand, thanks to Orchard's MIT license, the implementation of this API in the over 600-line-long DefaultHqlQuery class is not always straight-forward to put into practice. Most of all, I was missing a unit test suite that would show off some of the core features of this API, and I'm honestly scratching my head over how someone could build such an API without unit tests!

Random() uses newid() : monitor the query performance

The above solution was easy enough to implement once I've got my head around Orchard's HQL query API. But be aware that this method uses the newid() approach (more here) and thus needs to a) generate a new id for each row in the given table and b) sort all of those ids to then retrieve the top N rows. Orchard has this detail neatly abstracted away in the ISqlStatementProvider implementation classes. Here's the relevant code from SqlServerStatementProvider (identical code is used for SqlCe):

public string GetStatement(string command) {
    switch (command) {
        case "random":
            return "newid()";
    }
    return null;
}

For completeness, here's the generated SQL from the first query above (with variable names shortened for better readability):

select content.Id as col_0_0_
from Test_ContentItemVersionRecord content
    inner join Test_ContentItemRecord itemRec
        on content.ContentItemRecord_id = itemRec.Id
    inner join Test_ContentTypeRecord typeRec
        on itemRec.ContentType_id = typeRec.Id
where ( typeRec.Name in ('Entry') )
    and content.Published = 1 order by newid()

This approach works well enough on small data sets but may become a problem if your data grows. So please keep a constant eye on all your random queries' performance.

Happy HQL-ing!

GIT tip: fast-forward local branch to the head of its remote tracking branch without checking it out

by Oliver 6. February 2014 00:23

Not much else to say than what's mentioned in the title. I come across the need to do so mostly before deployments from my machine where I want to update my local master branch to the HEAD of the remote master branch. Here's how to do that:

git fetch origin master:master

Thank you stackoverflow and Cupcake!

Productivity boost with MSBuild: use /maxcpucount

by Oliver 28. January 2014 21:24

This is embarrassing. For the n-th time during the past couple of years I've felt an unease waiting for our projects (read: solutions) to compile. I kept seeing this:


This is MSBuild using 1 (!), yes, one!, of the 8 CPU cores I have sitting in my machine to get my work done. What about the other 7? Why don't you use them, MSBuild? With that single core, my simple local build of our project discoverize currently takes around 36 seconds:


Tell MSBuild to use all cpu cores

Well, it's as easy as adding /m or /maxcpucount to your msbuild command line build to boost your build times:


Down to 8 seconds with 3 additional characters: [space]/m. That's easily a 4.5 times improvement!

Your mileage may vary

Of course, every project is different, so your speed increase might be higher or a lot lower than what I've seen. But it's an easy measure to get at least some improvement in build times with very little effort. Don't trust Visual Studio on that one, though – the solution builds slowly there, still.

For reference, let me tell you that the /maxcpucount switch can actually take a parameter value, like so: /maxcpucount:4. So if you have lots of other stuff going on in the background, or for whatever other reason, you can limit the number of CPUs used by MSBuild.

Props to the Orchard team for a highly parallelizable build

One of the specifics of the Orchard source code that's the base for discoverize is the very loose coupling between the 70+ projects in the solution. This allows MSBuild to distribute the compilation work to a high number of threads because there are almost no dependencies between the projects that MSBuild would have to respect. Great work!

Happy building!

About Oliver I build web applications using ASP.NET and have a passion for jQuery. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code. Love to spend time with my wife and our daughter.

About Anton I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.