SSL Certificate Jungle – Multiple Domains, Wildcards, and Both

by Oliver 10. September 2014 22:05

Recently, we decided to add https:// support to Camping.Info. Since we've been running our application servers behind an NGINX reverse proxy for a while now, the natural choice in our setup was to terminate the secured connections at the NGINX server, which has CPU usage values somewhere between 1% and 5%. This is also called SSL offloading, and it will allow us to keep all the SSL setup and potential runtime overhead off of our application servers.

Certificate Options

On Camping.Info, we serve almost all static content from a dedicated subdomain. Since we want to secure the whole site, we need valid SSL certificates for camping.info and all of its subdomains.

Using Separate Certificates

The first solution to the problem would be using one SSL certificate for www.camping.info and one (wildcard) certificate for *.camping.info that would secure all subdomains of camping.info. If we wanted to secure camping.info itself as well, we would need a third certificate just for that one domain, because wildcard certificates do not cover the parent domain without a subdomain.

Using a SAN (or UCC) Certificate

Subject Alternative Names (SAN) can help in this situation. The subjectAltName field of an SSL certificate can contain many domain names that will all be secured by that certificate. In our scenario we could have put camping.info, www.camping.info, and the more than 25 other subdomains in there. Unfortunately, that would complicate things for the use of new subdomains in the future, which would be missing from the list. A wildcard certificate really seems like the natural choice when you have more than 5 subdomains to secure or are expecting to have more of them in the near future.

Using a Wildcard Certificate with SANs

It turns out that wildcard certificates can well be combined with the usage of the subjectAltName field. Most CAs make you pay quite a lot for this combination, but we also found a quite affordable offer.

Multiple SSL Certificates on a Single NGINX Instance – Beware

Choosing the first certificate option, i.e. using at least two certificates, we would need to install both of them on our NGINX reverse proxy server. There is a good blog post on how to install multiple SSL certificates on NGINX – but be sure to read the comments as well. It turns out that the Server Name Indication (SNI) extension to the TLS protocol, which allows you to do so, will lock out clients that don't support SNI. The most prominent example of such a client is any version of Internet Explorer running on Windows XP, and even though Microsoft ended support for XP almost half a year ago, we're still seeing 11% of our Windows users running XP, accounting for 6% of our total traffic – a number we cannot ignore. To use separate SSL certificates on one NGINX instance without SNI, we would need two different IP addresses pointing to that same server so that each certificate could respond to requests on one of those addresses. This would both complicate our setup and incur higher monthly infrastructure costs that we'd gladly avoid.

Installing a Single SSL Certificate on NGINX

The option we finally chose is to use a wildcard SAN certificate with camping.info, www.camping.info, and *.camping.info as the different subject alternative names. Installing that into NGINX is straightforward if you know how. Happy SSL'ing!
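To see why a bare wildcard does not cover the parent domain, here's a minimal Python sketch of hostname matching against a certificate's names. The matcher is my own simplified illustration (real TLS clients follow RFC 6125, which is stricter), but it captures the relevant rule: a wildcard stands in for exactly one label.

```python
# Simplified hostname matching against a certificate's names (sketch only).
# A wildcard matches exactly one DNS label, so "*.camping.info" covers
# "www.camping.info" but neither "camping.info" nor "a.b.camping.info".
def matches(pattern: str, hostname: str) -> bool:
    p, h = pattern.lower().split("."), hostname.lower().split(".")
    if len(p) != len(h):
        return False  # wildcard never spans multiple labels
    return all(pp == "*" or pp == hp for pp, hp in zip(p, h))

def covered_by(cert_names, hostname):
    return any(matches(name, hostname) for name in cert_names)

# a wildcard SAN certificate listing the apex domain explicitly
san = ["camping.info", "*.camping.info"]
```

With only `*.camping.info` in the certificate, `camping.info` itself is left unprotected; adding it as a separate SAN entry (as in the list above) closes that gap.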

Remove The Padding From The Google Map API v3 fitBounds() Method

by Oliver 4. July 2014 21:56

In our customizable web portal platform discoverize we offer searching for results using a Google Map. In a recent iteration, we were trying to improve the overall map usage experience, and one thing we wanted to do was to zoom into the map as far as possible while still having all results appear on the map.

map.fitBounds(map.getBounds()) – does not do what you would like it to

The natural choice to achieve that would be to use the fitBounds method on the Map object with the bounding rectangle for the coordinates of all results. Unfortunately, though, Google chose to add a non-configurable 45px margin around that bounding box, so that in a lot of cases the map appears to be zoomed out further than necessary. That's why map.fitBounds(map.getBounds()); will zoom the map out!

Zoom in if you can

After a bit of searching I found a workaround in this Google Groups thread: map.fitBounds(map.getBounds()) zooms out map. The idea behind the solution provided there is to check whether the bounds to fit on the map wouldn't still fit using a higher zoom level, and if yes, apply that zoom level.
Since I had some problems with the code from the thread, I reworked it slightly and now have this:

function myFitBounds(myMap, bounds) {
    myMap.fitBounds(bounds); // calling fitBounds() here to center the map for the bounds

    var overlayHelper = new google.maps.OverlayView();
    overlayHelper.draw = function () {
        if (!this.ready) {
            var extraZoom = getExtraZoom(this.getProjection(), bounds, myMap.getBounds());
            if (extraZoom > 0) {
                myMap.setZoom(myMap.getZoom() + extraZoom);
            }
            this.ready = true;
            google.maps.event.trigger(this, 'ready');
        }
    };
    overlayHelper.setMap(myMap);
}

function getExtraZoom(projection, expectedBounds, actualBounds) {
    // in: LatLngBounds bounds -> out: height and width as a Point
    function getSizeInPixels(bounds) {
        var sw = projection.fromLatLngToContainerPixel(bounds.getSouthWest());
        var ne = projection.fromLatLngToContainerPixel(bounds.getNorthEast());
        return new google.maps.Point(Math.abs(sw.y - ne.y), Math.abs(sw.x - ne.x));
    }

    var expectedSize = getSizeInPixels(expectedBounds),
        actualSize = getSizeInPixels(actualBounds);

    if (Math.floor(expectedSize.x) == 0 || Math.floor(expectedSize.y) == 0) {
        return 0;
    }

    var qx = actualSize.x / expectedSize.x;
    var qy = actualSize.y / expectedSize.y;
    var min = Math.min(qx, qy);
    if (min < 1) {
        return 0;
    }

    return Math.floor(Math.log(min) / Math.LN2 /* = log2(min) */);
}

Replace map.fitBounds(bounds) with myFitBounds(map, bounds) – that's all you have to do to zoom in as far as possible while keeping your bounds on the map. Happy coding!
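The zoom arithmetic at the heart of getExtraZoom() is easy to check on its own. Here's the same calculation as a small Python sketch (my own illustration of the math, not part of the Maps API): each zoom level doubles the map's scale, so the number of extra levels is the floor of log2 of the smaller of the width and height ratios.

```python
import math

def extra_zoom(expected_w, expected_h, actual_w, actual_h):
    """How many zoom levels the expected bounds can still be zoomed in
    while fitting inside the actual viewport (all sizes in pixels)."""
    if expected_w == 0 or expected_h == 0:
        return 0
    # how many times the expected bounds fit into the viewport, per axis
    ratio = min(actual_w / expected_w, actual_h / expected_h)
    if ratio < 1:
        return 0  # already too big, don't zoom
    # each Google Maps zoom level doubles the scale, hence log base 2
    return math.floor(math.log2(ratio))
```

For example, bounds of 100x80 px inside a 640x480 px viewport fit 6 times over, which allows exactly 2 extra zoom levels (a factor of 4; a factor of 8 would no longer fit).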

Delete a Large Number of Rows from a Table in SQL Server

by Oliver 28. May 2014 12:09

Recently, we had to make some space available in one of our SQL Express instances that was getting close to its 10 GB limit of stored data, so I set out to delete some old data from two of our largest tables. One contained about half a million rows, the other a bit over 21 million.

Simple Deletion Would Take… Forever

The simplest SQL statement to delete all rows that were created before 2012 would be the following:

DELETE FROM [dbo].[Message] WHERE DateCreated < '20120101'

I can't even tell you how long this took, because after 14 minutes I just cancelled the query execution (which took another 7 minutes to finish). This was the table with fewer than 500,000 rows, of which we wanted to delete a bit more than 200,000.

Break Delete Operation Into Chunks

Searching for a solution to the problem, I came across this blog post on breaking large delete operations into chunks. It shows in good detail how the simple version above behaves compared to running a loop of a few tens of thousands of deletes per iteration. An interesting aspect I hadn't thought of at that point was the transaction log growth that can become a problem with large delete operations. Running a loop allows you to do a log backup (in full recovery mode) or a checkpoint (in simple mode) at the end of each iteration so that the log will grow much more slowly. Unfortunately, though, this didn't help with the execution time of the delete itself, as you can also see from the graphs presented in the above post.

Disable Those Indexes!

It turns out our [Message] table had six non-clustered indexes on it, all of which had to be written to for every row that was deleted. Even if those operations are fast, their processing time adds up over a few hundred thousand iterations. So let's turn them off! In fact, let's turn off only those that won't be used during our delete query. [We have one index on the DateCreated column which will be helpful during execution.]
This stackoverflow answer shows how to create some dynamic SQL to disable all non-clustered indexes in a database. I've modified it slightly to disable only the indexes of a given table:

Disable/Enable Table Indexes

DECLARE @table AS VARCHAR(MAX) = 'Message';
DECLARE @sqlDisable AS VARCHAR(MAX) = '';
DECLARE @sqlEnable AS VARCHAR(MAX) = '';

SELECT
    @sqlDisable = @sqlDisable + 'ALTER INDEX ' + idx.name + ' ON '
                    + obj.name + ' DISABLE;' + CHAR(13) + CHAR(10),
    @sqlEnable = @sqlEnable + 'ALTER INDEX ' + idx.name + ' ON '
                    + obj.name + ' REBUILD;' + CHAR(13) + CHAR(10)
FROM sys.indexes idx
JOIN sys.objects obj
    ON idx.object_id = obj.object_id
WHERE idx.type_desc = 'NONCLUSTERED'
    AND obj.type_desc = 'USER_TABLE'
    AND obj.name = @table;

RAISERROR(@sqlDisable, 0, 1) WITH NOWAIT;
RAISERROR(@sqlEnable, 0, 1) WITH NOWAIT;
--EXEC(@sqlDisable);
--EXEC(@sqlEnable);

Now, with those indexes disabled, the simple delete operation took a lot less than a minute! Even in the case of our 21 million rows table, deleting 7 million rows took only 1:02 on my machine. Of course, after deleting the unwanted rows, you need to re-enable the indexes, which took another minute, but all in all I'm happy with the result.

Copy Data to New Table and Drop Old Table

One other way of deleting rows that I've used in combination with changing the table schema at the same time is the following:

- copy all the rows you want to keep into a temporary table (the schema of which I modified to meet our new needs)
- delete the original table
- rename the temporary table to the original table's name
- recreate all indexes you had defined before

This is basically what SSMS generates for you when you change the schema of a table, except for the indexes – you have to recreate them yourself. As you can imagine, this approach becomes faster and leaves a smaller transaction log footprint the more data you delete. It won't have any benefit if you delete less than half of the table's rows.

Choose the right tool for the job

There are quite a few other approaches and tips out there on how to speed up your deletion process. Which of them will actually get your deletion job done faster depends a lot on your concrete situation. I had to experiment quite a bit to find the sweet spot, but now that I've seen a few approaches I'm able to make a better decision in the future.
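For illustration, the chunked-delete pattern described above can be sketched outside of SQL Server. Here's a small Python example against an in-memory SQLite database (the table, dates, and batch size are made up for the demo; on SQL Server you would additionally run a log backup or checkpoint after each batch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Message (Id INTEGER PRIMARY KEY, DateCreated TEXT)")
# 950 old rows we want gone, 50 recent rows we want to keep
conn.executemany("INSERT INTO Message (DateCreated) VALUES (?)",
                 [("2011-06-01",)] * 950 + [("2013-06-01",)] * 50)

batch_size = 100
while True:
    # delete one small batch of old rows per iteration instead of all at once
    cur = conn.execute(
        "DELETE FROM Message WHERE rowid IN ("
        "  SELECT rowid FROM Message WHERE DateCreated < '2012-01-01' LIMIT ?)",
        (batch_size,))
    conn.commit()  # end the transaction: keeps each unit of work (and the log) small
    if cur.rowcount < batch_size:
        break  # last, partial batch has been deleted

remaining = conn.execute("SELECT COUNT(*) FROM Message").fetchone()[0]
```

Each iteration is its own short transaction, which is exactly what keeps the transaction log from ballooning during a huge delete.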

Retrieving random content items (rows) from a SQL database in Orchard with HQL queries

by Oliver 22. February 2014 12:37

We're adding some Premium functionality to discoverize right now, and part of that is the so-called Premium block which is a showcase of six Premium entries. Now, choosing the right entries for that block is the interesting part: as long as we don't have six Premium entries to show, we want to fill up the left over space with some random entries that haven't booked our Premium feature, yet.

Get random rows from SQL database

There are plenty of articles and stackoverflow discussions on the topic of how to (quickly) retrieve some random rows from a SQL database. I wanted to get something to work simply and quickly, not necessarily high performance. Incorporating any kind of hand-crafted SQL query was really the last option since it would mean to get hold of an ISessionLocator instance to get at the underlying NHibernate ISession to then create a custom SQL query and execute it. Not my favorite path, really. Luckily, the IContentManager interface contains the method HqlQuery which returns an IHqlQuery containing these interesting details:

/// <summary>
/// Adds a join to a specific relationship.
/// </summary>
/// <param name="alias">An expression pointing to the joined relationship.</param>
/// <param name="order">An order expression.</param>
IHqlQuery OrderBy(Action<IAliasFactory> alias, Action<IHqlSortFactory> order);

…and IHqlSortFactory contains a Random() method. This finally got me going!

HQL queries in Orchard

HQL queries are a great feature in (N)Hibernate that allow you to write almost-SQL queries against your domain models. I won't go into further detail here, but be sure to digest that! Orchard's IContentManager interface contains the method HqlQuery() to generate a new HQL query. Unfortunately, there's almost no usage of this feature throughout the whole Orchard solution.
So let me document here how I used the HqlQuery to retrieve some random entries from our DB:

// retrieve <count> items of type "Entry" sorted randomly
return contentManager.HqlQuery()
    .ForType("Entry")
    .OrderBy(alias => alias.ContentItem(), sort => sort.Random())
    .Slice(0, count)
    .Select(item => item.Id);

And one more:

// retrieve <count> older items filtered by some restrictions, sorted randomly
return contentManager.HqlQuery()
    .ForPart<PremiumPart>()
    .Where(alias => alias.ContentPartRecord<PremiumPartRecord>(),
           expr => expr.Eq("Active", true))
    .Where(alias => alias.ContentPartRecord<PremiumPartRecord>(),
           expr => expr.Lt("BookingDateTime", recentDateTime))
    .OrderBy(alias => alias.ContentItem(), sort => sort.Random())
    .Slice(0, count)
    .Select(item => item.Id);

Even with the source code at hand – thanks to Orchard's MIT license – the implementation of this API in the over 600 lines long DefaultHqlQuery is not always straightforward to put into practice. Most of all, I was missing a unit test suite that would show off some of the core features of this API, and I'm honestly scratching my head at how someone could build such an API without unit tests!

Random() uses newid(): monitor the query performance

The above solution was easy enough to implement once I'd got my head around Orchard's HQL query API. But be aware that this method uses the newid() approach (more here) and thus needs to a) generate a new id for each row in the given table and b) sort all of those ids to then retrieve the top N rows. Orchard has this detail neatly abstracted away in the ISqlStatementProvider implementation classes.
Here's the relevant code from SqlServerStatementProvider (identical code is used for SqlCe):

public string GetStatement(string command) {
    switch (command) {
        case "random":
            return "newid()";
    }
    return null;
}

For completeness, here's the generated SQL from the first query above (with variable names shortened for better readability):

select content.Id as col_0_0_
from Test_ContentItemVersionRecord content
    inner join Test_ContentItemRecord itemRec
        on content.ContentItemRecord_id = itemRec.Id
    inner join Test_ContentTypeRecord typeRec
        on itemRec.ContentType_id = typeRec.Id
where ( typeRec.Name in ('Entry') )
    and content.Published = 1
order by newid()
OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY

This approach works well enough on small data sets but may become a problem if your data grows. So please keep a constant eye on all your random queries' performance. Happy HQL-ing!
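The generated SQL above follows the generic "order by a per-row random value, take the top N" pattern. Here's the same idea in a small Python sketch against SQLite, whose RANDOM() plays the role of SQL Server's newid() (the table and row count are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Entry (Id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO Entry (Id) VALUES (?)",
                 [(i,) for i in range(100)])

# every row gets a random sort key, ALL rows are sorted, and only the top 3
# are kept -- which is exactly why this pattern gets expensive on large tables
ids = [row[0] for row in conn.execute(
    "SELECT Id FROM Entry ORDER BY RANDOM() LIMIT 3")]
```

The full sort over the whole table is the cost to watch: it is O(n log n) in the number of rows, no matter how few rows you actually fetch.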

IRIs and URIs; or: Internet Explorer does not decode encoded non-ASCII characters in its address bar

by Oliver 24. October 2013 23:03

Some facts about IE and its address bar

IE can display non-ASCII characters in the address bar if you put them there by hand or click a link that contains them in unencoded form, e.g. one ending in …/fürther-wassersportclub. IE sends a request for the correctly percent-encoded URL, i.e. …/f%C3%BCrther-wassersportclub. Now, if you're in IE and click on a link to that encoded URL, IE will not decode it back to the unencoded version – it will just keep the encoded URL in the address bar. If, instead, you're reading this page in FF or Chrome, the encoded URL will be gracefully decoded into its unencoded counterpart.

URIs and IRIs

Disclaimer

First off, let me tell you that I'm by no means an expert in this field. I'm trying to get my head around URIs, IRIs, encodings, and beautiful web sites and URLs just like probably half of the web developer world out there. So please, verify what you read here and correct me where I am mistaken.

What the RFCs have to say

By today, more than a handful of RFC documents have been published concerning URIs:

RFC 3986 – Uniform Resource Identifier (URI): Generic Syntax, which is the current Internet Standard (using IETF vocabulary) on URIs; it updates RFC 1738 and obsoletes RFCs 2732, 2396, and 1808
RFC 3987 – Internationalized Resource Identifiers (IRIs), which adds Unicode support for resource identifiers

RFC 3986 states the following about a URI: "A URI is an identifier consisting of a sequence of characters matching the syntax rule named <URI> in Section 3." See the examples section, or refer to Appendix A for the ABNF for URIs. RFC 3987 states the following about an IRI: "An IRI is a sequence of characters from the Universal Character Set (Unicode/ISO 10646)." In short, IRIs may contain Unicode characters while URIs must not. Moreover, every URI is a valid IRI and every IRI can be encoded into a valid URI. Let's see an example again:

IRI: …/fürther-wassersportclub
URI: …/f%C3%BCrther-wassersportclub

A great read on IRIs and their relationship to URIs can be found in an article by the W3C.
Support for IRIs

IRIs are not supported in HTTP as per RFC 2616. This implies that before requesting a resource identified by an IRI over HTTP, it must be encoded as a URI first. This is what all mainstream browsers seem to do correctly – when you click on a link ending in …/fürther-wassersportclub and inspect the request sent from your browser, you will see that it actually requests the percent-encoded form. HTML5 supports IRIs as URLs.

Use IRIs today

It looks like you can safely use IRIs in your HTML pages today already. And doing so will actually persuade IE into displaying the correct non-ASCII characters. So why don't we?
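The IRI-to-URI step the browsers perform is plain UTF-8 percent-encoding. In Python it looks like this (a sketch of the principle, not of any browser's actual code path):

```python
from urllib.parse import quote, unquote

iri_path = "fürther-wassersportclub"

# encode: each non-ASCII character becomes the %-escaped bytes of its
# UTF-8 representation ("ü" -> 0xC3 0xBC -> "%C3%BC")
uri_path = quote(iri_path)

# decode: what Firefox and Chrome do before displaying the address bar
roundtripped = unquote(uri_path)
```

The round trip is lossless, which is why FF and Chrome can safely show the pretty unencoded form while still sending the encoded one over the wire.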

Setting up NGINX as Reverse Proxy for Camping.Info

by Oliver 24. June 2013 23:08

Fixing the painful "24: Too many open files" error in NGINX

When this error appears in your /var/log/nginx/error.log file, you really want to do something about it. It basically tells you that some of your users are not being served content but are instead receiving HTTP 500 errors generated by NGINX. Not a good thing!

To investigate the open file descriptors on your Linux box (we're running NGINX on Ubuntu 12.04), I followed the advice from this post: Linux: Find Out How Many File Descriptors Are Being Used. Then, to fix the low limits I found – 1024 and 4096 for the soft and hard limits, respectively – the post Set ulimit parameters on ubuntu provided a good walkthrough to changing those limits persistently, i.e. even after a reboot. But I somehow had the feeling that in the case of NGINX there had to be a simpler solution.

Turns out, there is. Let me introduce you to: worker_rlimit_nofile

This thread in the NGINX forums contained the hint I needed: Re: Handling nginx's too many open files even I have the correct ulimit. I had actually posted to that thread before and received the helpful answer over a month ago, but I somehow hadn't got around to implementing it. Until today. So here's what you need to do: set worker_rlimit_nofile (in the main context) to something significant (I used 65536 = 2^16) instead of the default limits of 1024 (soft) and 4096 (hard). Also set worker_connections to the same number to allow for a lot of parallel connections, both from clients and to upstream servers. Your nginx.conf file should now contain these lines:

user www-data;
worker_processes 1; # thanks to a helpful article for finding out how many processor cores our VPS is using
worker_rlimit_nofile 65536;
pid /var/run/nginx.pid;

events {
    worker_connections 65536; ## default: 1024
    # multi_accept on;
}

Reload NGINX after configuration change

In a shell, tell NGINX to reload the configuration. You don't need to restart the whole process to get a configuration change live. Just run:

/usr/sbin/nginx -s reload

or

service nginx reload

Now enjoy thousands and thousands of parallel connections to your proxy server :-)
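If you want to double-check which limits a process actually inherited (rather than what nginx.conf asks for), the soft and hard file-descriptor limits can be read programmatically. Here's a quick Python sketch of the same RLIMIT_NOFILE limit that worker_rlimit_nofile raises for the NGINX workers:

```python
import resource

# RLIMIT_NOFILE: the per-process cap on open file descriptors --
# the same limit that nginx's worker_rlimit_nofile overrides for its workers
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")
```

Running this on a stock Ubuntu box should show the low defaults (1024/4096) mentioned above; for a running NGINX worker you can see the effective values in /proc/&lt;pid&gt;/limits.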

Optimize Images for Your Website

by Oliver 17. June 2013 13:59

This is just a short post to draw your attention to a sweet tool I've just discovered: PNGGauntlet. It runs on Windows using the .NET 4 framework and is as easy to use as you could possibly wish. Also: it's completely free to use.

Convert Your Existing PNGs

For starters, we'll just convert some existing PNGs – that can't really do any harm. In the Open File dialog, there's an option to filter for only .png files, and you can choose many of them at once. If you provide an output directory, the optimized files will be written to that destination. But: the tool also has the option to overwrite the original files, which is awesome if you use some kind of source control (and thus have a backup) and just want to get the job done. During my first run, using the 8 processing threads my CPU has to offer, I got savings from 3% to 27%. PNGGauntlet also tells me that in total I saved 4.52 KB. If those were all the images on your web site, that would be a not so bad improvement, especially when you get it by investing about 2 min of your time and no extra expenses!

Real Savings

Running PNGGauntlet on the sprites that we use for Camping.Info, we were really surprised: out of 172 KB it saved us over 31%, a whole 54 KB! Now that's an improvement that will already be noticeable on a slightly slower connection. We'll definitely check the rest of our images for more savings.

Convert Other Image Formats

You can also choose to convert your images to the PNG format if you're in the mood. I tried converting all GIFs in the Orchard CMS admin theme to PNGs and went from a total of 24 KB for 20 files to less than 17 KB with no loss of quality – an over 30% saving! Just beware that you'll need to change the file references in your project to pick up the new PNGs.

Roundup

Easy, fast and cheap (as in free) image optimization doesn't have to be magic anymore – today anyone can do it. Check out PNGGauntlet to try for yourself. There's really no excuse not to!

Orchard Harvest: Responsive Design

by Oliver 13. June 2013 15:30

These are just quick notes from one of the sessions at the Orchard Harvest conference currently taking place in Amsterdam.

Fluid Grids
- bootstrap-responsive
- SimpleGrid
- 1140Grid
- FooTable: hides columns, shows content in expandable elements

Fluid Fonts
- FitText: makes text fit into header elements
- SlabText: adjusts text sizes to make the page look beautiful

Toggle navigation
- using Twitter Bootstrap: put .nav-collapse and .collapse on the nav element

Frameworks
- Foundation (uses SASS)
- INK
- Pure by Yahoo: new, small – 5KB

Revoke Access to Applications using Google OpenID

by Oliver 9. May 2013 12:20

During automatic frontend testing, some of our tests recently broke – the ones trying to connect a Google account to our new TeamReview application using OpenID. Those tests used to make sure that the checkbox to remember my choice on Google's confirmation page was unchecked. I'd like to show a screenshot of what it used to look like, but unfortunately, it seems that old page has died. On the new confirmation page, no such checkbox is available. This means that during a test run I can no longer temporarily grant access to our application and later revoke that access by simply deleting my Google cookies. I now have to go to Google's Authorizing Applications & Sites page and revoke access manually. Just for the record.

HTTP Error 500.19 - Internal Server Error: 0x8007007b Cannot read configuration file

by Oliver 20. March 2013 15:24

While setting up a specification tests project for our new TeamReview tool, I was facing an HTTP 500.19 error when hosting our site in IIS Express. There are lots of questions on stackoverflow concerning this error, and Microsoft has a whole page on it, but there is a whole bunch of sub-errors that this error covers.

Error 0x8007007b: Cannot read configuration file

Unfortunately, none of the above-mentioned links contained or solved the specific error code I was seeing:

Error Code: 0x8007007b
Config Error: Cannot read configuration file
Config File: \\?\C:\Projects\_teamaton\teamreview\TeamReview.Specs\bin\Debug\..\..\..\TeamReview.Web\web.config

After some reading, trying, and fiddling, it appeared to me that maybe the path to the config file had somehow messed up IIS Express. I admit that using the parent directory dots was at least a bit unusual. But it came from my test harness code, where I wanted to use relative paths and used Path.Combine() to do that:

var webPath = Path.Combine(Environment.CurrentDirectory, "..", "..", "..", "TeamReview.Web");

Pitfall: .. in path

Well, it turns out IIS Express didn't like it. Once I called it with a cleaned-up path string, everything just worked:

"C:\Program Files (x86)\IIS Express\iisexpress.exe" /path:"C:\Projects\_teamaton\teamreview\TeamReview.Web" /port:12345 /systray:false

So, watch out for your physical path values when using IIS Express!

Use DirectoryInfo to navigate up your directory tree

To get the correct path without using absolute paths but also avoiding the .., I used the DirectoryInfo class:

var webPath = Path.Combine(
    new DirectoryInfo(Environment.CurrentDirectory).Parent.Parent.Parent.FullName,
    "TeamReview.Web");
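The underlying pitfall – a logically valid path that some consumers won't accept – is easy to reproduce. Here's a Python sketch of the two ways of building the path (ntpath stands in for Windows path semantics so the sketch runs anywhere; the directory names are the ones from the post):

```python
import ntpath

base = r"C:\Projects\_teamaton\teamreview\TeamReview.Specs\bin\Debug"

# what Path.Combine(..., "..", "..", "..") produced: valid, but full of dots,
# and exactly what IIS Express choked on
dotted = ntpath.join(base, "..", "..", "..", "TeamReview.Web")

# what walking up with DirectoryInfo.Parent gives you: a clean, dot-free path
clean = ntpath.normpath(dotted)
```

The two strings point at the same directory; the difference is purely lexical, which is why normalizing before handing the path to IIS Express fixes the error.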

About Oliver

I build web applications using ASP.NET and have a passion for JavaScript. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code. Love to spend time with my wife and our children. My profile on Stack Exchange, a network of free, community-driven Q&A sites.

About Anton

I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.