Remove The Padding From The Google Map API v3 fitBounds() Method

by Oliver 4. July 2014 21:56

In our customizable web portal platform discoverize we offer searching for results using a Google Map. In a recent iteration, we were trying to improve the overall map usage experience, and one thing we wanted to do was to zoom into the map as far as possible while still keeping all results visible on it.

map.fitBounds(map.getBounds()); – does not do what you would like it to

The natural choice to achieve that would be to use the fitBounds method on the Map object, passing in the bounding rectangle for the coordinates of all results. Unfortunately, though, Google chose to add a non-configurable 45px margin around that bounding box, so that in a lot of cases the map ends up zoomed out further than necessary. That's also why map.fitBounds(map.getBounds()); will actually zoom the map out!

Zoom in if you can

After a bit of searching I found a workaround in this Google Groups thread: map.fitBounds(map.getBounds()) zooms out map. The idea behind the solution provided there is to check whether the bounds would still fit on the map at a higher zoom level, and if so, to apply that zoom level. Since I had some problems with the code from the thread, I reworked it slightly and now have this:

function myFitBounds(myMap, bounds) {
    myMap.fitBounds(bounds); // calling fitBounds() here to center the map for the bounds

    var overlayHelper = new google.maps.OverlayView();
    overlayHelper.draw = function () {
        if (!this.ready) {
            var extraZoom = getExtraZoom(this.getProjection(), bounds, myMap.getBounds());
            if (extraZoom > 0) {
                myMap.setZoom(myMap.getZoom() + extraZoom);
            }
            this.ready = true;
            google.maps.event.trigger(this, 'ready');
        }
    };
    overlayHelper.setMap(myMap);
}

function getExtraZoom(projection, expectedBounds, actualBounds) {

    // in: LatLngBounds bounds -> out: height and width as a Point
    function getSizeInPixels(bounds) {
        var sw = projection.fromLatLngToContainerPixel(bounds.getSouthWest());
        var ne = projection.fromLatLngToContainerPixel(bounds.getNorthEast());
        return new google.maps.Point(Math.abs(sw.y - ne.y), Math.abs(sw.x - ne.x));
    }

    var expectedSize = getSizeInPixels(expectedBounds),
        actualSize = getSizeInPixels(actualBounds);

    if (Math.floor(expectedSize.x) == 0 || Math.floor(expectedSize.y) == 0) {
        return 0;
    }

    var qx = actualSize.x / expectedSize.x;
    var qy = actualSize.y / expectedSize.y;
    var min = Math.min(qx, qy);

    if (min < 1) {
        return 0;
    }

    return Math.floor(Math.log(min) / Math.LN2 /* = log2(min) */);
}

Replace map.fitBounds(bounds) with myFitBounds(map, bounds)

That's all you have to do to zoom in as far as possible while keeping your bounds on the map.
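
For context, here is a minimal usage sketch; the positions array and the map variable are placeholders for whatever holds your result coordinates and your map instance:

// collect the coordinates of all results into one bounding box
var bounds = new google.maps.LatLngBounds();
positions.forEach(function (latLng) {
    bounds.extend(latLng);
});

// instead of map.fitBounds(bounds), zoom in as far as possible
myFitBounds(map, bounds);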

Happy coding!

Learning PowerShell

by Oliver 30. May 2014 21:34

Today, I finally decided that I want to get to grips with PowerShell and have it available in my toolbox for those everyday developer tasks. For a fresh start, I wanted to make sure I'm running the latest and greatest of PowerShell, but how do I find out which version I have installed?

What version am I running?

Just fire up a PowerShell instance and type $psversiontable or $host.version:

PS C:\Windows\system32> $psversiontable

Name                           Value
----                           -----
PSVersion                      4.0
WSManStackVersion              3.0
SerializationVersion           1.1.0.1
CLRVersion                     4.0.30319.18444
BuildVersion                   6.3.9600.16406
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0}
PSRemotingProtocolVersion      2.2

PS C:\Windows\system32> $host.version

Major  Minor  Build  Revision
-----  -----  -----  --------
4      0      -1     -1

Actually, when I ran this I didn't have the 4.0 version installed yet. So where did I get it?

How to install PowerShell 4.0 (the newest version as of mid-2014)?

Go here and choose the right link for you: How to Install Windows PowerShell 4.0. That's it.

Make use of great tooling: use the ISE

Last but not least, especially for those of you who, like me, are just getting started, make sure you're using the great Integrated Scripting Environment (ISE) that comes bundled with PowerShell:

[screenshot: the PowerShell ISE]

Now, get scripting!

Delete a Large Number of Rows from a Table in SQL Server

by Oliver 28. May 2014 12:09

Recently, we had to make some space available in one of our SQL Express instances that was getting close to its 10 GB limit of stored data, so I set out to delete some old data from two of our largest tables. One contained about half a million rows, the other a bit over 21 million.

Simple Deletion Would Take… Forever

The simplest SQL statement to delete all rows that were created before 2012 would be the following:

DELETE FROM [dbo].[Message] WHERE DateCreated < '20120101'

I can't even tell you how long this took because at 14 minutes I just cancelled the query execution (which took another 7 minutes to finish). This was the table with less than 500,000 rows where we wanted to delete a bit more than 200,000 rows.

Break Delete Operation Into Chunks

Searching for a solution to the problem, I came across this blog post on breaking large delete operations into chunks. It shows in good detail how the simple version above compares to running a loop that deletes a few tens of thousands of rows per iteration. An interesting aspect I hadn't thought of at that point was the transaction log growth that can become a problem with large delete operations. Running a loop allows you to do a log backup (in full recovery mode) or a checkpoint (in simple mode) at the end of each iteration so that the log will grow much more slowly.
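
For illustration, here's a minimal sketch of such a chunked delete against our [Message] table – the batch size of 50,000 is an arbitrary choice, and the CHECKPOINT assumes the database runs in simple recovery mode:

DECLARE @rowsDeleted INT = 1;

WHILE @rowsDeleted > 0
BEGIN
    DELETE TOP (50000) FROM [dbo].[Message]
    WHERE DateCreated < '20120101';

    SET @rowsDeleted = @@ROWCOUNT;

    -- keep the transaction log in check (simple recovery mode);
    -- in full recovery mode, take a log backup here instead
    CHECKPOINT;
END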

Unfortunately, though, chunking alone didn't help with the execution time of the delete itself, as you can also see from the graphs presented in the above post.

Disable Those Indexes!

It turns out, our [Message] table had six non-clustered indexes on it which all had to be written to for every row that was deleted. Even if those operations are fast, their processing time adds up over a few hundred thousand rows. So let's turn them off! In fact, let's turn off only those that won't be used during our delete query. [We have one index on the DateCreated column which will be helpful during execution.]

This stackoverflow answer shows how to create some dynamic SQL to disable all non-clustered indexes in a database. I've modified it slightly to disable only the indexes of a given table:

Disable/Enable Table Indexes
DECLARE @table AS VARCHAR(MAX) = 'Message';
DECLARE @sqlDisable AS VARCHAR(MAX) = '';
DECLARE @sqlEnable AS VARCHAR(MAX) = '';

SELECT
    @sqlDisable = @sqlDisable + 'ALTER INDEX ' + idx.name + ' ON '
                    + obj.name + ' DISABLE;' + CHAR(13) + CHAR(10),
    @sqlEnable = @sqlEnable + 'ALTER INDEX ' + idx.name + ' ON '
                    + obj.name + ' REBUILD;' + CHAR(13) + CHAR(10)
FROM sys.indexes idx
JOIN sys.objects obj
    ON idx.object_id = obj.object_id
WHERE idx.type_desc = 'NONCLUSTERED'
    AND obj.type_desc = 'USER_TABLE'
    AND obj.name = @table;

RAISERROR(@sqlDisable, 0, 1) WITH NOWAIT;
RAISERROR(@sqlEnable, 0, 1) WITH NOWAIT;
--EXEC(@sqlDisable);
--EXEC(@sqlEnable);

Now, with those indexes disabled, the simple delete operation took a lot less than a minute! Even in the case of our 21 million rows table, deleting 7 million rows took only 1:02 minutes on my machine. Of course, after deleting the unwanted rows, you need to rebuild the indexes, which took another minute, but all in all I'm happy with the result.
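
Putting the pieces together, the whole operation looks roughly like this – a sketch based on the script above (in practice you may want to exclude the index on DateCreated from the disable statement, as mentioned earlier):

-- 1. disable the non-clustered indexes on [Message]
EXEC(@sqlDisable);

-- 2. delete the old rows
DELETE FROM [dbo].[Message] WHERE DateCreated < '20120101';

-- 3. rebuild the indexes afterwards
EXEC(@sqlEnable);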

Copy Data to New Table and Drop Old Table

One other way of deleting rows, which I've used when changing the table schema at the same time, is the following:

  • use a temporary table into which you copy all the rows you want to keep (the schema of which I modified to meet our new needs)
  • delete the original table
  • rename the temporary table to the original table's name
  • recreate all indexes you had defined before

This is basically what SSMS generates for you when you change the schema of a table, except for the indexes – you have to recreate them yourself.
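
In T-SQL, a rough sketch of that sequence could look like this – the column names and the index definition are made up for illustration:

-- 1. copy the rows you want to keep into a new table (optionally with an adjusted schema)
SELECT Id, DateCreated, Body
INTO [dbo].[Message_New]
FROM [dbo].[Message]
WHERE DateCreated >= '20120101';

-- 2. drop the original table
DROP TABLE [dbo].[Message];

-- 3. rename the new table to the original table's name
EXEC sp_rename 'dbo.Message_New', 'Message';

-- 4. recreate all indexes you had defined before
CREATE NONCLUSTERED INDEX IX_Message_DateCreated
    ON [dbo].[Message] (DateCreated);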

As you can imagine, this approach becomes faster and leaves a smaller transaction log footprint the more data you have to delete. It won't have any benefit if you delete less than half of the table's rows.

Choose the right tool for the job

There are quite a few other approaches and tips out there on how to speed up your deletion process. Which of those will actually help you get your deletion job done faster depends a lot on your concrete situation. I had to experiment quite a bit to find the sweet spot, but now that I've seen a few approaches I'll be able to make a better decision in the future.

Encrypting Passwords and Keys in web.config

by Anton 19. April 2014 19:18

We wanted to encrypt the passwords that we store in the web.config of our web application. Most of the World Wide Web pointed to the use of aspnet_regiis.exe: http://msdn.microsoft.com/en-us/library/53tyfkaw(v=vs.100).aspx. We want to use the encrypted web.config on a few machines, so we also need to import the decryption keys on those machines.

I pretty much used the walkthrough provided by Microsoft.

  1. Create a custom RSA key container: aspnet_regiis -pc "CampingInfo" -exp
  2. Grant the application access to the keys: aspnet_regiis -pa "CampingInfo" "NT AUTHORITY\NETWORK SERVICE". The ASP.NET identity can be found by creating and calling a page that contains Response.Write(System.Security.Principal.WindowsIdentity.GetCurrent().Name);
  3. Add a configuration provider to the web.config:

     <configuration>
       <configProtectedData>
         <providers>
           <add name="CampingInfoProvider"
                type="System.Configuration.RsaProtectedConfigurationProvider"
                keyContainerName="CampingInfo"
                useMachineContainer="true" />
         </providers>
       </configProtectedData>
       ...
     </configuration>

  4. Put the settings to be encrypted into a custom section in the web.config:

     <configuration>
       <configSections>
         <section name="secureAppSettings" type="System.Configuration.NameValueSectionHandler" />
       </configSections>
       <secureAppSettings>
         <add key="somepassword" value="xyz" />
       </secureAppSettings>
       ...
     </configuration>

  5. Encrypt the custom section: aspnet_regiis -pef "secureAppSettings" "C:\<path to directory where web.config resides>" -prov "CampingInfo"
  6. Export the RSA key container: aspnet_regiis -px "CampingInfo" "c:\keys.xml" -pri
  7. Copy the XML file to a second server that runs the same application (with the same, now partially encrypted web.config).
  8. Import the RSA key container on the second server: aspnet_regiis -pi "CampingInfo" "c:\keys.xml"
  9. Grant the application on the second server access to the keys as in step 2. (The identity may be different there.)
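
To read the encrypted values, nothing special is needed in the application code – ASP.NET decrypts protected sections transparently when they are accessed. A minimal sketch, using the section and key names from the example above:

using System.Collections.Specialized;
using System.Configuration;

public static class SecureSettings
{
    public static string GetPassword()
    {
        // NameValueSectionHandler returns the section as a NameValueCollection;
        // decryption happens transparently because the section is protected
        var section = (NameValueCollection)ConfigurationManager.GetSection("secureAppSettings");
        return section["somepassword"];
    }
}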


Retrieving random content items (rows) from a SQL database in Orchard with HQL queries

by Oliver 22. February 2014 12:37

We're adding some Premium functionality to discoverize right now, and part of that is the so-called Premium block which is a showcase of six Premium entries. Now, choosing the right entries for that block is the interesting part: as long as we don't have six Premium entries to show, we want to fill up the leftover space with some random entries that haven't booked our Premium feature yet.

Get random rows from SQL database

There are plenty of articles and stackoverflow discussions on the topic of how to (quickly) retrieve some random rows from a SQL database. I wanted to get something working simply and quickly, not necessarily with high performance. Incorporating any kind of hand-crafted SQL query was really the last option, since it would have meant getting hold of an ISessionLocator instance to reach the underlying NHibernate ISession, then creating a custom SQL query and executing it. Not my favorite path, really. Luckily, the IContentManager interface contains the method HqlQuery which returns an IHqlQuery containing these interesting details:

/// <summary>
/// Adds a join to a specific relationship.
/// </summary>
/// <param name="alias">An expression pointing to the joined relationship.</param>
/// <param name="order">An order expression.</param>
IHqlQuery OrderBy(Action<IAliasFactory> alias, Action<IHqlSortFactory> order);

…and IHqlSortFactory contains a Random() method. This finally got me going!

HQL queries in Orchard

HQL queries are a great feature in (N)Hibernate that allow you to write almost-SQL queries against your domain models. I won't go into further detail here, but be sure to digest that!

Orchard's IContentManager interface contains the method HqlQuery() to generate a new HQL query. Unfortunately, there's almost no usage of this feature throughout the whole Orchard solution. So let me document here how I used the HqlQuery to retrieve some random entries from our DB:

// retrieve count items of type "Entry" sorted randomly
return contentManager.HqlQuery()
    .ForType("Entry")
    .OrderBy(alias => alias.ContentItem(), sort => sort.Random())
    .Slice(0, count)
    .Select(item => item.Id);

And one more:

// retrieve <count> older items filtered by some restrictions, sorted randomly
return contentManager.HqlQuery()
    .ForPart<PremiumPart>()
    .Where(alias => alias.ContentPartRecord<PremiumPartRecord>(),
           expr => expr.Eq("Active", true))
    .Where(alias => alias.ContentPartRecord<PremiumPartRecord>(),
           expr => expr.Lt("BookingDateTime", recentDateTime))
    .OrderBy(alias => alias.ContentItem(), sort => sort.Random())
    .Slice(0, count)
    .Select(item => item.Id);

Even with the source code at hand, thanks to Orchard's MIT license, the implementation of this API in the over 600 lines of DefaultHqlQuery is not always straight-forward to put into practice. Most of all, I was missing a unit test suite that would show off some of the core features of this API, and I'm honestly scratching my head over how someone could build such an API without unit tests!

Random() uses newid() : monitor the query performance

The above solution was easy enough to implement once I had got my head around Orchard's HQL query API. But be aware that this method uses the newid() approach (more here) and thus needs to a) generate a new id for each row in the given table and b) sort all of those ids to then retrieve the top N rows. Orchard has this detail neatly abstracted away in the ISqlStatementProvider implementation classes. Here's the relevant code from SqlServerStatementProvider (identical code is used for SqlCe):

public string GetStatement(string command) {
    switch (command) {
        case "random":
            return "newid()";
    }
    return null;
}

For completeness, here's the generated SQL from the first query above (with variable names shortened for better readability):

select content.Id as col_0_0_
from Test_ContentItemVersionRecord content
    inner join Test_ContentItemRecord itemRec
        on content.ContentItemRecord_id = itemRec.Id
    inner join Test_ContentTypeRecord typeRec
        on itemRec.ContentType_id = typeRec.Id
where ( typeRec.Name in ('Entry') )
    and content.Published = 1 order by newid()
OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY

This approach works well enough on small data sets but may become a problem if your data grows. So please keep a constant eye on all your random queries' performance.

Happy HQL-ing!

GIT tip: fast-forward local branch to the head of its remote tracking branch without checking it out

by Oliver 6. February 2014 00:23

Not much else to say than what's mentioned in the title. I come across the need to do this mostly before deployments from my machine, when I want to update my local master branch to the HEAD of the remote master branch. Here's how to do that:

git fetch origin master:master
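
In its general form – assuming your remote is called origin – the same refspec trick fast-forwards any local branch that isn't currently checked out:

git fetch origin <remote-branch>:<local-branch>

If the local branch cannot be fast-forwarded, git rejects the update instead of overwriting your work.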

Thank you stackoverflow and Cupcake!

Productivity boost with MSBuild: use /maxcpucount

by Oliver 28. January 2014 21:24

This is embarrassing. For the n-th time during the past couple of years I've felt uneasy waiting for our projects (read: solutions) to compile. I kept seeing this:

[screenshot: MSBuild using only one of the eight CPU cores]

This is MSBuild using 1 (!), yes, one!, of the 8 CPU cores sitting in my machine to get my work done. What about the other 7? Why don't you use them, MSBuild? With that single core, my simple local build of our project discoverize currently takes around 36 seconds:

[screenshot: the local build of discoverize taking around 36 seconds]

Tell MSBuild to use all cpu cores

Well, it's as easy as adding /m or /maxcpucount to your msbuild command line to boost your build times:

[screenshots: the same build run with /m, now finishing in about 8 seconds]

Down to 8 seconds with 3 additional characters: [space]/m. That's easily a 4.5 times improvement!

Your mileage may vary

Of course, every project is different, so your speed increase might be higher or a lot lower than what I've seen. But it's an easy measure to get at least some improvement in build times with very little effort. Don't trust Visual Studio on that one, though – the solution builds slowly there, still.

For reference, let me tell you that the /maxcpucount switch can actually take a parameter value, like so: /maxcpucount:4. So if you have lots of other stuff going on in the background – or for whatever other reason – you can limit the number of CPUs used by MSBuild.
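
For example (MySolution.sln is just a placeholder for your solution file):

msbuild MySolution.sln /m
msbuild MySolution.sln /maxcpucount:4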

Props to the Orchard team for a highly parallelizable build

One of the specifics of the Orchard source code that's the base for discoverize is the very loose coupling between the 70+ projects in the solution. This allows MSBuild to distribute the compilation work to a high number of threads because there are almost no dependencies between the projects that MSBuild would have to respect. Great work!

Happy building!

Where would *you* put your job offer?

by Oliver 28. January 2014 01:20

Last year, we were looking for a developer to strengthen our team and we put the job offer on our homepage. Nothing fancy there. Booking.com found a much more interesting place to put their job offer without much noise. Look at this Fiddler screenshot:

[screenshot: Fiddler showing Booking.com's HTTP response containing the job offer]

This response is from today so if you're looking for a job in Amsterdam, go get it!

Orchard CMS - ContentPart will not update if made invisible through placement

by Oliver 17. December 2013 22:01

Today we decided that auto-updating our entries' urls when their names change is a rather good idea. Our entries are ContentItems consisting of our custom EntryPart, an AutoroutePart, and some more that are not important here. I thought it would be a matter of minutes to get this user story done. Simply set the correct Autoroute setting inside a migration step and it should work:

public int UpdateFrom9() {
    ContentDefinitionManager.AlterTypeDefinition(
        "Entry", cfg => cfg.WithPart(
            "AutoroutePart",
            acfg => acfg.WithSetting("AutorouteSettings.AutomaticAdjustmentOnEdit", "true")));

    return 10;
}

Well, it didn't.

Placement affects ContentPart updates

In discoverize, we offer a distinct management area (totally separated from the Admin area) where owners of entries can edit their own entry's data but not much more. The decision which url should point to their respective entry is one we don't want them to make, so we simply never rendered the AutoroutePart's edit view, using the following line in our management module's placement.info file:

<Place Parts_Autoroute_Edit="-" />

It turned out that this causes Orchard to skip the POST-related Editor() overload in the AutoroutePartDriver, because the ContentPartDriver.UpdateEditor() method contains an explicit check for the location of the currently processed part being empty:

if (string.IsNullOrEmpty(location) || location == "-") {
    return editor;
}

Because of the above check, the handling of the AutoroutePart of the currently saved entry is stopped right there, and the code that is responsible for triggering the url regeneration is never called.

Updating ContentParts despite Invisible Edit View

The solution is simple – thanks to Orchard's phenomenal architecture – and consists of two steps:

  1. Make the AutoroutePart's edit view visible in the placement.info:
    <Place Parts_Autoroute_Edit="Content:after"/>
    
  2. Remove all code from the AutoroutePart's edit view:
    [screenshot: the emptied edit view template]

With this in place, Orchard won't enter the if (location == "-") condition above but instead will execute the url regeneration we were after in the first place.

Beware of Unrendered Views

So, Orchard connects certain behavior to the visibility of our parts' rendered views. Not what I'd call intuitive, but at least now we know.

Happy Coding!

IRIs and URIs; or: Internet Explorer does not decode encoded non-ASCII characters in its address bar

by Oliver 24. October 2013 23:03

Some facts about IE and its address bar

IE can display non-ASCII characters in the address bar if you put them there by hand or click a link that contains such characters in unencoded form, e.g. http://marinas.info/marina/fürther-wassersportclub.

IE sends a request for the correctly encoded URL, which is http://marinas.info/marina/f%C3%BCrther-wassersportclub.

Now, if you're in IE and click on the second link above, IE will not decode the URL back to the unencoded version – it will just keep the encoded URL in the address bar. If, instead, you're reading this page in FF or Chrome, the encoded URL above will be gracefully decoded into its unencoded counterpart.

URIs and IRIs

Disclaimer

First off, let me tell you that I'm by no means an expert in this field. I'm trying to get my head around URIs, IRIs, encodings and beautiful web sites and URLs just like probably half of the web developer world out there. So please, verify what you read here and correct me where I am mistaken.

What the RFCs have to say

To date, more than a handful of RFC documents have been published concerning URIs:

RFC 3986 states the following about a URI:

A URI is an identifier consisting of a sequence of characters matching the syntax rule named <URI> in Section 3.

See the examples section, or refer to Appendix A for the ABNF for URIs.

RFC 3987 states the following about an IRI:

An IRI is a sequence of characters from the Universal Character Set (Unicode/ISO 10646).

In short, IRIs may contain Unicode characters while URIs must not. Moreover, every URI is a valid IRI, and every IRI can be encoded into a valid URI. Let's look at our example again: the IRI http://marinas.info/marina/fürther-wassersportclub corresponds to the encoded URI http://marinas.info/marina/f%C3%BCrther-wassersportclub.

A great read on IRIs and their relationship to URIs can be found here by the W3C.

Support for IRIs

IRIs are not supported in HTTP as per RFC 2616. This implies that, before requesting a resource identified by an IRI over HTTP, it must be encoded as a URI first. This is what all mainstream browsers seem to do correctly – when you click on http://marinas.info/marina/fürther-wassersportclub and inspect the request sent from your browser, you will see that it actually requests http://marinas.info/marina/f%C3%BCrther-wassersportclub.
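
If you need the same conversion on the server side, .NET's Uri class can help – a minimal sketch (the exact behavior may depend on the framework version and its IRI parsing settings):

using System;

class IriToUriExample
{
    static void Main()
    {
        // an IRI containing a non-ASCII character
        var iri = new Uri("http://marinas.info/marina/fürther-wassersportclub");

        // AbsoluteUri yields the percent-encoded URI form,
        // while OriginalString preserves the IRI exactly as it was passed in
        Console.WriteLine(iri.AbsoluteUri);
        Console.WriteLine(iri.OriginalString);
    }
}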

HTML5 supports IRIs as URLs: http://www.w3.org/html/wg/drafts/html/CR/infrastructure.html#urls.

Use IRIs today

It looks like you can safely use IRIs in your HTML pages today already. And doing so will actually persuade IE into displaying the correct non-ASCII characters. So why don't we?

About Oliver

I build web applications using ASP.NET and have a passion for jQuery. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code. I love to spend time with my wife and our daughter.

About Anton

I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.