ASP.NET vNEXT, Docker, and the Future of Application Development and Deployment

by Oliver 3. November 2014 09:16

It's been an impressive year so far in the realms of software development and deployment, especially with:

- ASP.NET vNEXT enabling per-application bundling of not only the .NET runtime but even the CLR needed for your app,
- Docker standardizing the software delivery process by use of Linux containers (which run on Windows in a VM – here's A Docker 'Hello World' With Mono),
- and now Microsoft announcing native Docker support for Windows Server.

It took me a while to understand that we're witnessing nothing less than a revolution in software development.

The Vision: Build Your App Anywhere, Bundle It, and Run It Anywhere (Else)

The clouds have been with us for a couple of years now and have started to provide real benefit beyond "moving your stuff to somewhere else". What's emerging now, with Docker and also the new ASP.NET runtime bundling, is something completely new: application containers. They neither have specific OS requirements – Docker will be supported natively on Windows Server soon, and ASP.NET runs on Linux today – nor do they need a specific technology stack installed on the target machine (as with PaaS), because they bring all of the necessary runtime along. But they're also not large VMs bundled with your application, which carry a significant maintenance overhead (as with IaaS). Virtualized application containers are the sweet spot between IaaS and PaaS. Go ahead and read that post – it's eye-opening.

Visual Studio 2013 Hidden Gems

by Oliver 17. October 2014 23:13

This post is one of several summarizing some of the sessions I attended during the .NET Developer Days conference in October 2014. Check out the rest of them. Here are my notes from a whole day of sessions diving deep into Visual Studio and its possibilities, led by Kate Gregory.

Window positioning

When drag'n'dropping windows you can drop them in any place you like – even in a place where VS suggests docking them to another window group – by holding down the CTRL key and then releasing the mouse button.

Using the Start Page

Probably 3% of all developers use it, but it's gotten better over the years. You can now pin projects to it so they won't fall off the most recently used list, remove unneeded projects by right-clicking, or open a project's folder if you forgot where you keep it. Also, the Start Page hides itself once you open a project or a file, so you don't have to close it by hand anymore.

Navigating Code

If you want to go to the definition of a symbol, just put your cursor on it and press F12, the shortcut for Go To Definition. That will open the file that contains the definition of the given symbol. Now, if you want to drill down into a deeper hierarchy and don't care for the intermediate definitions, give Alt+F12 a try – it's the shortcut for Peek Definition, and it will open the definition of the given symbol in an iframe type of window right inside the code you're looking at. You can then use that window to follow further definitions without leaving the current point of interest. [Can't find that menu item in the Express edition, though.]

Also, give bookmarks a try! There's a bookmark manager where you can give your bookmarks a name, group them into folders and the like. Quite helpful to quickly find your way around a large codebase or for presentations.

Finding things

There's a great inline find window in VS by now that you can control from your keyboard in no time. Use Ctrl+F to open it prefilled with the word your cursor is currently on, or use Ctrl+F3 to search for the next occurrence of that word. This little tool is really worth getting to know well because it can save you a lot of time when looking for stuff or replacing it. Have you noticed the search text box at the top of the Solution Explorer? There's even a shortcut to get there so you don't have to take your hand off the keyboard. Be prepared to find even more search boxes here and there – the Error List has one, too!

Application Lifecycle Management (ALM)

Visual Studio Online is a new one-stop solution for hosting projects and collaborating on them in the cloud. It basically offers cloud-based TFS instances. The basic plan is free for up to 5 users in a project with an unlimited number of stakeholders, who are allowed to view burndown charts, backlogs, Kanban, and task boards, and may even create new Work Items. It supports, of course, TFVC but also Git for source control. In Visual Studio, use the Team Explorer window to work with your remote TFS, e.g. your Visual Studio Online account, but you can choose to manage your project through a web browser just as well. There's a powerful work item editor available online; have a look and take a minute to grasp all it offers – I'd call it impressive. There's really a ton of features here, and no doubt there are other tools out there to do the same thing. What really cuts it for me: Visual Studio Online is free for up to 5 users and an unlimited number of stakeholders, and the integration with Visual Studio is seamless.
[You can do pretty much all of the management work either in VS or online.]

Sign out of VisualStudio.com

If you're logged into VisualStudio.com, you can log out by opening the drop-down menu next to your login name, choosing "Account settings…" and there clicking "Sign out". Beware that you have to be logged in with a Microsoft account if you want to use the Express version for longer than 30 days.

Debugging

I'll just put stuff into a list here for better readability:

- Have you met the Autos window? It doesn't seem to be included in the Express version, but when hitting a breakpoint it offers insight into all variables used on the current line and the previous line, and after exiting a function it even shows the return value – even if you didn't assign it to any variable!
- The Locals window captures the values of all variables defined in the current scope, without the need to add them to the Watch window.
- Press the CTRL key to temporarily hide the variable inspector popup window.
- Pin values from the above window so you'll keep their values in view – even during the next debugging session!
- Set your cursor on a line of code and choose Run To Cursor from the context menu to continue running your code up until the line with the cursor. Wow!
- Or choose Set Next Statement to skip all code from the current breakpoint onwards and jump to the selected line. Wow²!
- Edit + Continue is also great but works only in 32-bit mode :-|
- IntelliTrace (in the Ultimate edition) allows you to capture execution traces of your software on a client machine and debug (through replaying) the same set of instructions inside your local VS – Kate Gregory called it Time Travel Debugging ;-)

That's it from the first day. Happy developing!

.NET Developer Days in Wroclaw

by Oliver 15. October 2014 19:39

I'm currently attending the first .NET Developer Days conference in Wrocław, Poland, and will put up a few posts with my notes from some of the sessions I was able to attend. The conference is taking place from 14.10. to 16.10.2014 in the Wrocław Stadium. Here's a list of all posts (I'll update the links as soon as I finish a given post):

- Visual Studio 2013 Hidden Gems
- ASP.NET vNEXT
- SQL Server Data Tools: An Intro
- Continuous Deployment
- WebAPI, OData

There's already been a lot of input and the third day is still ahead of me! I hope I'll be able to update the above list soon. Happy coding!

Pieces of C# – long and short

by Oliver 9. August 2014 12:52

Today, I found this dusty piece of code in our code base:

Stone-age version

public string GetIframeIds()
{
    var result = new StringBuilder();
    var first = true;
    foreach (var iframe in _iframes)
    {
        if (!first) result.Append(',');
        else first = false;
        result.Append("'" + iframe.ClientID + "'");
    }
    return result.ToString();
}

… and just had to rewrite it to this:

Updated version

public string GetIframeIds()
{
    return string.Join(",", _iframes.Select(ifr => "'" + ifr.ClientID + "'"));
}

I couldn't help but run some micro-performance test on these code snippets, since StringBuilder is usually quite fast. I ran each of the snippets with an _iframes length of 30 in a loop of 10,000 iterations, and yes, the first version is faster, with 215ms vs. 360ms. But then, in production I run that code block only once per request, not 10,000 times as in the test. Spending 21µs or 36µs in that method won't make any significant difference, especially when looking at request execution times beyond 100ms. Why should you or I care? The second code block is arguably easier to read, quicker to write, and harder to get wrong. Happy coding!
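In case you want to reproduce the numbers: here's a minimal sketch of such a micro-benchmark. The harness, class and method names are mine, not the original test code, and absolute timings will of course differ per machine:

using System;
using System.Diagnostics;
using System.Linq;
using System.Text;

static class IframeIdBenchmark
{
    // 30 fake client ids, mirroring the _iframes length of 30 mentioned above.
    static readonly string[] ClientIds =
        Enumerable.Range(0, 30).Select(i => "iframe" + i).ToArray();

    static string StoneAge()
    {
        var result = new StringBuilder();
        var first = true;
        foreach (var id in ClientIds)
        {
            if (!first) result.Append(',');
            else first = false;
            result.Append("'" + id + "'");
        }
        return result.ToString();
    }

    static string Updated()
    {
        return string.Join(",", ClientIds.Select(id => "'" + id + "'"));
    }

    static void Time(string name, Func<string> build)
    {
        const int iterations = 10000;
        var sw = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
            build();
        sw.Stop();
        Console.WriteLine("{0}: {1}ms for {2} iterations", name, sw.ElapsedMilliseconds, iterations);
    }

    static void Main()
    {
        Time("StringBuilder", StoneAge);
        Time("string.Join", Updated);
    }
}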

Learning Powershell

by Oliver 30. May 2014 21:34

Today, I finally decided that I want to get to grips with PowerShell and have it available in my toolbox for those everyday developer tasks. For a fresh start, I wanted to make sure I'm running the latest and greatest of PowerShell, but how do I find out which version I have installed?

What version am I running?

Just fire up a PowerShell instance and type $psversiontable or $host.version:

PS C:\Windows\system32> $psversiontable

Name                           Value
----                           -----
PSVersion                      4.0
WSManStackVersion              3.0
SerializationVersion           1.1.0.1
CLRVersion                     4.0.30319.18444
BuildVersion                   6.3.9600.16406
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0}
PSRemotingProtocolVersion      2.2

PS C:\Windows\system32> $host.version

Major  Minor  Build  Revision
-----  -----  -----  --------
4      0      -1     -1

Actually, when I ran this I didn't have the 4.0 version installed yet. So where did I get it?

How to install PowerShell 4.0 (the newest version as of mid 2014)

Go here and choose the right link for you: How to Install Windows PowerShell 4.0. That's it.

Make use of great tooling: use the ISE

Last but not least, especially for those of you who, like me, are just getting started, make sure you're using the great Integrated Scripting Environment (ISE) that comes bundled with PowerShell. Now, get scripting!
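If you're looking for it: the executable is powershell_ise.exe, and you can launch it straight from a PowerShell prompt – in recent versions the short form ise should work as well. The script name below is a made-up example:

PS C:\> powershell_ise.exe .\MyScript.ps1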

GIT tip: fast-forward local branch to the head of its remote tracking branch without checking it out

by Oliver 6. February 2014 00:23

Not much else to say than what's mentioned in the title. I come across the need to do so mostly before deployments from my machine, where I want to update my local master branch to the HEAD of the remote master branch. Here's how to do that:

git fetch origin master:master

Thank you stackoverflow and Cupcake!
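The same refspec works for any branch, not just master. It's also safe by default: without a leading + in the refspec, git refuses to move the local branch unless the update is a fast-forward. With a hypothetical feature branch as an example:

git fetch origin feature/search:feature/search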

Productivity boost with MSBuild: use /maxcpucount

by Oliver 28. January 2014 21:24

This is embarrassing. For the n-th time during the past couple of years I've felt uneasy waiting for our projects (read: solutions) to compile. I kept seeing MSBuild use 1 (!) – yes, one! – of the 8 CPU cores I have sitting in my machine to get my work done. What about the other 7? Why don't you use them, MSBuild? With that single core, my simple local build of our project discoverize currently takes around 36 seconds.

Tell MSBuild to use all CPU cores

Well, it's as easy as adding /m or /maxcpucount to your msbuild command line to boost your build times (complete invocations at the end of this post). Down to 8 seconds with 3 additional characters: [space]/m. That's easily a 4.5 times improvement!

Your mileage may vary

Of course, every project is different, so your speed increase might be higher or a lot lower than what I've seen. But it's an easy measure to get at least some improvement in build times with very little effort. Don't trust Visual Studio on that one, though – the solution still builds slowly there. For reference, the /maxcpucount switch can actually take a parameter value, like so: /maxcpucount:4. So if you have lots of other stuff going on in the background, or for whatever reason, really, you can limit the number of CPUs used by MSBuild.

Props to the Orchard team for a highly parallelizable build

One of the specifics of the Orchard source code that's the base for discoverize is the very loose coupling between the 70+ projects in the solution. This allows MSBuild to distribute the compilation work to a high number of threads because there are almost no dependencies between the projects that MSBuild would have to respect. Great work!
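In case you want to copy and paste, the complete invocations look like this – the solution file name is just a placeholder for whatever you build:

msbuild MySolution.sln /m
msbuild MySolution.sln /maxcpucount:4

Happy building!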

What's Wrong With Our Specification By Example Tests

by Oliver 25. July 2013 11:44

We've been working on our customizable portal software discoverize for about two years now, using Orchard CMS. From the beginning we were convinced to use Specification By Example to build up a living documentation of the functionality of our software. This has been very important to us since we plan to drive tens if not hundreds of portals using the same code base. Last year, I wrote about how we do our integration testing. I've also written about why we do browser based testing as opposed to some lower level testing that is in place e.g. in the Orchard.Specs project inside the Orchard source solution. But here we are, a year has passed, and we're still not happy with our approach.

Problems we're facing

The biggest problem is really that writing an acceptance test for a new feature takes nearly as much time as implementing the feature itself. This might be tolerable for mission critical software used in banks or space shuttles, but it's just over the top for a consumer website. On the other hand, we want assurance that the software we ship contains as few bugs as possible.

Development speed down by 50%

The websites we generate using our software are quite complex and interactive. This has repeatedly posed challenges in writing robust browser based tests. We chose Coypu over Selenium because it has a cleaner API and handles asynchronous postbacks really well, but its API has still been limiting to us, so we regularly find ourselves hacking around those limitations – see the sketch after this problem list for what such a spec looks like. All of which has to be tested, of course, which takes a noticeable amount of time. Another problem we face is that we need to change our HTML to accommodate testing. For example, we keep adding id attributes to elements just so we have an easy and reliable way of accessing those elements in our specification tests. This doesn't seem right, but that's how we get stuff to work.

Test execution too slow for continuous feedback

The spec test execution time is too high. We're talking about 50-90 seconds per test case if they pass; add another 20-30 secs if they fail (because of the browser automation timeouts). That's a real bummer because it's so easy to lose focus during that time. Additionally, before executing a test suite (or a single test, if that's what you want) we compile and publish our code to a separate destination which the specs run on. This process takes another 45-50 secs, which is ok if you run all specs at once but adds significant overhead when working on a single acceptance test. Related to this, we keep having trouble quickly finding the cause for a broken test, because not all of our commits are being pushed through the acceptance tests pipeline – a single run sometimes takes longer than the time between commits.

Test Fragility keeps us busy

Another recurring problem are tests breaking due to UI changes. This might be a simple change of CSS, HTML or a JavaScript snippet, but it happens all the time. Also, there are usually at least a couple of tests that break simultaneously because they reuse certain steps, which is not only annoying but often misleading as to where the error really comes from.

Demoralization

All of the above lead to decreased morale, both in writing new tests and in fixing broken ones. Which in turn adds even more overhead to the development process.
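To make the discussion more concrete, here's a minimal sketch of how a Coypu based browser spec step reads. The host, port, element id, button caption and expected text are made-up placeholders, not our actual portal markup, and the configuration values are illustrative:

using System;
using Coypu;

class SearchSpecSketch
{
    static void Main()
    {
        // Coypu retries its finders and queries until the timeout expires,
        // which is what makes it handle asynchronous postbacks so well.
        var config = new SessionConfiguration
        {
            AppHost = "localhost",   // placeholder – wherever the published specs site runs
            Port = 8080,             // placeholder
            Timeout = TimeSpan.FromSeconds(10),
            RetryInterval = TimeSpan.FromSeconds(0.5),
        };

        using (var browser = new BrowserSession(config))
        {
            browser.Visit("/search");

            // This is why we keep adding id attributes to our HTML:
            // they give the spec a stable handle on the element.
            browser.FillIn("search-box").With("beach volleyball");
            browser.ClickButton("Search");

            // HasContent also retries until the timeout, so an async
            // postback that takes a few seconds doesn't break the spec.
            if (!browser.HasContent("results for 'beach volleyball'"))
                throw new Exception("Search results did not appear in time.");
        }
    }
}

The retrying is a double-edged sword: it makes the specs robust against slow postbacks, but the waiting is also exactly what drives execution times up to the 50-90 seconds per test case described above.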
Looking for success stories This post came into existence because we believe in Specification by Example and we also believe that other teams are successfully running integration tests, even by the use of an automated browser. If you're part of such a team, or have any other valuable feedback to share, please do so in the comments. Happy testing!

Git – How to quickly fast-forward a local branch to the HEAD of its remote tracking branch

by Oliver 19. June 2013 11:39

The scenario I'm facing quite regularly during development is that I want to change to a different feature branch that someone else is really working on, to do some maintenance or the like. I know that I can just fast-forward my local branch to the current HEAD of the corresponding remote branch, but using e.g. a GUI such as the wonderful GitExtensions, I have to first check out my local branch and then merge the origin's head into it. This might not only take longer than needed but sometimes leads to problems when the solution is currently open in Visual Studio (depending on the differences between the two branches I'm switching between). Of course, I wasn't the first to want to get around unnecessary overhead, and this stackoverflow answer helped me find a solution to my "problem". For a branch named "design-2", simply do the following on a command line inside your repository:

git branch -f design-2 origin/design-2

Git will answer with something like this:

Branch design-2 set up to track remote branch design-2 from origin.

And that's that. Beware: as stated in a comment on the above mentioned answer, …

Just be very careful not to do that unless you've made absolutely sure the merge would be a fast-forward!

Happy gitting!
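Since git branch -f moves the branch pointer unconditionally, you can verify the fast-forward condition beforehand with git merge-base (available since git 1.8.0) – same branch name as above:

# exits with 0 only if design-2 is an ancestor of origin/design-2,
# i.e. the update really is a fast-forward
git merge-base --is-ancestor design-2 origin/design-2 && git branch -f design-2 origin/design-2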

SpecFlow Step Definition with Optional Parameter

by Oliver 13. May 2013 11:41

Today, this question came up on the SpecFlow Google Group:

Assuming I would like to define in Gherkin the following:

1. When I send some argument xxx with parameter aaa and another parameter bbb
2. When I send some argument xxx with parameter aaa

And I would like to have only one reusable function, something like this:

[When(@"I send some argument (.*) with parameter (.*) and another parameter (.*)")]
public void foo(string arg, string paramA, string paramB)
{
    // check if paramB is null and do something
}

I am aware of the table feature (pipe separated values) but I would like to stick with this text-alike syntax.

We've encountered this use case several times in the past (also avoiding the table syntax) and used to solve it by delegating the shorter version to the longer one, but I decided to go see if I can find a more elegant solution.

Matching the steps

The first task in matching both steps was to match the first step on its own. Since version 1.9, SpecFlow has this wonderful syntax highlighting in .feature files which helps identify unbound steps, and it shows that our first pattern is too greedy: it matches the second step as well, but not the way we need. Changing the regular expression for the first parameter to something more restrictive allows us to restrict the match to only the first step (the second step is then colored purple to notify us that there is no matching step definition yet). The regex ([^ ]*) we use here means: match all characters that are not spaces 0 to n times, thus denying the match of the second step because of the space character following the argument aaa. Sometimes, though, you also need to match spaces in arguments, and that's when we use a slightly modified version like this: "([^"]*)" – which means: match a quote, then match everything but a quote 0 to n times and save this match as a reference, and then match another quote. In a verbatim string (prefixed by the @ sign), the quotes have to be doubled: ""([^""]*)"". Note that you'll now have to enclose your spaced string value in quotes, but you can still use the same method to put that step attribute on. Now, let's go for the second argument.

Using a .NET Optional Parameter

My first try was to add an optional parameter with a default value to the method we already have. Unfortunately, SpecFlow complains that for the first step, with only one argument, the matching method needs to have only one parameter. I thought that the compiler would generate two methods here, one with and one without the optional parameter, so that at runtime it could pick the right one depending on which parameters were provided with a value. It turns out that this is not so. It seems that the IL code for a method with optional parameters contains only one method as well, as per this article:

Intermediate language for optional parameter method

.method private hidebysig static void Method([opt] int32 'value', [opt] string name) cil managed
{
    .param [1] = int32(1)
    .param [2] = string('Perl')
    .maxstack 8
    L_0000: ldstr "value = {0}, name = {1}"
    L_0005: ldarg.0
    L_0006: box int32
    L_000b: ldarg.1
    L_000c: call void [mscorlib]System.Console::WriteLine(string, object, object)
    L_0011: ret
}

That's why SpecFlow complains.

Solution: Use Two Methods

It looks like there is no direct solution to the problem that would require only a single method. The solution we employ seems to be all you can do about it, at least right now with SpecFlow 1.9.
Which would be to use (at least) two separate methods, one of which delegates its execution to the other, more general one.
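The original post showed the code as a screenshot; here's a minimal sketch of what that delegation can look like. The step texts follow the question above, while the class and method names are made up:

using TechTalk.SpecFlow;

[Binding]
public class SendArgumentSteps
{
    // The general step definition does the actual work. ([^ ]*) matches
    // everything up to the next space, as explained above.
    [When(@"I send some argument ([^ ]*) with parameter ([^ ]*) and another parameter ([^ ]*)")]
    public void WhenISendArgumentWithTwoParameters(string arg, string paramA, string paramB)
    {
        // ... do something with arg, paramA and the (possibly null) paramB ...
    }

    // The shorter step simply delegates, passing null for the missing parameter.
    [When(@"I send some argument ([^ ]*) with parameter ([^ ]*)")]
    public void WhenISendArgumentWithOneParameter(string arg, string paramA)
    {
        WhenISendArgumentWithTwoParameters(arg, paramA, null);
    }
}

Happy spec'ing!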

About Oliver

I build web applications using ASP.NET and have a passion for JavaScript. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code. Love to spend time with my wife and our children. My profile on Stack Exchange, a network of free, community-driven Q&A sites

About Anton

I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.