How To Set Up Additional TeamCity Build Agents

by Oliver 17. April 2019 09:00

This step-by-step guide is based on TeamCity Professional 2018.2.4 (build 61678).

1. On your TeamCity server, open the web UI and navigate to Administration → Install Build Agents → Windows Installer.
2. Run the installer.
3. Choose a folder on the disk to extract to. I chose C:\BuildAgent3.
4. Do NOT check the Windows Service checkbox. I know this sounds counterintuitive – but it will save you from losing your default Build Agent that is already running.
5. Set your Build Agent's properties. Set the serverUrl to the publicly accessible URL of your TeamCity instance. Give a distinctive name to your new Build Agent – I like to just number them. Optional: change the ownPort to something predictable; this value is only used internally.
6. Finish the installation process.
7. Open .\launcher\conf\wrapper.conf and scroll to the bottom. In my case, this is C:\BuildAgent3\launcher\conf\wrapper.conf.
8. Change the following three values to something unique: windows.ntservice.name, windows.ntservice.displayname, and windows.ntservice.description.
9. Install the Windows Service for your Build Agent. Open a console, navigate to the \bin directory of your Build Agent's installation folder, and run service.install.bat followed by service.start.bat (see the PS at the end of this post).
10. Verify that your new Build Agent is running: open services.msc and scroll down to TeamCity.
11. Check TeamCity for the new Build Agent. It takes a few minutes before the TeamCity server and the Build Agent properly connect – but they do it automatically.
12. Build stuff!

Thanks for visiting - happy coding!
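PS: for reference, here's roughly what steps 5, 8, and 9 boil down to on disk. A minimal sketch – the server URL, agent name, port, and service-name suffix are made-up examples, so adjust them to your setup:

```
# conf\buildAgent.properties (step 5)
serverUrl=https://teamcity.example.com
name=Agent-3
ownPort=9093

# launcher\conf\wrapper.conf (step 8) – the suffix just needs to be unique
windows.ntservice.name=TCBuildAgent3
windows.ntservice.displayname=TeamCity Build Agent 3
windows.ntservice.description=TeamCity Build Agent 3 service
```

```
C:\> cd C:\BuildAgent3\bin
C:\BuildAgent3\bin> service.install.bat
C:\BuildAgent3\bin> service.start.bat
```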

Configure TeamCity to Support Compilation of C# 6 Code using MSBuild

by Oliver 7. September 2017 00:13

For quite a long time, our team chose not to mess with our working TeamCity configurations, which compile, build, test, and deploy our code several times a day. Two weeks ago, we finally upgraded our last and at the same time biggest project, discoverize, to work with Visual Studio 2017. This allowed us to take a fresh look at the *cough* new C# language features that we had been ignoring for the last few years. But using any of them also meant having to upgrade our continuous integration infrastructure to support them. Here's what we've done.

Update all TeamCity configurations

If you use the MSBuild runner, choose Microsoft Build Tools 2017 as the MSBuild version and set the MSBuild ToolsVersion to 15.0. This will lead to the error that no Build Agents can be found for the given configuration, because a requirement is not met: MSBuildTools15.0_x64_Path cannot be found.

Install new Build Tools

Thanks to this Stackoverflow answer I quickly learned that I had to install the Build Tools for Visual Studio 2017. You can get the web installer from here; more information about the options in the tool can be found on this page. The first screen shows the possible workloads (as of August 2017) with "Web development build tools" selected, and the second screen shows the individual components selected (I actually unchecked all optional .NET Framework targeting packs).

Restart the TeamCity Agent Service

For TeamCity to realize that you've installed new tools on your build machine, you need to restart the Agent Service. You can find it e.g. after running services.msc from the Start menu's Run command.

Missing AllRules.ruleset file

Now the compilation of our C# 6 project finally succeeded. There was still one problem: the build log contained warnings about an AllRules.ruleset file missing. I just went ahead and copied the file from my local machine (including the full folder hierarchy), because I could not find any information on where to obtain this file other than from my own machine (with Visual Studio installed). After that last step, the build log is finally black again.

Happy configuring!
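PS: if you want to verify on the build agent itself that the new tool chain is in place, invoking MSBuild 15 directly works as a quick smoke test. A sketch, assuming the default Build Tools 2017 install location and a hypothetical MySolution.sln:

```
"C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\MSBuild.exe" ^
  MySolution.sln /p:Configuration=Release /toolsversion:15.0
```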

Using Static Methods from the .NET Framework in MSBuild – a List of All Property Functions

by Oliver 9. June 2016 20:20

In the MSBuild deployment script for our discoverize portals we use a number of useful functions from the .NET framework, e.g.:

```
$([System.IO.File]::Exists($file))
$([System.IO.Path]::GetFileName($(Destination)))
$([System.IO.Directory]::GetDirectories("$(Folder)"))
$([System.DateTime]::Now.ToString($(TimestampFormat)))
```

These methods are called Property Functions and have been available in MSBuild scripts since version 4. Here's the full list of .NET framework types whose static methods or properties you can use almost anywhere in your MSBuild scripts:

System.Byte, System.Char, System.Convert, System.DateTime, System.Decimal, System.Double, System.Enum, System.Guid, System.Int16, System.Int32, System.Int64, System.IO.Path, System.Math, System.UInt16, System.UInt32, System.UInt64, System.SByte, System.Single, System.String, System.StringComparer, System.TimeSpan, System.Text.RegularExpressions.Regex, Microsoft.Build.Utilities.ToolLocationHelper

There are a few rather useful methods from some more types that you can also use:

- From System.Environment: CommandLine, ExpandEnvironmentVariables, GetEnvironmentVariable, GetEnvironmentVariables, GetFolderPath, GetLogicalDrives
- From System.IO.Directory: GetDirectories, GetFiles, GetLastAccessTime, GetLastWriteTime, GetParent
- From System.IO.File: Exists, GetCreationTime, GetAttributes, GetLastAccessTime, GetLastWriteTime, ReadAllText

The general pattern to call a property function is $([Class]::Method(Parameters)). Beyond the above-mentioned methods, MSBuild offers some more helpful ones that are invoked on the MSBuild pseudo class:

```
[MSBuild]::DoesTaskHostExist(string runtime, string arch)
[MSBuild]::GetDirectoryNameOfFileAbove(string p, string f)
[MSBuild]::GetRegistryValue(...)
[MSBuild]::GetRegistryValueFromView(...)
[MSBuild]::MakeRelative(string path1, string path2)
[MSBuild]::ValueOrDefault(string value, string default)
[MSBuild]::Escape(string unescaped)
[MSBuild]::Unescape(string escaped)
[MSBuild]::Add(double a, double b)        / Add(long a, long b)
[MSBuild]::Subtract(double a, double b)   / Subtract(long a, long b)
[MSBuild]::Multiply(double a, double b)   / Multiply(long a, long b)
[MSBuild]::Divide(double a, double b)     / Divide(long a, long b)
[MSBuild]::Modulo(double a, double b)     / Modulo(long a, long b)
[MSBuild]::BitwiseOr(int first, int second)
[MSBuild]::BitwiseAnd(int first, int second)
[MSBuild]::BitwiseXor(int first, int second)
[MSBuild]::BitwiseNot(int first)
```

The smart thing about those arithmetic methods is that MSBuild converts string values to a matching number type on the fly, so there's no need for any explicit type conversion. And that's the whole spectrum of property functions in MSBuild. For further reading, please turn to the official documentation on the MSDN. Happy coding!

photo credit: Paddington Reservoir Gardens Roof via photopin (license)
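PS: here's a small, self-contained sketch that combines a few of the functions above – the folder, timestamp format, and port values are made up for illustration:

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Destination>C:\portals\example-portal</Destination>
    <TimestampFormat>yyyy-MM-dd_HH-mm-ss</TimestampFormat>
    <!-- Property functions are evaluated when the property is defined: -->
    <PortalName>$([System.IO.Path]::GetFileName($(Destination)))</PortalName>
    <Timestamp>$([System.DateTime]::Now.ToString($(TimestampFormat)))</Timestamp>
    <!-- Arithmetic on the [MSBuild] pseudo class – strings convert on the fly: -->
    <NextPort>$([MSBuild]::Add(8080, 1))</NextPort>
  </PropertyGroup>
  <Target Name="Info">
    <Message Text="Deploying $(PortalName) at $(Timestamp), port $(NextPort)"
             Condition="$([System.IO.File]::Exists('$(Destination)\web.config'))" />
  </Target>
</Project>
```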

Run Web Application Deployment in TeamCity – Only on Weekdays

by Oliver 26. May 2016 22:12

We've experienced in the past that deploying new features before the weekend is not a good idea: potential bugs are not discovered in a timely manner, and our reaction times to critical problems are longer over the weekend than during the week. So for a couple of years now, we've stuck to our no-deployments-on-weekends policy, and every once in a while an exceptional deployment with either some "very important" new feature or "only small changes, nothing big" reminded us that it really was a good idea to deploy only on weekdays. There was one downside to this: we always missed out on getting fresh bits onto our servers on Monday morning, because someone had to trigger the deployment by pushing to a dedicated Git repository before our TeamCity deployment configuration started to run at 2:20 am – and since most of us don't work on Sundays, there usually was no one to do that.

Using a CRON expression in your trigger

It turns out TeamCity supports date/time triggers defined by a CRON expression (see the PS below for a sketch). Since I don't use cron on a daily basis and don't speak cron fluently, I was glad to stumble upon Cron Maker, which was a great help with getting the syntax right. Having a tool like this in my tool chain makes me confident, and I won't shy away from CRON or its powerful expressions in the future! Happy cron'ing!
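PS: for illustration, here's what a weekdays-only trigger for the 2:20 am deployment from above could look like. TeamCity's scheduling trigger accepts Quartz-style cron expressions (note the leading seconds field) – treat the exact expression as an assumption and verify it, e.g. with Cron Maker:

```
# field order: second minute hour day-of-month month day-of-week
# fire at 02:20:00, Monday through Friday ('?' = no specific day-of-month)
0 20 2 ? * MON-FRI
```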

Update (or Delete) an entry from appSettings in web.config using MSBuild and the XmlUpdate task

by Oliver 11. May 2016 12:24

How do I modify an entry in the appSettings node during deployment? We use a custom MSBuild script to deploy our discoverize portals and wanted to set a portal-dependent key, namely NewRelic.AppName. So how did we do it? We use the XmlUpdate task from the MSBuild Community Tasks project. Here's the code:

```xml
<!-- Import all MSBuild Community Tasks -->
<Import Project="$(LibFolder)\msbuild\MSBuild.Community.Tasks.Targets" />

<!-- Define the property value we want to use below -->
<PropertyGroup>
  <NewRelicAppName>$([System.IO.Path]::GetFileName($(Destination))), Discoverize Portals</NewRelicAppName>
</PropertyGroup>

<!-- Update NewRelic.AppName key in appSettings of web.config -->
<XmlUpdate XmlFileName="$(Destination)\web.config"
  XPath="/configuration/appSettings/add[@key='NewRelic.AppName']/@value"
  Value="$(NewRelicAppName)" />
```

I'm no XPath guru, so I was glad to find this thread that contained the XPath expression needed to select a certain appSettings entry by key and update its value. If you seek to delete an element from an XML file, this stackoverflow answer taught me that there is also a Delete attribute on the XmlUpdate task :-) (see the PS below). Happy deploying!
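PS: a hedged sketch of the delete case mentioned above – I haven't run this exact snippet, but per that answer the task removes the nodes matched by the XPath when Delete is set:

```xml
<!-- Remove the NewRelic.AppName entry from appSettings entirely -->
<XmlUpdate XmlFileName="$(Destination)\web.config"
  XPath="/configuration/appSettings/add[@key='NewRelic.AppName']"
  Delete="true" />
```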

How to Quickly Set Up a Private WebPageTest Server And Client

by Oliver 14. November 2015 21:40

Get your own WebPageTest server and test agent up and running in minutes, not hours!

Motivation

The original documentation can be found here and here. Unfortunately, it's a bit vague in some parts, especially if you don't set up infrastructural pieces and cloud server instances on a daily basis. So here's a how-to guide to get you up and running as fast as possible.

Infrastructure overview

To run web page tests against your own private instance, we need:

- a WebPageTest server instance (the master)
- one or more WebPageTest test agents (the clients)

The master receives test jobs and delegates them to one of the clients. You can run tests from the web interface or through the API using the webpagetest node module. You might want to think about where in the world you want to spin up those virtual machines. The WPT server (master) can really be hosted anywhere you want, but the test agents' (clients') location should be chosen consciously, because their distance to the tested site's server will play a role in the results you will see later during testing.

How to set up the master (WPT server)

You need an Amazon AWS account to set this up quickly. If you haven't got one, you either quit here and set up your own server with the WebPageTest stack, or you go and create one.

Now, go to your AWS dashboard, to Instances → Instances, and click "Launch Instance". On the next screen, go to Community AMIs, enter one of the ids that can be found here – I chose ami-22cefd3f (eu-central-1) – and hit "Select". In step 2, you can choose a t2.micro instance: the WPT server does not need to be high performance – it only delegates test execution and gathers the results. It's when setting up the client (test agent) that we'll have to pay attention to the performance of the instance.

Now keep clicking Next until you reach "Step 6. Configure Security Group". Here we need to add a firewall rule that allows us to access our WPT server (master) through HTTP, otherwise no testing will be possible. Giving the security group a more descriptive name and description is optional but nice. In step 7, review your settings if you want, then hit "Launch".

AWS will now want to assign an (ssh) key pair to this instance. If you have an existing key pair, you can re-use it. If you're doing this for the first time, you won't have any existing key pairs to choose from and will have to create a new one. The "Launch Instances" button will activate only after you've downloaded your private key. From there, the Instances overview – which was empty at the beginning – shows the public IP address and DNS entry of your instance.

Congratulations, you've successfully completed the setup of the WPT server (master)! If you now open http://your.instance.ip you should see the WebPageTest UI.

To log into your instance via SSH, follow one of the guides here. In short: either use ssh from the command line – available on all Linuxes, and even on Windows if you have Git installed, e.g. in C:\Program Files\Git\usr\bin:

```
ssh -i wpt-server.pem ubuntu@[public-ip|public-dns]
```

Or, on Windows, use PuTTY. In this case you'll first have to generate a PuTTY-compatible private key file from your *.pem file, and then you can connect through PuTTY.

How to set up the client (WPT test agent)

Now, we need to set up at least one test agent to actually execute some tests.
There's a long list of pre-configured, regularly updated Windows AMIs – with all the software installed that's needed to execute tests – in the documentation. To get started quickly, pick one that contains all major browsers and is located in your favorite region. In this guide, we're going to use ami-54291f49 (IE11/Chrome/Firefox/Safari) in region "eu-central (Frankfurt)".

Basically, we repeat the steps from the master setup, but now using the test agent AMI. In step 2, when choosing an instance type, we'll now have to ensure that our agent will deliver consistent results. This performance review recommends the following choices (prices will vary by region; the ones displayed here were for US East N. Virginia), quoting:

- If you're running just a couple tests per hour, on small HTTP sites, a t2.micro will be okay ($13/month)
- If you're running just a couple tests per hour, on large or secure sites, you'll need to use a t2.medium ($52/month)
- If you're running lots of tests per hour, you can't use t2's – the most efficient agent will be a c3.large ($135/month)

In step 3, we have to configure our test agent with the following information:

- where to find the WPT server (master): use the public IP address or DNS name
- what the location (name) of this agent is: a string used in the locations.ini of the master

To be honest, I haven't quite wrapped my head around the auto-scaling feature of WPT. That's why we set up a single location ("first") manually that this client will be identified with. In the user data field under Advanced Details we enter:

```
wpt_server=52.29.your.ip
wpt_location=first
```

Now, either click your way through the remaining steps or jump to "Review and Launch" and launch your test agent instance. The key pair dialog will pop up again, and now you can choose your existing key "wpt-server" to assign to that instance. You won't use it to connect, anyway, because the default connection type to a Windows instance is RDP, for which a firewall rule was automatically added in step 6. After launching, a link will be available with instructions on how to connect to that Windows instance, but you shouldn't need to do that.

Connecting master and client

One step is left: we have to configure the master to know which test agents it can use. This part was actually one of the most tedious bits of the setup, because juggling several configuration files with lots of options and entries to make them do what you want is never easy. For the manual management of test agents, we need to do the following:

1. Log into the master, e.g. ssh -i wpt-server.pem ubuntu@pu.bl.ic.ip
2. Go to the folder /var/www/webpagetest/www/settings/
3. Edit locations.ini to contain these blocks (sudo nano locations.ini):

```
[locations]
1=first
default=first

[first]
1=first_wptdriver
2=first_ie
label="My first agent"
default=first_wptdriver

[first_wptdriver]
browser=Chrome,Firefox,Safari

[first_ie]
browser=IE 11
```

4. In settings.ini, at the very end, set ec2_locations=0 to hide the predefined EC2 locations from the location dropdown in the browser.
5. Restart NGINX: sudo service nginx restart

Now, if you go to http://your.public.ip again, you'll see "My first agent" in the location dropdown, and "Chrome", "Firefox", and "Safari (Windows)" in the browser dropdown. I didn't try to find out how to show "IE 11" as well, but at this moment I didn't care. (You might have to wait a few moments before the location list updates after the NGINX restart.) You can now run your first test!
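By the way, if you'd rather double-check the wiring from a shell than through the UI: to my knowledge the WPT server also exposes its location list over plain HTTP, so something along these lines should show whether your agent has checked in (treat the endpoint as an assumption; the IP is the placeholder from above):

```
# does the master know about our "first" location and its agent?
curl "http://52.29.your.ip/getLocations.php"
```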
After some 10-15 seconds the test should start running, and a few moments later the first results should show. Congratulations!

Automating tests via the WebPageTest API

If you've tried to run WebPageTest in an automated way, you'll without a doubt have found the webpagetest node module. With your private server and test agent set up, you'll now need to dispatch your tests like so:

```
webpagetest test http://my.example.com \
  --server http://<master_ip> \
  --key <api_key> \
  --location first_wptdriver:Chrome
```

The location argument refers to the definitions in the locations.ini file, and the API key can be found in the keys.ini file on the master. We run our tests from within TeamCity using a custom script, but that's a topic for another post! Happy WebPageTesting!
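PS: when scripting this, it helps not to have to fetch the results in a second step. If I remember the CLI correctly, it accepts a --poll flag that waits for the test to finish and prints the results in one go – treat the exact flag as an assumption and check webpagetest --help:

```
# wait for results, polling every 5 seconds (flag name assumed – verify with --help)
webpagetest test http://my.example.com \
  --server http://<master_ip> --key <api_key> \
  --location first_wptdriver:Chrome \
  --poll 5
```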

How to Convert an XSLX File to CSV with UTF-8 Encoding Using LibreOffice / OpenOffice

by Oliver 13. November 2015 21:54

Thanks to this stackoverflow answer I stumbled upon the comment by Aryeh Leib Taurog, who shared his solution and a link to the OpenOffice documentation on the available options for the CSV filter. Here's how to convert the file input.xlsx to the UTF-8 encoded file input.csv in the current directory, with semicolons as field delimiters:

```
soffice.exe --convert-to "csv:Text - txt - csv (StarCalc):59,,76,1" input.xlsx
```

On my Windows system, soffice.exe is located under C:\Program Files\LibreOffice 5\program. Here's the explanation of the cryptic filter arguments:

- csv – the extension of the output file
- Text - txt - csv (StarCalc) – the (ancient) name of the filter (kept for compatibility)
- 59,,76,1 – these are four arguments:
  - the first is the field delimiter in the output file – 59 is the ASCII code for ';'
  - the second is the text delimiter – it's missing because I don't want to wrap text in quotes
  - the third is the file encoding – 76 is the internal OpenOffice code for UTF-8 (from the table on the documentation page)
  - the fourth defines the line number with which to start the export – here, we start with line 1

Thank you, open source community, and happy converting!

PS: For non-Windows users, Gnumeric with its command-line tool ssconvert might be a good choice for this job, as well.
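PPS: if you need to convert a whole folder of workbooks, the same filter string works in a loop. Here's a sketch for a Windows batch file – note the added --headless flag, which LibreOffice generally needs for unattended conversions (if a Calc window pops up or nothing happens at all, that flag is the first thing to check):

```
@echo off
rem Convert every .xlsx in the current folder to a ;-delimited, UTF-8 encoded .csv
for %%f in (*.xlsx) do (
  "C:\Program Files\LibreOffice 5\program\soffice.exe" --headless ^
    --convert-to "csv:Text - txt - csv (StarCalc):59,,76,1" "%%f"
)
```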

Setting up freeSSHd to Connect to its SFTP Server Using SSH Public Key Authentication

by Oliver 14. January 2015 22:59

A while ago, I set up an SFTP server using the freeware freeSSHd, which is relatively easy to get up and running. Initially, I created a user/password pair to log into the server.

Using SSH

Last week, we decided to switch to public/private SSH keys for authentication instead of the user/password pair. Among other things, this allows us to script access to our server while avoiding keeping a clear-text password in one of our scripts (see the PS at the end of this post). Here's how we've set it up.

Configuring freeSSHd for use with SSH

I'll run you through the necessary steps:

1. Open an instance of freeSSHd and go to the Users tab. Add or change a login to use "Public Key (SSH only)" authorization and enable SFTP access.
2. Navigate to the Authentication tab. There you'll find the path to the folder in which to deposit your public keys. If you plan to have more than a few, consider using a subfolder of the default one.
3. Open the public key folder in Windows Explorer and create a new, empty text file there with the name of the login you've set up in step 1. Make sure the file name is exactly the same as the name of the user, and don't add any file extension to it. This is where we'll paste a new SSH public key in a moment.
4. Now we will generate an SSH key pair. Locate puttygen.exe on your PC. You can grab it from the PuTTY download page, but it also comes bundled with GitExtensions or WinSCP, if you use one of these. [Side note: I use Everything to find such files. It's a great search tool that delivers instant results.]
5. Start puttygen.exe and generate a pair of SSH keys by clicking Generate.
6. Copy the public key from the grey text box and paste it into the empty file you created in step 3. In my case, this file is called "oliver".
7. You can now save the private key to a file of your choice, optionally protected by a passphrase, and use it to connect to freeSSHd via SSH with your preferred tool. I've successfully used WinSCP for testing, as I experienced several problems with PuTTY's psftp.exe command-line tool.

Roundup

Setting up public key authentication in freeSSHd can be tricky. While researching the solution, I stumbled over this blog post addressing the same problem. Its author refers to this setup guide from IBM (pdf) as the source of help, so it might be helpful to others out there as well. I hope that my step-by-step guide has also helped you. Happy connecting!
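PS: as promised above, here's roughly what the scripted access looks like. A hedged sketch using WinSCP's scripting mode – the host name, paths, and key file are made-up placeholders, and in a real script you would pin the server's actual host key fingerprint instead of accepting any (-hostkey=*):

```
rem Upload a file over SFTP with key-based auth – no clear-text password in the script
winscp.com /command ^
  "open sftp://oliver@sftp.example.com/ -privatekey=C:\keys\oliver.ppk -hostkey=*" ^
  "put C:\data\report.csv /upload/" ^
  "exit"
```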

ASP.NET vNEXT, Docker, and the Future of Application Development and Deployment

by Oliver 3. November 2014 09:16

It's been an impressive year so far in the realms of software development and deployment, especially with:

- ASP.NET vNEXT enabling per-application bundling of not only the .NET runtime but even the CLR needed for your app,
- Docker standardizing the software delivery process by use of Linux containers (runs on Windows in a VM; see A Docker 'Hello World' With Mono),
- and now Microsoft announcing native Docker support for Windows Server.

It took me a while to understand that we're witnessing nothing less than a revolution in software development.

The Vision: Build Your App Anywhere, Bundle It, and Run It Anywhere (Else)

The clouds have been with us for a couple of years now and have started to provide real benefit beyond "moving your stuff somewhere else". What's emerging now, with Docker and also the new ASP.NET runtime bundling, is something completely new: application containers. They neither have specific OS requirements – Docker will be supported natively on Windows Server soon, and ASP.NET runs on Linux today – nor do they need a specific technology stack installed on the target machine (as with PaaS), because they bring all of the necessary runtime along. But they're also not large VMs bundled with your application, which carry a significant maintenance overhead (as with IaaS). Virtualized application containers are the sweet spot between IaaS and PaaS. Go ahead and read that post – it's eye-opening.

What's Wrong With Our Specification By Example Tests

by Oliver 25. July 2013 11:44

We've been working on our customizable portal software discoverize for about two years now, using Orchard CMS. From the beginning, we were convinced to use Specification By Example to build up living documentation of the functionality of our software. This has been very important to us, since we plan to drive tens if not hundreds of portals from the same code base. Last year, I wrote about how we do our integration testing. I've also written about why we do browser-based testing, as opposed to some lower-level testing as is in place e.g. in the Orchard.Specs project inside the Orchard source solution. But here we are, a year has passed, and we're still not happy with our approach.

Problems we're facing

The biggest problem is really that writing an acceptance test for a new feature takes nearly as much time as implementing the feature itself. This might be tolerable for mission-critical software used in banks or space shuttles, but it's just over the top for a consumer website. On the other hand, we want some insurance that the software we ship contains as few bugs as possible.

Development speed down by 50%

The websites we generate using our software are quite complex and interactive. This has repeatedly posed challenges for writing robust browser-based tests. We chose Coypu over Selenium because it has a cleaner API and handles asynchronous postbacks really well, but its API has still been limiting to us, so we regularly find ourselves hacking around those limitations – all of which has to be tested as well, of course, which takes a noticeable amount of time. Another problem we face is that we need to change our HTML to accommodate testing. For example, we keep adding id attributes to elements just so we have an easy and reliable way of accessing them in our specification tests. This seems not right, but that's how we get stuff to work.

Test execution time too high for continuous feedback

The spec test execution time is too high. We're talking about 50-90 seconds per test case if it passes; add another 20-30 seconds if it fails (because of the browser automation timeouts). That's a real bummer, because it's so easy to lose focus during that time. Additionally, before executing a test suite (or a single test, if that's what you want), we compile and publish our code to a separate destination which the specs run against. This process takes another 45-50 seconds, which is okay if you run all specs at once but adds significant overhead when working on a single acceptance test. Related to this, we keep having trouble quickly finding the cause of a broken test, because not all of our commits are pushed through the acceptance tests pipeline – a single run sometimes takes longer than the time between commits.

Test fragility keeps us busy

Another recurring problem is breaking tests due to UI changes. These might be simple changes to CSS, HTML, or a JavaScript snippet, but it happens all the time. Also, there are usually at least a couple of tests that break simultaneously because they reuse certain steps, which is not only annoying but often misleading as to where the error really comes from.

Demoralization

All of the above has led to decreased morale, both in writing new tests and in fixing broken ones. Which in turn adds even more overhead to the development process.
Looking for success stories

This post came into existence because we believe in Specification by Example, and we also believe that other teams are successfully running integration tests, even with an automated browser. If you're part of such a team, or have any other valuable feedback to share, please do so in the comments. Happy testing!

About Oliver

I build web applications using ASP.NET and have a passion for JavaScript. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code and love to spend time with my wife and our children. You can find my profile on Stack Exchange, a network of free, community-driven Q&A sites.

About Anton

I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.