17. November 2015 23:11
14. November 2015 21:40
Get your own WebPageTest server and test agent up and running in minutes, not hours!

Motivation

The original documentation can be found here and here. Unfortunately, it's a bit vague in some parts, especially if you don't set up infrastructure pieces and cloud server instances on a daily basis. So here's a how-to guide to get you up and running as fast as possible.

Infrastructure overview

To run web page tests against your own private instance, we need:

- a WebPageTest server instance (the master)
- one or more WebPageTest test agents (the clients)

The master receives test jobs and delegates them to one of the clients. You can run tests from the web interface or using the API through the webpagetest node module.

You might want to think about where in the world you want to spin up those virtual machines. The WPT server (master) can be hosted anywhere you want, but the test agents' (clients') location should be chosen consciously, because their distance to the tested site's server will play a role in the results you will see later during testing.

How to set up the master (WPT server)

You need an Amazon AWS account to set this up quickly. If you haven't got one, you either quit here and set up your own server with the WebPageTest stack, or you go and create one.

Now, go to your AWS dashboard, to Instances –> Instances, and click "Launch Instance": On the next screen, go to Community AMIs, enter one of the ids that can be found here – I chose ami-22cefd3f (eu-central-1) – and hit "Select": In step 2, you can choose a t2.micro instance. The WPT server does not need to be high performance – it only delegates test execution and gathers the results. It's when setting up the client (test agent) that we'll have to pay attention to the performance of the instance. Now, you keep clicking Next until you reach "Step 6. Configure Security Group".
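If you prefer the command line over clicking through the console, the launch so far can also be sketched with the AWS CLI. The AMI id is the one from this guide; the key pair name and the security group id are placeholders you'd substitute with your own (the security group is covered in the next step):

```
# Sketch only: launch the WPT server AMI via the AWS CLI.
# "wpt-server" and the sg-... id are placeholders, not values from this guide.
aws ec2 run-instances \
    --region eu-central-1 \
    --image-id ami-22cefd3f \
    --count 1 \
    --instance-type t2.micro \
    --key-name wpt-server \
    --security-group-ids sg-0123456789abcdef0
```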
Here we need to add a firewall rule that allows us to access our WPT server (master) through HTTP, otherwise no testing will be possible. Giving the security group a more descriptive name and description (❸) is optional but nice: In step 7, review your settings if you want, then hit "Launch":

As highlighted in the screen above, AWS will now want to assign an (SSH) key pair to this instance. If you have an existing key pair, you can re-use it. If you're doing this for the first time, you won't have any existing key pairs to choose from and will have to create a new one. The "Launch Instances" button will activate only after you've downloaded your private key (❸):

Clicking ❷ gets you to the Instances overview that was empty at the beginning, where you'll find the public IP address and DNS entry of your instance: Congratulations, you've successfully completed the setup of the WPT server (master)! If you now open http://your.instance.ip you should see the WebPageTest UI:

To log into your instance via SSH, follow one of the guides here. In short: either use ssh from the command line, available on all Linux systems and even on Windows if you have Git installed, e.g. in C:\Program Files\Git\usr\bin:

ssh -i wpt-server.pem ubuntu@[public-ip|public-dns]

Or, on Windows, use PuTTY. In this case you'll first have to generate a PuTTY-compatible private key file from your *.pem file, and then you can connect through PuTTY. Here's how to do that.

How to set up the client (WPT test agent)

Now, we need to set up at least one test agent to actually execute some tests. There's a long list of pre-configured, regularly updated Windows AMIs in the documentation, with all the software installed that's needed to execute tests. To get started quickly, pick one that contains all major browsers and is located in your favorite region. In this guide, we're going to use ami-54291f49 (IE11/Chrome/Firefox/Safari) in region "eu-central (Frankfurt)".
Basically, we repeat the steps from the master setup, but now using the test agent AMI. In step 2, when choosing an Instance Type, we'll now have to ensure that our agent will deliver consistent results. This performance review recommends the following choices (prices will vary by region; the ones displayed here were for US East N. Virginia), quoting:

- If you're running just a couple of tests per hour, on small HTTP sites, a t2.micro will be okay ($13/month)
- If you're running just a couple of tests per hour, on large or secure sites, you'll need to use a t2.medium ($52/month)
- If you're running lots of tests per hour, you can't use t2's – the most efficient agent will be a c3.large ($135/month)

In step 3, we have to configure our test agent with the following information:

- where to find the WPT server (master): use the public IP address or DNS name
- what the location (name) of this agent is: a string used in the locations.ini of the master

To be honest, I haven't quite wrapped my head around the auto-scaling feature of WPT. That's why we set up a single location ("first") manually that this client will be identified with. In the user data field under Advanced Details we enter:

wpt_server=52.29.your.ip
wpt_location=first

Now, either click your way through the remaining steps or jump to "Review and Launch" and launch your test agent instance. The key pair dialog will pop up again, and now you can choose your existing key "wpt-server" to assign to that instance. You won't use it to connect to the instance anyway, because the default connection type to a Windows instance is RDP, for which a firewall rule was automatically added in step 6. After launching, a link will be available with instructions on how to connect to that Windows instance, but you shouldn't need to do that.

Connecting master and client

One step is left: we have to configure the master to know which test agents it can use.
This part was actually one of the most tedious bits of the setup, because juggling several configuration files with lots of options and entries to make them do what you want them to do is never easy. For the manual management of test agents we need to do the following:

1. Log into the master, e.g. ssh -i wpt-server.pem ubuntu@<master-ip>
2. Go to the folder /var/www/webpagetest/www/settings/
3. Edit locations.ini to contain these blocks (sudo nano locations.ini):

[locations]
label="My first agent"
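Part of the configuration between the [locations] header and the label line above was lost here. Going by the format documented for WebPageTest private instances, a minimal locations.ini for a single manually managed location named "first" looks roughly like this (wptdriver-based agents register under <location>_wptdriver, which is also why IE – driven separately – doesn't show up without its own section):

```ini
[locations]
1=first
default=first

; the location group shown in the UI dropdown
[first]
1=first_wptdriver
label="My first agent"
default=first_wptdriver

; the location the agent actually polls for work
[first_wptdriver]
browser=Chrome,Firefox,Safari
label="My first agent"
```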
In settings.ini, at the very end, set ec2_locations=0 to hide the predefined EC2 locations from the location dropdown in the browser.
Restart NGINX: sudo service nginx restart
Now, if you go to http://your.public.ip again, you'll see "My first agent" in the location dropdown and "Chrome", "Firefox", and "Safari (Windows)" in the browser dropdown. I didn't try to find out how to show "IE 11" as well, but at that moment I didn't care. (You might have to wait a few moments before the location list updates after the NGINX restart.)
You can now run your first test!
After some 10-15 seconds you should see this screen:
And a few moments later the first results should show. Congratulations!
Automating tests via the WebPageTest API
If you've tried to run WebPageTests in an automated way, you'll without a doubt have found the webpagetest node module. With your private server and test agent set up, you'll now need to dispatch your tests like so:
webpagetest test http://my.example.com \
    --server http://<master_ip> \
    --key <api_key> \
    --location first
The --location argument refers to the definitions in the locations.ini file. The API key can be found in the keys.ini file on the master:
We run our test from within TeamCity using a custom script, but that's a topic for another post!
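For reference, a minimal shape for such a CI dispatch script might look like the following. The server IP, API key, and URL are placeholders, and the actual call is left commented out; --poll makes the webpagetest CLI wait for the result instead of returning immediately:

```shell
#!/bin/sh
# Sketch of a CI wrapper around the webpagetest CLI.
# All values below are placeholders -- substitute your own.
MASTER="http://52.29.0.1"   # your master's public IP
API_KEY="your_api_key"      # taken from keys.ini on the master
URL="http://my.example.com" # the page to test

# Build the command; --poll 5 checks for results every 5 seconds.
CMD="webpagetest test $URL --server $MASTER --key $API_KEY --location first --poll 5"
echo "$CMD"

# Uncomment to actually dispatch the test once your instances are up;
# a non-zero exit code fails the build:
# $CMD || exit 1
```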
6. November 2015 21:32
25. April 2015 22:16
Two days ago I finally did it: I asked a question on serverfault.com looking for advice on why our brand new server performs more poorly than our two older servers. All the hardware details speak in favor of the new server:

- CPU: Core i7-4770 @ 3.4 GHz vs. Xeon E3-1230 @ 3.2 GHz
- RAM: 32 GB vs. 16 GB
- Drives: 2x SSD vs. 2x SATA

But in reality, the older servers with the lower specs outperformed the new server by almost a factor of two! That is to say, for every 1 request/sec processed, the new server needed 4.5 % processor time compared to 2.6 % on the old server. Here's a PerfMon screenshot of the new server:

New CPUs are really good at saving energy…

… actually so good that they will rarely bother to hurry up until you really, really stress them out. Here's a good read by Brent Ozar on an energy-saving CPU that would cause certain SQL queries to run two times slower on newer hardware than on the old one! That's exactly what had been happening to us.

Power Plan: From Balanced to High Performance

That brings us to: Power Plans. Windows Server and client OSes come with several Power Plans installed, and it just so happened that the new server we had ordered with Windows Server 2012 R2 installed had its Power Plan set to Balanced (Recommended). Well, that might be a good choice for the server hoster, as it helps keep the electricity bills down, but it's absolutely not a good choice if you want your applications to perform well on that server. They will simply be a lot slower than they could be.

So, open the Power Options window by typing "Power Plan" into the start menu or Windows search and check the High Performance radio button. After doing so on that new server, PerfMon showed this much more soothing picture:

Now we have only 1.5 % processor time per 1 request/sec processed. That's a threefold improvement. Nice!

Lesson Learned

I've learned that I'm not that good of a sys admin, yet.
I had been contemplating the reasons for that new server's poor performance again and again; I had checked all kinds of settings inside IIS, ASP.NET, and the like. Those are the areas I work in day-to-day. Turns out, I needed to widen my horizon. Thanks to serverfault.com I did. And our server is at last performing as it should. Happy administrating!
28. May 2014 12:09
Recently, we had to make some space available in one of our SQL Server Express instances that was getting close to its 10 GB limit of stored data, so I set out to delete some old data from two of our largest tables. One contained about half a million rows, the other a bit over 21 million.

Simple Deletion Would Take… Forever

The simplest SQL statement to delete all rows that were created before 2012 would be the following:

DELETE FROM [dbo].[Message] WHERE DateCreated < '20120101'

I can't even tell you how long this took, because after 14 minutes I just cancelled the query execution (which took another 7 minutes to finish). This was the table with fewer than 500,000 rows, where we wanted to delete a bit more than 200,000 rows.

Break Delete Operation Into Chunks

Searching for a solution to the problem, I came across this blog post on breaking large delete operations into chunks. It shows in good detail how the simple version above behaves compared to running a loop of a few tens of thousands of deletes per iteration. An interesting aspect I hadn't thought of at that point was the transaction log growth that can become a problem with large delete operations. Running a loop allows you to do a log backup (in full recovery mode) or a checkpoint (in simple mode) at the end of each iteration, so that the log will grow much more slowly. Unfortunately, though, this didn't help with the execution time of the delete itself, as you can also see from the graphs presented in the above post.

Disable Those Indexes!

It turns out our [Message] table had six non-clustered indexes on it, all of which had to be written to for every row that was deleted. Even if those operations are fast, their processing time adds up over a few hundred thousand iterations. So let's turn them off! In fact, let's turn off only those that won't be used during our delete query. (We have one index on the DateCreated column which will be helpful during execution.)
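For reference, the chunked approach described above can be sketched in T-SQL like this; the batch size of 50,000 is an assumption to tune for your own table:

```sql
-- Delete in small batches so each transaction, and the log growth
-- it causes, stays manageable. Batch size is an assumption.
DECLARE @rowsDeleted INT = 1;

WHILE @rowsDeleted > 0
BEGIN
    DELETE TOP (50000) FROM [dbo].[Message]
    WHERE DateCreated < '20120101';

    SET @rowsDeleted = @@ROWCOUNT;

    CHECKPOINT; -- simple recovery model; take a log backup instead in full
END
```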
This Stack Overflow answer shows how to create some dynamic SQL to disable all non-clustered indexes in a database. I've modified it slightly to disable only the indexes of a given table:

Disable/Enable Table Indexes

DECLARE @table AS VARCHAR(MAX) = 'Message';
DECLARE @sqlDisable AS VARCHAR(MAX) = '';
DECLARE @sqlEnable AS VARCHAR(MAX) = '';

SELECT
    @sqlDisable = @sqlDisable + 'ALTER INDEX ' + idx.name
                + ' ON ' + obj.name + ' DISABLE;' + CHAR(13) + CHAR(10),
    @sqlEnable  = @sqlEnable  + 'ALTER INDEX ' + idx.name
                + ' ON ' + obj.name + ' REBUILD;' + CHAR(13) + CHAR(10)
FROM sys.indexes idx
JOIN sys.objects obj ON idx.object_id = obj.object_id
WHERE idx.type_desc = 'NONCLUSTERED'
  AND obj.type_desc = 'USER_TABLE'
  AND obj.name = @table;

RAISERROR(@sqlDisable, 0, 1) WITH NOWAIT;
RAISERROR(@sqlEnable, 0, 1) WITH NOWAIT;

--EXEC(@sqlDisable);
--EXEC(@sqlEnable);

Now, with those indexes disabled, the simple delete operation took a lot less than a minute! Even in the case of our 21-million-row table, deleting 7 million rows took only 1:02 on my machine. Of course, after deleting the unwanted rows you need to re-enable the indexes, which took another minute, but all in all I'm happy with the result.

Copy Data to New Table and Drop Old Table

One other way of deleting rows, which I've used in combination with changing the table schema at the same time, is the following:

- create a temporary table into which you copy all the rows you want to keep (the schema of which I modified to meet our new needs)
- drop the original table
- rename the temporary table to the original table's name
- recreate all the indexes you had defined before

This is basically what SSMS generates for you when you change the schema of a table, except for the indexes – you have to recreate them yourself. As you can imagine, this approach becomes faster and creates a smaller transaction log footprint as the amount of data to delete grows. It won't have any benefit if you delete less than half of the table's rows.
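Sketched in T-SQL, the copy-and-swap steps look roughly like the following; the table name Message_new and the index at the end are illustrative, not from the original post:

```sql
-- Sketch of the copy-and-swap approach; adjust the column list here
-- if you are changing the schema at the same time.
SELECT *
INTO dbo.Message_new            -- hypothetical temporary table name
FROM dbo.Message
WHERE DateCreated >= '20120101'; -- keep only the rows you want

DROP TABLE dbo.Message;
EXEC sp_rename 'dbo.Message_new', 'Message';

-- recreate the indexes you had defined before, e.g.:
-- CREATE NONCLUSTERED INDEX IX_Message_DateCreated
--     ON dbo.Message (DateCreated);
```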
Choose the right tool for the job

There are quite a few other approaches and tips out there for speeding up your deletion process. Which of them will actually get your deletion job done faster depends a lot on your concrete situation. I had to experiment quite a bit to find the sweet spot, but now that I've seen a few approaches I'll be able to make a better decision in the future.
17. June 2013 15:51
17. June 2013 13:59
This is just a short post to draw your attention to a sweet tool I've just discovered: PNGGauntlet. It runs on Windows using the .NET 4 framework and is as easy to use as you could possibly wish. Also: it's completely free to use.

Convert Your Existing PNGs

For starters, we'll just convert some existing PNGs – can't really do any harm with that. In the Open File dialog, there's an option to filter for only .png files, and you can choose many of them at once: If you provide an output directory, the optimized files will be written to that destination. But: the tool also has the option to overwrite the original files, which is awesome if you use some kind of source control (and thus have a backup) and just want to get the job done.

During my first run, using the 8 processing threads my CPU has to offer, I got savings from 3% to 27%. PNGGauntlet also tells me that in total I saved 4.52 KB. If those were all the images on your web site, that would be a decent improvement, especially since you get it by investing about two minutes of your time and no extra expenses!

Real Savings

Running PNGGauntlet on the sprites that we use for Camping.Info, we were really surprised: out of 172 KB it saved us over 31% – a whole 54 KB! Now that's an improvement that will already be noticeable on a slightly slower connection. We'll definitely check the rest of our images for more savings.

Convert Other Image Formats

You can also choose to convert your images to the PNG format if you're in the mood. I tried converting all GIFs in the Orchard CMS admin theme to PNGs and went from a total of 24 KB for 20 files to less than 17 KB, with no loss of quality – an over 30% saving! Just beware that you'll need to change the file references in your project to pick up the new PNGs.

Roundup

Easy, fast and cheap (as in free) image optimization doesn't have to be magic anymore – today anyone can do it. Check out PNGGauntlet and try it for yourself. There's really no excuse not to!