by Oliver
31. October 2017 09:00
What are PWAs or Progressive Web Apps?

According to Jeff Burtoft, they are:

A development approach using a set of technologies that allows web content to deliver app-like experiences, including offline functionality, notifications, and device access.

What's special about PWAs?

- They are progressive – i.e. they work everywhere but get better with better devices
- They don't need an app store – they are just another (enhanced) web resource
- Try before you buy, install, uninstall – it's all fast and easy

What does a PWA consist of?

- It needs to be served over a secure protocol, e.g. HTTPS.
- It needs to have an app manifest, so user agents know about its requirements and can give access to desired features.
- The manifest is just another resource on the web – search engines and app stores can index and ingest them.
- The other big part of a PWA is the Service Worker.

What is a Service Worker?

In the MDN web docs we find this:

Service workers essentially act as proxy servers that sit between web applications, and the browser and network (when available).

We also find a few words on the roles which Service Workers are designed to fulfil:

They are intended to (amongst other things) enable the creation of effective offline experiences, intercepting network requests and taking appropriate action based on whether the network is available and updated assets reside on the server. They will also allow access to push notifications and background sync APIs.

[highlights are my own]

Device Support for PWAs

The browser compatibility for Service Workers is still pretty weak: Internet Explorer will never have it, Edge is currently implementing its support, and development in Safari's WebKit has really just begun – while Chrome, Firefox, and Opera already support pretty much the whole API.

What this means is that iOS users are currently completely cut off from the newest and hottest that's happening in the web landscape, while Windows users will soon be able to start using PWAs natively through UWP apps – more details on that e.g. here. In the meantime, Android users can already take full advantage of PWAs today thanks to the advanced implementation present in Chrome.

Resources to get started with PWAs

- Extensive documentation: the Service Worker API on the MDN
- The Service Worker Cookbook by Mozilla: https://serviceworke.rs/
- Adding a Service Worker to your site the easy way: Google Workbox
- Create a Progressive Web App in your browser: PWA Builder by Microsoft (was ManifoldJS)
- Auditing tool for PWAs (among others): Google Lighthouse – which is actually built into Chrome now under the Audits tab in the Chrome DevTools (great stuff!)

Happy coding!
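PS: To make the two building blocks above a bit more concrete, here's a minimal sketch. The file names and values are placeholders, not taken from any particular project. First, a bare-bones manifest.json, linked from the page's <head> via <link rel="manifest" href="/manifest.json">:

{
  "name": "My Progressive Web App",
  "short_name": "MyPWA",
  "start_url": "/",
  "display": "standalone",
  "icons": [{ "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }]
}

And second, the Service Worker registration, feature-detected so that browsers without support simply skip it:

// register the service worker script (here assumed to live at /sw.js)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(function (registration) {
      console.log('Service Worker registered with scope:', registration.scope);
    })
    .catch(function (error) {
      console.error('Service Worker registration failed:', error);
    });
}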
by Oliver
25. October 2017 10:37
A colleague of mine and I attended the .NET Developer Days conference this year. It was my third time participating; he was there for the first time. Here are links to the complete agenda and the pre-con workshops.

My personal conference schedule

Wednesday, October 18th
- Programming ASP.NET MVC Core (abstract) – Dino Esposito

Thursday, October 19th
- Surviving Microservices (abstract) – Michele Leroux Bustamante – Opening Keynote
- Build Web Apps the "Progressive" Way (300) (abstract) – Jeff Burtoft
- Async/Await and the Task Parallel Library: await headexplosion (400) (abstract) – Daniel Marbach
- Setting up a CI/CD pipeline for a Containerized Project in VSTS (200) (abstract) – Maciej Misztal – Sponsor Session
- Adding History to CRUD (400) (abstract) – Dino Esposito
- Software Architecture That Every Developer Should Know (300) (abstract) – Alon Fliess
- Building for the Future without Abandoning the Past (200) (abstract) – Jeff Burtoft

Friday, October 20th
- Performance that pays off (300) (abstract) – Szymon Kulec
- The Performance Investigator's Field Guide (300) (abstract) – Sasha Goldshtein
- Building Evolutionary Architectures (300) (abstract) – Neal Ford
- Securing .NET applications in Azure (300) (abstract) – Sebastian Solnica – Sponsor Session
- How I Built An Open-Source Debugger (300) (abstract) – Sasha Goldshtein
- Stories Every Developer Should Know (abstract) – Neal Ford – Closing Keynote

Random notes about the conference

- Predominant topic: Microservices are everywhere – this is my take on it
- Best session: The Performance Investigator's Field Guide – here I've shared my impressions
- Catering: Inter Bankiet delivered fantastic food and drinks, including lots of good coffee and sandwiches
- Event venue: EXPO XXI, Warsaw – a good venue for the conference, a few minutes' walk from Warszawa Zachodnia

Summary

The 2017 edition of the .NET Developer Days was a success. I still have to process my notes and all the input I've gathered there. I will update my personal conference schedule above with links to my own digest posts of the sessions where it makes sense. If you want to attend the 2018 edition, you will be able to catch a super early bird ticket from the beginning of December!

Happy conferencing!
by Oliver
8. September 2017 00:04
To set up API access to a site's Search Console, we first need to set up and configure a project in the Google Developer Console. Afterwards, we will connect that project to the Search Console to retrieve data from there using our own application.

In the Google Developer Console

1. Set up a project in your Google Developer Console (or reuse an existing project).
2. Go to the Service Accounts page and click [+] CREATE SERVICE ACCOUNT. Check "Furnish a new private key" to retrieve a JSON file with credentials that can be used from your application to impersonate that account.
3. Save the file that is being offered as a download. Its name has the form <ProjectId>-<Hash>.json, e.g. discoverize-b134aef29b12.json. This file is confidential – anyone with access to it can do whatever your Service Account is allowed to do! So treat it accordingly.
4. Go to the project dashboard and click [+] ENABLE APIS AND SERVICES.
5. Click on the Google Search Console API link and on the next page click ENABLE.

That's what had to be done in the Google Developer Console. Now we'll turn to the Google Search Console aka Webmaster Tools.

In the Google Search Console

1. Make sure you have a property set up in the Search Console for the target site. Let's assume you're targeting https://glamping.info. The protocol is of importance: http://glamping.info and https://glamping.info are two different properties.
2. In the settings drop-down menu, go to Users and Property Owners.
3. Click ADD A NEW USER (do not Manage Property Owners). Add the email address of the Service Account you've created in the Google Developer Console and give it Full permission. Note: do NOT add the Service Account as a Property Owner before adding it as a user, otherwise API access using it will NOT work. This is a bug – see this Stack Overflow answer for more details.
4. When you click Add, you'll see the new user on the Users and Property Owners page.

Now you're all set up to access your site's Search Console using the Google API.

Happy coding!
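PS: Here's a minimal sketch of querying the Search Analytics data from Node.js using Google's googleapis client package. The date range and dimensions are example values, and the exact call shape may vary between client versions:

// npm install googleapis
const { google } = require('googleapis');
// the credentials file downloaded from the Google Developer Console
const key = require('./discoverize-b134aef29b12.json');
// authenticate as the Service Account via a JWT
const auth = new google.auth.JWT(
  key.client_email,
  null,
  key.private_key,
  ['https://www.googleapis.com/auth/webmasters.readonly']
);
const webmasters = google.webmasters({ version: 'v3', auth });
webmasters.searchanalytics.query({
  siteUrl: 'https://glamping.info',
  requestBody: {
    startDate: '2017-08-01',
    endDate: '2017-08-31',
    dimensions: ['query'],
  },
}).then(res => console.log(res.data.rows));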
by Oliver
7. September 2017 00:13
For quite a long time, our team chose not to mess with our working TeamCity configurations, which compile, build, test, and deploy our code several times a day. Two weeks ago, we finally upgraded our last and at the same time biggest project, discoverize, to work with Visual Studio 2017. This allowed us to take a fresh look at the *cough* new C# language features that we had been ignoring for the last few years. But using any of them also meant having to upgrade our continuous integration infrastructure to support them. Here's what we've done.

Update all TeamCity configurations

If you use the MSBuild runner, now choose Microsoft Build Tools 2017 as the MSBuild version and set the MSBuild ToolsVersion to 15.0. This will lead to the error that no Build Agents can be found for the given configuration because a requirement is not met: MSBuildTools15.0_x64_Path cannot be found.

Install new Build Tools

Thanks to this Stack Overflow answer I quickly learned that I had to install the Build Tools for Visual Studio 2017. You can get the web installer from here. More information about the options in the tool can be found on this page. The first screen shows the possible workloads (as of August 2017) with "Web development build tools" selected, and the second screen shows the individual components selected (I actually unchecked all optional .NET Framework targeting packs).

Restart the TeamCity Agent Service

For TeamCity to realize that you've installed new tools on your build machine, you need to restart the Agent Service. You can find it e.g. after running services.msc from the Start menu –> Run command.

Missing AllRules.ruleset file

Now the compilation of our C# 6 project finally succeeded. There was still one problem: the build log contained warnings about an AllRules.ruleset file missing. I just went ahead and copied the file from my local machine (including the full folder hierarchy) because I could not find any information on where to find this file other than on my own machine (with Visual Studio installed). After that last step, the build log is finally black again.

Happy configuring!
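PS: To verify the new tool set directly on the build machine, you can kick off a build from the command line against MSBuild 15.0 – the solution name is a placeholder, and the path is where the Build Tools installer puts MSBuild by default:

"C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\MSBuild.exe" MySolution.sln /p:Configuration=Release /tv:15.0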
by Oliver
26. May 2016 22:12
We've experienced in the past that deploying new features before the weekend is not a good idea because potential bugs are not discovered in a timely manner and our reaction times to critical problems are also longer over the weekend than during the week. So for a couple of years now, we've stuck to our no-deployments-on-weekends policy, and every once in a while an exceptional deployment with either some "very important" new feature or "only small changes, nothing big" reminded us that it was really a good idea to deploy only on weekdays. There was one downside to this: we always missed out on getting fresh bits on our servers on Monday morning because someone had to trigger the deployment by doing a push to a dedicated Git repository before our TeamCity deployment configuration started to run at 2:20 am. Most of us usually don't work on Sundays so there usually was no-one to do that. Using a CRON expression in your trigger It turns out, TeamCity supports date/time triggers defined by a CRON expression: Since I don't use cron on a daily basis and don't speak cron fluently, I was glad to stumble upon Cron Maker which was a great help with getting the syntax right: Having a tool like this in my tool chain makes me confident and I won't avoid using CRON or its powerful expressions in the future! Happy cron'ing!
by Oliver
23. May 2016 09:43
As of today, 20 May 2016, Ionic 2…

- is still in beta, the last update being beta 6 from 25 April 2016
- sports the Ionic View app, which allows you to rapidly publish new versions of your Ionic or even Cordova app to Android and iOS devices [blog post here, 6 April 2016]
- is not yet supported as a target by Creator, a drag-&-drop prototyping tool for Ionic apps [blog post here, 31 March 2016]
- has Universal Windows Platform (UWP) app support, besides supporting Android and iOS [blog post here, 29 March 2016]
- can be used with Angular 2 but does not necessarily have to be
- prefers TypeScript as the development language and is itself written in TypeScript

A few valuable resources I stumbled upon while reading and following links:

- 260+ Ionic Framework Resources
- 60+ Ionic Framework 2 Resources
- 5 min Quickstart with Angular 2 (official site)

Now, go and have fun with Ionic 2!
by Oliver
14. November 2015 21:40
Get your own WebPageTest server and test agent up and running in minutes, not hours!

Motivation

The original documentation can be found here and here. Unfortunately, it's a bit vague in some parts, especially if you don't set up infrastructural pieces and cloud server instances on a daily basis. So here's a how-to guide to get you up and running as fast as possible.

Infrastructure overview

To run web page tests against your own private instance, we need:

- a WebPageTest server instance (the master)
- one or more WebPageTest test agents (the clients)

The master receives test jobs and delegates them to one of the clients. You can run tests from the web interface or through the API using the webpagetest node module. You might want to think about where in the world you want to spin up those virtual machines. The WPT server (master) can really be hosted anywhere you want, but the test agents' (clients') location should be chosen consciously, because their distance to the tested site's server will play a role in the results you will see later during testing.

How to set up the master (WPT server)

You need an Amazon AWS account to set this up quickly. If you haven't got one, you either quit here and set up your own server with the WebPageTest stack, or you go and create one.

Now, go to your AWS dashboard, to Instances –> Instances, and click "Launch Instance". On the next screen, go to Community AMIs, enter one of the IDs that can be found here – I chose ami-22cefd3f (eu-central-1) – and hit "Select". In step 2, you can choose a t2.micro instance. The WPT server does not need to be high performance – it only delegates test execution and gathers the results. It's when setting up the client (test agent) that we'll have to pay attention to the performance of the instance.

Now keep clicking Next until you reach "Step 6. Configure Security Group". Here we need to add a firewall rule that allows us to access our WPT server (master) through HTTP, otherwise no testing will be possible. Giving the security group a more descriptive name and description (❸) is optional but nice. In step 7, review your settings if you want, then hit "Launch".

AWS will now want to assign an (SSH) key pair to this instance. In case you have an existing key pair, you can re-use it. If you're doing this for the first time, you won't have any existing key pairs to choose from and will have to create a new one. The "Launch Instances" button will activate only after you've downloaded your private key (❸). Clicking ❷ gets you to the Instances overview (which was empty at the beginning), where you'll find the public IP address and DNS entry of your instance.

Congratulations, you've successfully completed the setup of the WPT server (master)! If you now open http://your.instance.ip you should see the WebPageTest UI.

To log into your instance via SSH, follow one of the guides here. In short: either use ssh from the command line, available on all Linux systems and even on Windows if you have Git installed (e.g. in C:\Program Files\Git\usr\bin):

ssh -i wpt-server.pem ubuntu@[public-ip|public-dns]

Or, on Windows, use PuTTY. In this case you'll first have to generate a PuTTY-compatible private key file from your *.pem file, and then you can connect through PuTTY. Here's how to do that.

How to set up the client (WPT test agent)

Now we need to set up at least one test agent to actually execute some tests.
There's a long list of pre-configured, regularly updated Windows AMIs with all software installed that's needed to execute tests in the documentation. To get started quickly, pick one that contains all major browsers and is located in your favorite region. In this guide, we're going to use ami-54291f49 (IE11/Chrome/Firefox/Safari) in region "eu-central (Frankfurt)". Basically, we repeat the steps from the master setup, but now using the test agent AMI. In step 2, when choosing an Instance Type, we'll now have to ensure that our agent will deliver consistent results. This performance review recommends the following choices (prices will vary by region, the ones displayed here were for US East N. Virginia), quoting: If you’re running just a couple tests per hour, on small HTTP sites, a t2.micro will be okay ($13/month) If you’re running just a couple tests per hour, on large or secure sites, you’ll need to use a t2.medium ($52/month) If you’re running lots of tests per hour, you can’t use t2’s – the most efficient agent will be a c3.large ($135/month) In step 3, we have to configure our test agent with the following information: where to find the WPT server (master): use the public IP address or DNS name what's the location (name) of this agent: a string used in the locations.ini of the master To be honest, I haven't quite wrapped my head around the auto-scaling feature of WPT. That's why we set up a single location ("first") manually that this client will be identified with. In the user data field under Advanced Details we enter: wpt_server=52.29.your.ip wpt_location=first Now, either click your way through the remaining steps or jump to "Review and Launch" and launch your test agent instance. The key pair dialog will pop up again, and now you can choose your existing key "wpt-server" to assign to that instance. You won't use it to connect to it, anyway, because the default connection type to a Windows instance is RDP for which a firewall rule was automatically added in step 6. After launching, a link will be available with instructions on how to connect to that Windows instance, but you shouldn't need to do that. Connecting master and client One step is left: we have to configure the master to know which test agents it can use. This part was actually one of the most tedious bits in the setup because juggling several configuration files with lots of options and entries to make them do what you want them to do is never easy. For the manual management of test agents we need to do the following: Log into the master, e.g. ssh -i wpt-server.pem ubuntu@pu.bl.ic.ip Go to the folder /var/www/webpagetest/www/settings/ Edit locations.ini to contain these blocks (sudo nano locations.ini): [locations]
1=first
default=first
[first]
1=first_wptdriver
2=first_ie
label="My first agent"
default=first_wptdriver
[first_wptdriver]
browser=Chrome,Firefox,Safari
[first_ie]
browser=IE 11
In settings.ini, at the very end, set ec2_locations=0 to hide the predefined EC2 locations from the location dropdown in the browser.
Finally, restart NGINX: sudo service nginx restart
Now, if you go to http://your.public.ip again, you'll see "My first agent" in the location dropdown and "Chrome", "Firefox", and "Safari (Windows)" in the browser dropdown. I didn't try to find out how to show "IE 11" as well, but at this moment I didn't care. (You might have to wait a few moments before the location lists update after the NGINX restart.)
You can now run your first test!
After some 10-15 seconds the test should start, and a few moments later the first results should show. Congratulations!
Automating tests via the WebPageTest API
If you've tried to run WebPageTests in an automated way, you'll without a doubt have found the webpagetest node module. With your private server and test agent set up, you'll now need to dispatch your tests like so:
webpagetest test http://my.example.com \
    --server http://<master_ip> \
    --key <api_key> \
    --location first_wptdriver:Chrome
The location argument refers to the definitions in the locations.ini file. The API key can be found in the keys.ini file on the master.
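If you'd rather dispatch tests programmatically than via the CLI, the same module exposes a JavaScript API – a minimal sketch, with server IP and API key as placeholders just like above:

// npm install webpagetest
var WebPageTest = require('webpagetest');
var wpt = new WebPageTest('http://<master_ip>', '<api_key>');
// run a test against our private agent location
wpt.runTest('http://my.example.com', { location: 'first_wptdriver:Chrome' }, function (err, result) {
  if (err) return console.error(err);
  console.log('Test submitted, id:', result.data.testId);
});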
We run our test from within TeamCity using a custom script, but that's a topic for another post!
Happy WebPageTesting!
by Oliver
6. November 2015 21:32
This week we started to look into the page load performance at Camping.Info as well as on our discoverize portals. After some initial testing and measuring, we came up with a list of actions that should all speed up the user-perceived page load times.

The Problem

Today we'll take a look at this one request: http://d2wy8f7a9ursnm.cloudfront.net/bugsnag-2.min.js. For your info, Bugsnag is an exception tracing and management solution that I can seriously recommend having a look at. Anyway, in their docs the Bugsnag team suggests this:

Include bugsnag.js from our CDN in the <head> tag of your website, before any other <script> tags.

That's what we initially did. It turns out, though, that the request for the Bugsnag JavaScript library is quite costly, especially looking at the DNS lookup time of 265 ms. A waterfall chart by GTmetrix showed over half a second in total for a script of less than 3 kB in size! And looking at the request for WebResource.axd?d= three lines below in that chart, that resource was loaded faster than the DNS lookup for Bugsnag took.

Improve, Improve

So let's just load the Bugsnag library from our own server and save that longish DNS lookup. But wait, we can do even better! We already load a bunch of JavaScript files as a bundle inside master_CD8711… (using the great SquishIt library, by the way), so we'll just prepend a copy of bugsnag.js to that bundle and save a whole request altogether! Now, that's great. And that's exactly what the crew at Bugsnag recommends for advanced usages:

If you'd like to avoid an extra blocking request, you can include the javascript in your asset compilation process so that it is inlined into your existing script files. The only thing to be sure of is that Bugsnag is included before your onload handlers run. This is so that we can report stacktraces reliably.

Disclaimer

There's one drawback to this solution: hosting your own version, you might not get the latest and greatest bits from Bugsnag. I've quickly brainstormed how to fix this issue, and one way to guarantee a fresh (enough) version would be to check for the current version during your deployment process on your continuous integration server and throw an error if it's newer than the one that currently resides in our project (see the sketch below).

Also, this is just one of several fixes to noticeably improve page load performance – we have bigger fish to catch looking at the other bars in that chart! Now, let's get back to performance tuning!

Oliver
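PS: Such a deployment-time freshness check could be as simple as comparing a hash of the CDN copy against the file in the repository. A rough sketch in Node.js – the local path is, of course, project-specific:

// fail the build if the CDN version of bugsnag differs from our bundled copy
var https = require('https');
var crypto = require('crypto');
var fs = require('fs');
function sha1(buffer) {
  return crypto.createHash('sha1').update(buffer).digest('hex');
}
// hypothetical location of the copy checked into our repository
var localHash = sha1(fs.readFileSync('Scripts/bugsnag-2.min.js'));
https.get('https://d2wy8f7a9ursnm.cloudfront.net/bugsnag-2.min.js', function (res) {
  var chunks = [];
  res.on('data', function (chunk) { chunks.push(chunk); });
  res.on('end', function () {
    if (sha1(Buffer.concat(chunks)) !== localHash) {
      console.error('Bundled bugsnag.js is outdated - please update!');
      process.exit(1);
    }
  });
});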
by Oliver
27. June 2015 12:46
In the process of making Camping.Info more mobile-friendly, I've needed to move pieces of HTML around in the DOM time and again. At last, I've come up with two little helper functions that I wrapped into a little jQuery plugin and want to share in this post.

When To Use Move-Restore

The DOM tree of every page on Camping.Info is quite large and often convoluted. At least partly, this is a consequence of the many UserControls we use to build our pages on the server using ASP.NET WebForms. To achieve a more mobile-friendly layout of these pages, we needed to position certain elements differently, hide some and show others, and in the end also move around some critical parts to fit the mobile design. Much of this work could be, and has been, done by our designer via CSS, but for the rest we need to touch the DOM tree. Move-Restore proves especially helpful when a user agent switches between two different layouts of your site, e.g. the desktop and the mobile layout (in case you have just those two), because it easily allows you to restore elements you previously moved around.

How to Use Move-Restore

Just call $("#move-me").moveTo("#target") when you want to move something, e.g. in your mobile layout, and at a later point, e.g. when switching back to your desktop layout, call $("#move-me").restore(). That's it. I've also put together a fiddle to show how to use the plugin here. Also, have a look at the usage.html in the below gist.

How It Works

The beauty of this plugin, in my opinion, lies in the fact that you don't have to manually keep track of where you took an element from in order to restore it later. Internally, the plugin inserts a <script> element in place of the moved element. The (very likely) unique id of that script element is stored as a datum on the moved element and is later retrievable when we need to restore the element to its original position.

Currently, at revision 2 of the gist, there's one option you can tweak to match your scenario: the jQuery method the plugin should use to move the selected element(s) around. By default, move-restore uses appendTo, but there are other sensible options, e.g. prependTo, insertAfter, or insertBefore. Just pass the one that fits your needs as the second, optional argument to moveTo.

Use Move-Restore at Your Convenience

I invite everyone to try this handy little plugin and am open to feedback.

Happy coding!
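PS: The actual implementation lives in the gist embedded above; for illustration, here's a minimal sketch of how the two functions can be built along the lines just described (names and details are mine and may differ from the gist):

(function ($) {
  var counter = 0;
  $.fn.moveTo = function (target, method) {
    method = method || 'appendTo'; // or prependTo, insertAfter, insertBefore
    return this.each(function () {
      var $el = $(this);
      var id = 'move-restore-placeholder-' + counter++;
      // leave an invisible marker at the element's original position
      $('<script/>', { id: id, type: 'text/x-placeholder' }).insertBefore($el);
      $el.data('moveRestoreId', id);
      $el[method](target);
    });
  };
  $.fn.restore = function () {
    return this.each(function () {
      var $el = $(this);
      var id = $el.data('moveRestoreId');
      if (!id) return; // this element was never moved
      // swap the marker back out for the element itself
      $('#' + id).replaceWith($el);
      $el.removeData('moveRestoreId');
    });
  };
})(jQuery);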
by Oliver
12. November 2014 13:42
This is a short overview post on OWIN, which (I quote from its homepage) […] defines a standard interface between .NET web servers and web applications. The goal of the OWIN interface is to decouple server and application, encourage the development of simple modules for .NET web development, and, by being an open standard, stimulate the open source ecosystem of .NET web development tools. In other words, the OWIN specification aims to put an end to monolithic solutions like ASP.NET WebForms or even ASP.NET MVC in favor of creating smaller, more lightweight application components that can be chained together to configure an application that does exactly what the author intends it to do – and nothing more. In addition, OWIN simplifies development of alternative web servers that can substitute IIS, e.g. Nowin, or Helios, a promising .NET server alternative on top of IIS but without the heavy, 15-year old System.Web monolith (here's a good review of Helios by Rick Strahl). Katana is a Microsoft project that contains OWIN-compatible components… […] for building and hosting OWIN-based web applications. For an overview of Katana look here. The Katana architecture can be found on the right and promotes exchangeability of components on each layer. It turns out that ASP.NET vNEXT (github repo here) continues the work that has been done by Microsoft in that direction. Here's an enlightening quote by David Fowler, development lead on the ASP.NET team: vNext is the successor to Katana (which is why they look so similar). Katana was the beginning of the break away from System.Web and to more modular components for the web stack. You can see vNext as a continuation of that work but going much further (new CLR, new Project System, new http abstractions). The future of ASP.NET looks bright – especially for developers! Check out my last post on ASP.NET vNEXT and Docker, too.