Run Web Application Deployment in TeamCity – Only on Weekdays

by Oliver 26. May 2016 22:12

We've experienced in the past that deploying new features before the weekend is not a good idea: potential bugs are not discovered in a timely manner, and our reaction times to critical problems are longer over the weekend than during the week. So for a couple of years now, we've stuck to our no-deployments-on-weekends policy, and every once in a while an exceptional deployment with either some "very important" new feature or "only small changes, nothing big" reminded us that it really was a good idea to deploy only on weekdays.

There was one downside to this: we always missed out on getting fresh bits onto our servers on Monday morning, because someone had to trigger the deployment by pushing to a dedicated Git repository before our TeamCity deployment configuration started to run at 2:20 am. Most of us usually don't work on Sundays, so there usually was no-one to do that.

Using a CRON expression in your trigger

It turns out, TeamCity supports date/time triggers defined by a CRON expression. Since I don't use cron on a daily basis and don't speak cron fluently, I was glad to stumble upon Cron Maker, which was a great help with getting the syntax right. Having a tool like this in my tool chain makes me confident, and I won't avoid using CRON or its powerful expressions in the future!
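For our weekdays-only schedule at 2:20 am, the expression – in the Quartz-style format that TeamCity's scheduled trigger expects: seconds, minutes, hours, day of month, month, day of week – would look something like the following. Treat it as a sketch and double-check it against your TeamCity version:

    0 20 2 ? * MON-FRI

Happy cron'ing!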

What's New In Ionic 2

by Oliver 23. May 2016 09:43

As of today, 20 May 2016, Ionic 2…

- is still in beta status, the last update being beta 6 on 25 April 2016
- sports the Ionic View app, which allows you to rapidly publish new versions of your Ionic or even Cordova app to Android and iOS devices [blog post here, 6 April 2016]
- is not yet supported as a target by Creator, a drag-&-drop prototyping tool for Ionic apps [blog post here, 31 March 2016]
- has Windows Universal Platform app support, besides supporting Android and iOS [blog post here, 29 March 2016]
- can be used with Angular 2 but does not necessarily have to be
- prefers TypeScript as its development language and is itself written in TypeScript

A few valuable resources I stumbled upon while reading and following links:

- 260+ Ionic Framework Resources
- 60+ Ionic Framework 2 Resources
- 5 min Quickstart with Angular 2 (official site)

Now, go and have fun with Ionic 2!

How to Quickly Set Up a Private WebPageTest Server And Client

by Oliver 14. November 2015 21:40

Get your own WebPageTest server and test agent up and running in minutes, not hours!

Motivation

The original documentation can be found here and here. Unfortunately, it's a bit vague in some parts, especially if you don't set up infrastructural pieces and cloud server instances on a daily basis. So here's a how-to guide to get you up and running as fast as possible.

Infrastructure overview

To run web page tests against your own private instance, we need:

- a WebPageTest server instance (the master)
- one or more WebPageTest test agents (the clients)

The master receives test jobs and delegates them to one of the clients. You can run tests from the web interface or using the API through the webpagetest node module. You might want to think about where in the world you want to spin up those virtual machines. The WPT server (master) can really be hosted anywhere you want, but the test agents' (clients') location should be chosen consciously, because their distance to the tested site's server will play a role in the results you will see later during testing.

How to set up the master (WPT server)

You need an Amazon AWS account to set this up quickly. If you haven't got one, you either quit here and set up your own server with the WebPageTest stack, or you go and create one.

Now, go to your AWS dashboard, to Instances –> Instances, and click "Launch Instance". On the next screen, go to Community AMIs, enter one of the ids that can be found here – I chose ami-22cefd3f (eu-central-1) – and hit "Select". In step 2, you can choose a t2.micro instance. The WPT server does not need to be high performance – it only delegates test execution and gathers the results. It's when setting up the client (test agent) that we'll have to pay attention to the performance of the instance.

Now, you keep clicking Next until you reach "Step 6. Configure Security Group". Here we need to add a firewall rule that allows us to access our WPT server (master) through HTTP, otherwise no testing will be possible. Giving the security group a more descriptive name and description is optional but nice. In step 7, review your settings if you want, then hit "Launch".

AWS will now want to assign a (ssh) key pair to this instance. In case you have an existing key pair, you can re-use that. In case you're doing this for the first time, you won't have any existing key pairs to choose from and will have to create a new one. The "Launch Instances" button will activate only after you've downloaded your private key. From there, you get back to the Instances overview that was empty at the beginning, where you'll find the public IP address and DNS entry of your instance.

Congratulations, you've successfully completed the setup of the WPT server (master)! If you now open http://your.instance.ip you should see the WebPageTest UI.

To log into your instance via SSH, follow one of the guides here. In short: either use ssh from the command line, available on all Linuxes and even on Windows if you have Git installed (e.g. in C:\Program Files\Git\usr\bin):

    ssh -i wpt-server.pem ubuntu@[public-ip|public-dns]

Or, on Windows, use PuTTY. In this case you'll first have to generate a PuTTY-compatible private key file from your *.pem file, and then you can connect through PuTTY. Here's how to do that.

How to set up the client (WPT test agent)

Now, we need to set up at least one test agent to actually execute some tests.
There's a long list of pre-configured, regularly updated Windows AMIs with all the software installed that's needed to execute tests in the documentation. To get started quickly, pick one that contains all major browsers and is located in your favorite region. In this guide, we're going to use ami-54291f49 (IE11/Chrome/Firefox/Safari) in region "eu-central (Frankfurt)".

Basically, we repeat the steps from the master setup, but now using the test agent AMI. In step 2, when choosing an Instance Type, we'll now have to ensure that our agent will deliver consistent results. This performance review recommends the following choices (prices will vary by region, the ones displayed here were for US East N. Virginia), quoting:

- If you're running just a couple tests per hour, on small HTTP sites, a t2.micro will be okay ($13/month)
- If you're running just a couple tests per hour, on large or secure sites, you'll need to use a t2.medium ($52/month)
- If you're running lots of tests per hour, you can't use t2's – the most efficient agent will be a c3.large ($135/month)

In step 3, we have to configure our test agent with the following information:

- where to find the WPT server (master): use the public IP address or DNS name
- what's the location (name) of this agent: a string used in the locations.ini of the master

To be honest, I haven't quite wrapped my head around the auto-scaling feature of WPT. That's why we set up a single location ("first") manually that this client will be identified with. In the user data field under Advanced Details we enter:

    wpt_server=52.29.your.ip
    wpt_location=first

Now, either click your way through the remaining steps or jump to "Review and Launch" and launch your test agent instance. The key pair dialog will pop up again, and now you can choose your existing key "wpt-server" to assign to that instance. You won't use it to connect to it anyway, because the default connection type to a Windows instance is RDP, for which a firewall rule was automatically added in step 6. After launching, a link will be available with instructions on how to connect to that Windows instance, but you shouldn't need to do that.

Connecting master and client

One step is left: we have to configure the master to know which test agents it can use. This part was actually one of the most tedious bits in the setup, because juggling several configuration files with lots of options and entries to make them do what you want them to do is never easy. For the manual management of test agents we need to do the following:

1. Log into the master, e.g. ssh -i wpt-server.pem ubuntu@pu.bl.ic.ip
2. Go to the folder /var/www/webpagetest/www/settings/
3. Edit locations.ini to contain these blocks (sudo nano locations.ini):

    [locations]
    1=first
    default=first

    [first]
    1=first_wptdriver
    2=first_ie
    label="My first agent"
    default=first_wptdriver

    [first_wptdriver]
    browser=Chrome,Firefox,Safari

    [first_ie]
    browser=IE 11

4. In settings.ini, at the very end, set ec2_locations=0 to hide the predefined EC2 locations from the location dropdown in the browser.
5. Restart NGINX: sudo service nginx restart

Now, if you go to http://your.public.ip again, you'll see "My first agent" in the location dropdown and "Chrome", "Firefox", and "Safari (Windows)" in the browser dropdown. I didn't try to find out how to show "IE 11" as well, but at this moment I didn't care. (You might have to wait a few moments before the location lists update after the NGINX restart.) You can now run your first test!
After some 10-15 seconds the test should start running, and a few moments later the first results should show. Congratulations!

Automating tests via the WebPageTest API

If you've tried to run WebPageTests in an automated way, you'll without a doubt have found the webpagetest node module. With your private server and test agent set up, you'll now need to dispatch your tests like so:

    webpagetest test http://my.example.com \
      --server http://<master_ip> \
      --key <api_key> \
      --location first_wptdriver:Chrome

The location argument refers to the definitions in the locations.ini file. The API key can be found in the keys.ini file on the master.
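If you'd rather dispatch tests from code than from the command line, the node module also exposes a programmatic API. Here's a minimal sketch – the server URL, API key placeholder, and how you handle the response are assumptions to adapt to your own setup:

    // dispatch a test against the private WPT instance (sketch)
    var WebPageTest = require('webpagetest');
    var wpt = new WebPageTest('http://<master_ip>', '<api_key>');

    wpt.runTest('http://my.example.com', { location: 'first_wptdriver:Chrome' }, function (err, result) {
        if (err) return console.error('Test submission failed:', err);
        // the response carries the test id you can later poll results with
        console.log('Test submitted:', result);
    });

We run our test from within TeamCity using a custom script, but that's a topic for another post!

Happy WebPageTesting!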

Page Load Performance – Avoid Blocking Requests on External Domains

by Oliver 6. November 2015 21:32

This week we started to look into the page load performance at Camping.Info as well as on our discoverize portals. After some initial testing and measuring, we came up with a list of actions that should all speed up the user-perceived page load times.

The Problem

Today we'll take a look at this one request: http://d2wy8f7a9ursnm.cloudfront.net/bugsnag-2.min.js. For your info, Bugsnag is an exception tracing and management solution that I can seriously recommend having a look at. Anyway, in their docs the Bugsnag team suggests this:

Include bugsnag.js from our CDN in the <head> tag of your website, before any other <script> tags.

That's what we initially did. It turns out, though, that the request for the bugsnag javascript library is quite costly, especially looking at the DNS lookup time of 265ms we saw in a waterfall chart by GTmetrix. That's over half a second for a script of less than 3kB in size! The request for WebResource.axd?d= three lines below it in that chart shows that that resource was loaded faster than the DNS lookup for bugsnag took.

Improve, Improve

So let's just load the bugsnag library from our own server and save that longish DNS lookup. But, wait, we can even do better than this! We already load a bunch of javascript files as a bundle inside master_CD8711… (using the great SquishIt library, by the way), so we'll just prepend a copy of bugsnag to that bundle and save a whole request altogether! Now, that's great. And that's exactly what the crew at Bugsnag recommends for advanced usages:

If you'd like to avoid an extra blocking request, you can include the javascript in your asset compilation process so that it is inlined into your existing script files. The only thing to be sure of is that Bugsnag is included before your onload handlers run. This is so that we can report stacktraces reliably.

Disclaimer

There's one drawback to this solution: hosting your own version, you might not get the latest and greatest bits from Bugsnag. I've quickly brainstormed how to fix this issue, and one way to guarantee a fresh (enough) version would be to check for the current version during your deployment process on your continuous integration server and throw an error if it's newer than the one that currently resides in our project.
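Here's a minimal Node sketch of that idea. The CDN URL is the one from above, but the local path is hypothetical, and comparing the full file contents instead of a parsed version number is a simplification – adapt both to your own build:

    // check-bugsnag.js - fail the CI build when our bundled bugsnag.js falls behind the CDN (sketch)
    var https = require('https');
    var fs = require('fs');

    var CDN_URL = 'https://d2wy8f7a9ursnm.cloudfront.net/bugsnag-2.min.js';
    var LOCAL_COPY = 'Scripts/vendor/bugsnag-2.min.js'; // hypothetical path inside the project

    https.get(CDN_URL, function (res) {
        var cdnScript = '';
        res.on('data', function (chunk) { cdnScript += chunk; });
        res.on('end', function () {
            var localScript = fs.readFileSync(LOCAL_COPY, 'utf8');
            if (cdnScript.trim() !== localScript.trim()) {
                console.error('Bundled bugsnag.js differs from the CDN version - please update!');
                process.exit(1); // a non-zero exit code fails the build step
            }
            console.log('Bundled bugsnag.js is up to date.');
        });
    });

Also, this is just one of several fixes to noticeably improve page load performance – we have bigger fish to catch looking at the rest of that waterfall chart!

Now, let's get back to performance tuning!

Oliver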

Move-Restore – A Little jQuery Plugin to Help Make Your Existing HTML More Mobile-Friendly

by Oliver 27. June 2015 12:46

In the process of making Camping.Info more mobile-friendly, I've needed to move around pieces of HTML in the DOM time and again. At last, I've come up with two little helper functions that I wrapped into a little jQuery plugin that I want to share in this post.

When To Use Move-Restore

The DOM tree on every page on Camping.Info is quite large and often convoluted. At least partly, this is a consequence of the many UserControls we use to build our pages on the server using ASP.NET WebForms. To achieve a more mobile-friendly layout of these pages, we needed to position certain elements differently, hide some and show others, and in the end also move around some critical parts to fit the mobile design. Much of this work could be – and has been – done by our designer via CSS, but for the rest we need to touch the DOM tree. Move-Restore proves especially helpful when a user-agent switches between two different layouts of your site, e.g. the desktop and the mobile layout (in case you have just those two), because it easily allows you to restore elements you previously moved around.

How to Use Move-Restore

Just call $("#move-me").moveTo("#target") when you want to move something, e.g. in your mobile layout, and at a later point, e.g. when switching back to your desktop layout, call $("#move-me").restore(). That's it. I've also put together a fiddle to show how to use the plugin here. Also, have a look at the usage.html in the below gist.

How It Works

The beauty of this plugin, in my opinion, lies in the fact that you don't have to manually keep track of where you got an element from to later restore it. Internally, the plugin inserts a <script> element in place of the moved element. The (most probably) unique id of that script element is stored as a datum on the moved element and is later retrievable when we need to restore the element to its original position.

Currently, at revision 2 of the gist, there's one option you can tweak to match your scenario: the jQuery method the plugin should use to move the selected element(s) around. By default, move-restore uses appendTo, but there are other sensible options, e.g. prependTo, insertAfter, or insertBefore. Just pass the one that fits your needs as the second, optional argument to moveTo.
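Since the gist itself isn't embedded here, this is a minimal sketch of the idea described above – not the original plugin code, and the internal names are made up:

    // move-restore sketch: leave a placeholder behind so the element can find its way back
    (function ($) {
        var counter = 0;

        $.fn.moveTo = function (target, method) {
            method = method || 'appendTo'; // or 'prependTo', 'insertAfter', 'insertBefore'
            return this.each(function () {
                var $el = $(this);
                var placeholderId = 'move-restore-' + (++counter);
                // an inert <script> element marks the original position
                $('<script type="text/placeholder" id="' + placeholderId + '"><\/script>').insertBefore($el);
                $el.data('moveRestoreId', placeholderId);
                $el[method](target);
            });
        };

        $.fn.restore = function () {
            return this.each(function () {
                var $el = $(this);
                var $placeholder = $('#' + $el.data('moveRestoreId'));
                $el.insertBefore($placeholder); // put the element back where it came from
                $placeholder.remove();
                $el.removeData('moveRestoreId');
            });
        };
    })(jQuery);

Use Move-Restore at Your Convenience

I invite everyone to try and use this handy little plugin and am open for feedback.

Happy coding!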

ASP.NET, OWIN and Katana

by Oliver 12. November 2014 13:42

This is a short overview post on OWIN, which (I quote from its homepage) […] defines a standard interface between .NET web servers and web applications. The goal of the OWIN interface is to decouple server and application, encourage the development of simple modules for .NET web development, and, by being an open standard, stimulate the open source ecosystem of .NET web development tools.

In other words, the OWIN specification aims to put an end to monolithic solutions like ASP.NET WebForms or even ASP.NET MVC in favor of creating smaller, more lightweight application components that can be chained together to configure an application that does exactly what the author intends it to do – and nothing more. In addition, OWIN simplifies the development of alternative web servers that can substitute IIS, e.g. Nowin, or Helios, a promising .NET server alternative on top of IIS but without the heavy, 15-year-old System.Web monolith (here's a good review of Helios by Rick Strahl).

Katana is a Microsoft project that contains OWIN-compatible components […] for building and hosting OWIN-based web applications. For an overview of Katana look here. The Katana architecture promotes exchangeability of components on each layer.

It turns out that ASP.NET vNext (github repo here) continues the work that has been done by Microsoft in that direction. Here's an enlightening quote by David Fowler, development lead on the ASP.NET team:

vNext is the successor to Katana (which is why they look so similar). Katana was the beginning of the break away from System.Web and to more modular components for the web stack. You can see vNext as a continuation of that work but going much further (new CLR, new Project System, new http abstractions).

The future of ASP.NET looks bright – especially for developers! Check out my last post on ASP.NET vNext and Docker, too.

Getting Started With Meteor On Windows – Or Not

by Oliver 2. October 2014 18:30

Recently, we at teamaton decided to take a break from Camping.Info and discoverize to dive into something new. We wanted to breathe some fresh air and open our eyes and minds for a world outside of our day-to-day development stack of ASP.NET. During the process we also wanted to check out some tools, frameworks, and environments that we'd heard of or read about but hadn't found the time to really take a closer look at. One of these was Meteor.

What's Meteor about

The elevator pitch on meteor.com reads like this:

Meteor is an open-source platform for building top-quality web apps in a fraction of the time, whether you're an expert developer or just getting started.

I really encourage you, dear Reader, to check out the Meteor site and take a look at what's waiting for you. I really liked what I found there: pure JavaScript, live page updates, hot code pushes, fully self-contained application bundles, and more. That's stuff .NET developers aren't used to.

Setting up Meteor on Windows

[Note: Before proceeding, please have a look at Meteor's Supported Platforms page just to make sure you're not missing out on anything new.]

This part took me about 2h to get right, so I took notes to save my colleagues or anyone else interested some precious time. I basically followed this comprehensive guide (and I encourage you to do the same) but made a few adjustments. [A word of warning: running my first out-of-the-box sample failed, and that's where I stopped.]

1. Install Git and add C:\Program Files (x86)\Git\bin to your PATH variable
2. Download and install VirtualBox > 4.3.12 from here: https://www.virtualbox.org/wiki/Downloads
3. Download and install Vagrant >= 1.6.3 from here: http://www.vagrantup.com/downloads.html
4. Install the ChefDK from here: http://downloads.getchef.com/chef-dk/windows/
5. Now start a command line and run:

    vagrant plugin install vagrant-berkshelf --plugin-version ">= 2.0.1"
    vagrant plugin install vagrant-vbguest
    vagrant plugin install vagrant-omnibus

6. Create C:\vagrant and cd into it, then call:

    git clone git://github.com/shoebappa/vagrant-meteor-windows.git meteor

7. Edit C:\vagrant\meteor\Vagrantfile to use a new virtual machine image (the old one wouldn't work with Meteor's current version):
   - In line 8, set: config.vm.box = "trusty64"
   - In line 10, set: config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
   - In line 36, use a different name for your first app
8. Go to C:\vagrant\meteor and initiate Vagrant (will take a while): vagrant up
9. Now start your app by:

    cd C:\vagrant\meteor
    vagrant ssh
    cd /vagrant/mymeteorapp
    mrt run

Running the app threw a NullReferenceException (or something like it) at me that was triggered from somewhere inside a javascript file – I don't remember exactly where. I tried fiddling around with it for another hour or so, especially when I read that Meteorite (mrt) is no longer needed with Meteor versions >= 0.9.0. But without success.

What to do now?

I shied away from it, but you might just feel like setting up a dev environment on a Linux OS and simply running the few lines from the Meteor home page:

    $ curl https://install.meteor.com/ | sh

It seems too simple to be true. But wait! Official Windows support is coming to Meteor – sometime in the near or farther future. So please check if the future isn't already here!

What we did

We went to grab Angular.js and haven't looked back.

Happy coding!

Sending DateTime Instances From a .NET Server to a JavaScript Client and vice-versa

by Oliver 20. September 2014 21:47

At teamaton, we're currently developing our own time tracking tool that we'll be using instead of KeepTempo as soon as it's good enough. We even plan on making it accessible to the public later, but that's a different story. We chose Angular.js to develop the frontend and now want to synchronize our time tracking records with our ASP.NET WebAPI backend. One property of such a record is the date and time it was created at, createdUtc. Let's look at how we can send those date-time values to the server and back again.

Some Background on Date and DateTime

JavaScript Date instances are seeded at 01/01/1970 00:00:00 and can be instantiated by passing to the Date() constructor the number of milliseconds that have passed since that moment in time. Date.now() will directly output this number; at the moment of writing these words, it returned the value 1 411 240 284 042. This value is what all other representations of a given Date instance will be based on.

In .NET we have the DateTime type, and its seed value is 01/01/0001 00:00:00. Internally, DateTime instances count time in Ticks, where one tick equals 100 nanoseconds, i.e. 0.000 000 1 seconds, thus delivering a theoretical precision of 10000 times that of the JavaScript equivalent. When writing these words, the current number of Ticks of DateTime.Now was 635 468 449 058 588 764 (I use LINQPad to execute C# code snippets).

How to Convert JavaScript Date to .NET DateTime

So how do we convert those values into each other? If we decide to send the number of milliseconds in a given JavaScript Date instance to our .NET server, we can reconstruct an equivalent DateTime instance by seeding it with 01/01/1970 and just adding the milliseconds we got from the client to it:

    Record.CreatedUtc = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddMilliseconds(milliseconds);

Going the other way, we actually need to get hold of a TimeSpan instance, which has the property TotalMilliseconds that gives us what we want. We'll do that by subtracting a DateTime instance representing the 01/01/1970 from the DateTime instance we want to send to the client:

    (Record.CreatedUtc - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalMilliseconds

The code looks a bit ugly because we take care to work with UTC dates so that our code will run the same for clients and servers around the world. It's a very good idea to work with UTC dates internally and only convert them to local dates when you want to display them somewhere.
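For completeness, here's the JavaScript side of that round-trip as a minimal sketch – the variable names are made up, and the example value is the Date.now() output from above:

    // client side: milliseconds since 1970-01-01 are what we put on the wire
    var record = { createdUtc: Date.now() };

    // ...and a milliseconds value coming back from the server turns into a Date again:
    var fromServer = 1411240284042;        // e.g. the TotalMilliseconds value from the server
    var createdUtc = new Date(fromServer); // Date() accepts ms since 1970-01-01 00:00:00 UTC
    console.log(createdUtc.toISOString()); // "2014-09-20T19:11:24.042Z"

… or Just Use Date Strings

There's another way to transfer date objects between JavaScript and your server: using a standardized string representation that both sides will be able to generate and parse. In JavaScript, you would use Date.toISOString(), and in .NET DateTime.ToString("O") (see the MSDN for the O format string).

Happy coding!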

SSL Certificate Jungle – Multiple Domains, Wildcards, and Both

by Oliver 10. September 2014 22:05

Recently, we've decided to add https:// support to Camping.Info. Since we've been running our application servers behind an NGINX reverse proxy for a while now, the natural choice in our setup was to terminate the secured connections at the NGINX server, which has CPU usage values somewhere between 1% and 5%. This is also called SSL offloading and will allow us to keep all the SSL setup and potential runtime overhead off of our application servers.

Certificate Options

On Camping.Info, we serve almost all static content from the domain images-camping.info. Since we want to secure the whole site, we need to have valid SSL certificates for images-camping.info and all subdomains of camping.info.

Using Separate Certificates

The first solution to the problem would be using one SSL certificate for images-camping.info and one (wildcard) certificate for *.camping.info that would secure all subdomains of camping.info. If we wanted to secure camping.info itself as well, we would need a third certificate just for that one domain, because wildcard certificates do not cover the parent domain without a subdomain.

Using a SAN (or UCC) Certificate

Subject Alternative Names (SAN) can help in this situation. The subjectAltName field of an SSL certificate can contain many domain names that will be secured by that certificate. In our scenario, we could have put images-camping.info, camping.info, www.camping.info, en.camping.info, and the other over 25 subdomains in there. Unfortunately, that would complicate things for the use of new subdomains in the future, which would be missing from the list. A wildcard certificate really seems like the natural choice when you have more than 5 subdomains to secure or are expecting to have more of them in the near future.

Using a Wildcard Certificate with SANs

It turns out that wildcard certificates can well be combined with the usage of the subjectAltName field. Most CAs make you pay quite a lot for this combination, but we've also found a quite affordable offer on www.startssl.com.

Multiple SSL Certificates on a Single NGINX Instance – Beware

Choosing the first certificate option, i.e. using at least two certificates, we would need to install both of them on our NGINX reverse proxy server. This blog post on how to install multiple SSL certificates on NGINX is a very good read – but be sure to read the comments as well. It turns out that the Server Name Indication (SNI) extension to the TLS protocol that allows you to do so will lock out clients that don't support SNI. The most prominent example of such a client is any version of Internet Explorer running on Windows XP, and even though Microsoft ended support of XP almost half a year ago, we're still seeing 11% of our Windows users running XP, accounting for 6% of our total traffic – a number we cannot ignore. To use separate SSL certificates on one NGINX instance without SNI, we would need two different IP addresses pointing to that same server so that each certificate could respond to requests on one of those addresses. This would both complicate our setup and incur higher monthly infrastructure costs that we'd gladly avoid.

Installing a Single SSL Certificate on NGINX

The option we finally chose is to use a wildcard SAN certificate with images-camping.info, camping.info, and *.camping.info as the different subject alternative names. Installing that into NGINX is straight-forward if you know how.
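For reference, the relevant NGINX directives boil down to something like this – a sketch only, with the certificate paths as placeholders:

    server {
        listen              443 ssl;
        server_name         camping.info *.camping.info images-camping.info;

        # the wildcard SAN certificate covers all of the names above
        ssl_certificate     /etc/nginx/ssl/camping-info-san.crt;
        ssl_certificate_key /etc/nginx/ssl/camping-info-san.key;

        # ... proxy_pass to the application servers as usual ...
    }

Happy SSL'ing!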

Remove The Padding From The Google Map API v3 fitBounds() Method

by Oliver 4. July 2014 21:56

In our customizable web portal platform discoverize, we offer searching for results using a Google Map. In a recent iteration, we were trying to improve the overall map usage experience, and one thing we wanted to do was to zoom into the map as far as possible while all results still appear on the map.

map.fitBounds(map.getBounds()); – does not do what you would like it to

The natural choice to achieve that would be to use the fitBounds method on the Map object with the bounding rectangle for the coordinates of all results. Unfortunately, though, Google chose to add a non-configurable 45px margin around that bounding box, so that in a lot of cases the map appears to be zoomed out too far for what would be possible. That's why map.fitBounds(map.getBounds()); will zoom the map out!

Zoom in if you can

After a bit of searching, I found a workaround on this Google Groups thread: map.fitBounds(map.getBounds()) zooms out map. The idea behind the solution provided there is to check whether the bounds to fit on the map would still fit using a higher zoom level, and if yes, apply that zoom level. Since I had some problems with the code from the thread, I reworked it slightly and now have this:

    function myFitBounds(myMap, bounds) {
        myMap.fitBounds(bounds); // calling fitBounds() here to center the map for the bounds

        var overlayHelper = new google.maps.OverlayView();
        overlayHelper.draw = function () {
            if (!this.ready) {
                var extraZoom = getExtraZoom(this.getProjection(), bounds, myMap.getBounds());
                if (extraZoom > 0) {
                    myMap.setZoom(myMap.getZoom() + extraZoom);
                }
                this.ready = true;
                google.maps.event.trigger(this, 'ready');
            }
        };
        overlayHelper.setMap(myMap);
    }

    function getExtraZoom(projection, expectedBounds, actualBounds) {
        // in: LatLngBounds bounds -> out: height and width as a Point
        function getSizeInPixels(bounds) {
            var sw = projection.fromLatLngToContainerPixel(bounds.getSouthWest());
            var ne = projection.fromLatLngToContainerPixel(bounds.getNorthEast());
            return new google.maps.Point(Math.abs(sw.y - ne.y), Math.abs(sw.x - ne.x));
        }

        var expectedSize = getSizeInPixels(expectedBounds),
            actualSize = getSizeInPixels(actualBounds);

        if (Math.floor(expectedSize.x) == 0 || Math.floor(expectedSize.y) == 0) {
            return 0;
        }

        var qx = actualSize.x / expectedSize.x;
        var qy = actualSize.y / expectedSize.y;
        var min = Math.min(qx, qy);

        if (min < 1) {
            return 0;
        }

        return Math.floor(Math.log(min) / Math.LN2 /* = log2(min) */);
    }

Replace map.fitBounds(bounds) with myFitBounds(map, bounds)

That's all you have to do to zoom in as far as possible while keeping your bounds on the map.
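To build the bounds from your results in the first place, something like this works – the results array is a hypothetical stand-in for your own data:

    // collect all result coordinates into one bounding rectangle...
    var bounds = new google.maps.LatLngBounds();
    results.forEach(function (r) {
        bounds.extend(new google.maps.LatLng(r.lat, r.lng));
    });

    // ...and fit the map to it without the superfluous margin:
    myFitBounds(map, bounds);

Happy coding!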

About Oliver

I build web applications using ASP.NET and have a passion for jQuery. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code. Love to spend time with my wife and our daughter. My profile on Stack Exchange, a network of free, community-driven Q&A sites.

About Anton

I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.