Getting Started With Meteor On Windows – Or Not

by Oliver 2. October 2014 18:30

Recently, we at teamaton decided to take a break from Camping.Info and discoverize to dive into something new. We wanted to breathe some fresh air and open our eyes and minds to a world outside of our day-to-day development stack of ASP.NET. In the process we also wanted to check out some tools, frameworks and environments that we'd heard of or read about but hadn't found the time to really take a closer look at. One of these was Meteor.

What's Meteor about

The elevator pitch on meteor.com reads like this:

Meteor is an open-source platform for building top-quality web apps in a fraction of the time, whether you're an expert developer or just getting started.

I really encourage you, dear reader, to check out the Meteor site and take a look at what's waiting for you. I really liked what I found there: pure JavaScript, live page updates, hot code pushes, fully self-contained application bundles, and more. That's stuff .NET developers aren't used to.

Setting up Meteor on Windows

[Note: Before proceeding, please have a look at Meteor's Supported Platforms page just to make sure you're not missing out on anything new.]

This part took me about two hours to get right, so I took notes to save my colleagues – or anyone else interested – some precious time. I basically followed this comprehensive guide (and I encourage you to do the same) but made a few adjustments. [A word of warning: running my first out-of-the-box sample failed, and that's where I stopped.]

1. Install Git and add C:\Program Files (x86)\Git\bin to your PATH variable.
2. Download and install VirtualBox > 4.3.12 from here: https://www.virtualbox.org/wiki/Downloads
3. Download and install Vagrant >= 1.6.3 from here: http://www.vagrantup.com/downloads.html
4. Install the ChefDK from here: http://downloads.getchef.com/chef-dk/windows/
5. Now start a command line and run:
   vagrant plugin install vagrant-berkshelf --plugin-version ">= 2.0.1"
   vagrant plugin install vagrant-vbguest
   vagrant plugin install vagrant-omnibus
6. Create C:\vagrant and cd into it, then call:
   git clone git://github.com/shoebappa/vagrant-meteor-windows.git meteor
7. Edit C:\vagrant\meteor\Vagrantfile to use a new virtual machine image (the old one wouldn't work with Meteor's current version):
   In line 8, set: config.vm.box = "trusty64"
   In line 10, set: config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
   In line 36, use a different name for your first app.
8. Go to C:\vagrant\meteor and initiate Vagrant (this will take a while):
   vagrant up
9. Now start your app by running:
   cd C:\vagrant\meteor
   vagrant ssh
   cd /vagrant/mymeteorapp
   mrt run

Running the app threw a NullReferenceException (or something similar) at me that was triggered from somewhere inside a JavaScript file – I don't remember exactly where. I tried fiddling around with it for another hour or so, especially when I read that Meteorite (mrt) is no longer needed with Meteor versions >= 0.9.0 – but without success.

What to do now?

I shied away from it, but you might just feel like setting up a dev environment on a Linux OS and simply running the few lines from the Meteor home page:

$ curl https://install.meteor.com/ | sh

It seems too simple to be true. But wait! Official Windows support is coming to Meteor – sometime in the near or farther future. So please check whether the future isn't already here!

What we did

We went to grab Angular.js and haven't looked back.

Happy coding!

Sending DateTime Instances From a .NET Server to a JavaScript Client and vice-versa

by Oliver 20. September 2014 21:47

At teamaton we're currently developing our own time tracking tool that we'll be using instead of KeepTempo as soon as it's good enough. We even plan on making it accessible to the public later, but that's a different story. We chose Angular.js to develop the frontend and now want to synchronize our time tracking records with our ASP.NET WebAPI backend. One property of such a record is the date and time it was created at, createdUtc. Let's look at how we can send those date-time values to the server and back again.

Some Background on Date and DateTime

JavaScript Date instances are seeded at 01/01/1970 00:00:00 and can be instantiated by passing to the Date() constructor the number of milliseconds that have passed since that moment in time. Date.now() will directly output this number; at the moment of writing these words it returned the value 1 411 240 284 042. This value is what all other representations of a given Date instance are based on.

In .NET we have the DateTime type, and its seed value is 01/01/0001 00:00:00. Internally, DateTime instances count time in ticks, where one tick equals 100 nanoseconds, i.e. 0.000 000 1 seconds, thus delivering a theoretical precision of 10,000 times that of the JavaScript equivalent. When writing these words, the current number of ticks of DateTime.Now was 635 468 449 058 588 764 (I use LINQPad to execute C# code snippets).

How to Convert JavaScript Date to .NET DateTime

So how do we convert those values into each other? If we decide to send the number of milliseconds in a given JavaScript Date instance to our .NET server, we can reconstruct an equivalent DateTime instance by seeding it with 01/01/1970 and just adding the milliseconds we got from the client:

Record.CreatedUtc = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddMilliseconds(milliseconds);

Going the other way, we actually need to get hold of a TimeSpan instance, which has the property TotalMilliseconds that gives us what we want. We'll get one by subtracting a DateTime instance representing 01/01/1970 from the DateTime instance we want to send to the client:

(Record.CreatedUtc - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalMilliseconds

The code looks a bit ugly because we take care to work with UTC dates so that our code will run the same for clients and servers around the world. It's a very good idea to work with UTC dates internally and only convert them to local dates when you want to display them somewhere.

… or Just Use Date Strings

There's another way to transfer date objects between JavaScript and your server: using a standardized string representation that both sides will be able to generate and parse. In JavaScript you would use Date.toISOString() and in .NET DateTime.ToString("O") (see the MSDN for the O format string).

Happy coding!
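PS: For completeness, here is a minimal sketch of what the JavaScript side of the millisecond round trip might look like. The createdUtc property is the one from the record described above; the variable names and example values are only illustrative:

// outgoing: capture the creation moment as milliseconds since 01/01/1970 UTC
var record = { createdUtc: Date.now() };    // e.g. 1411240284042

// incoming: rebuild a Date instance from the milliseconds sent by the server
var createdDate = new Date(record.createdUtc);

// alternative: exchange ISO 8601 strings instead of numbers
var isoString = createdDate.toISOString();  // "2014-09-20T19:11:24.042Z" for the value above
var roundTripped = new Date(isoString);     // modern browsers parse ISO 8601 out of the box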

SSL Certificate Jungle – Multiple Domains, Wildcards, and Both

by Oliver 10. September 2014 22:05

Recently, we decided to add https:// support to Camping.Info. Since we've been running our application servers behind an NGINX reverse proxy for a while now, the natural choice in our setup was to terminate the secured connections at the NGINX server, which has CPU usage values somewhere between 1% and 5%. This is also called SSL offloading and will allow us to keep all the SSL setup and potential runtime overhead off of our application servers.

Certificate Options

On Camping.Info, we serve almost all static content from the domain images-camping.info. Since we want to secure the whole site, we need valid SSL certificates for images-camping.info and all subdomains of camping.info.

Using Separate Certificates

The first solution to the problem would be using one SSL certificate for images-camping.info and one (wildcard) certificate for *.camping.info that would secure all subdomains of camping.info. If we wanted to secure camping.info itself as well, we would need a third certificate just for that one domain, because wildcard certificates do not cover the parent domain without a subdomain.

Using a SAN (or UCC) Certificate

Subject Alternative Names (SAN) can help in this situation. The subjectAltName field of an SSL certificate can contain many domain names that will be secured by that certificate. In our scenario we could have put images-camping.info, camping.info, www.camping.info, en.camping.info, and the other over 25 subdomains in there. Unfortunately, that would complicate things for the use of new subdomains in the future, which would be missing from the list. A wildcard certificate really seems like the natural choice when you have more than 5 subdomains to secure or are expecting to have more of them in the near future.

Using a Wildcard Certificate with SANs

It turns out that wildcard certificates can very well be combined with the use of the subjectAltName field. Most CAs make you pay quite a lot for this combination, but we've also found a quite affordable offer on www.startssl.com.

Multiple SSL Certificates on a Single NGINX Instance – Beware

Choosing the first certificate option, i.e. using at least two certificates, we would now need to install both of them on our NGINX reverse proxy server. This blog post on how to install multiple SSL certificates on NGINX is a very good read – but be sure to read the comments as well. It turns out that the Server Name Indication (SNI) extension to the TLS protocol that allows you to do so will lock out clients that don't support SNI. The most prominent example of such a client is any version of Internet Explorer running on Windows XP, and even though Microsoft ended support for XP almost half a year ago, we're still seeing 11% of our Windows users running XP, accounting for 6% of our total traffic – a number we cannot ignore.

If we wanted to use separate SSL certificates on one NGINX instance, we would instead need two different IP addresses pointing to that same server so that each certificate could respond to requests on one of those addresses. This would both complicate our setup and incur higher monthly infrastructure costs that we'd gladly avoid.

Installing a Single SSL Certificate on NGINX

The option we finally chose is to use a wildcard SAN certificate where we'd enter images-camping.info, camping.info and *.camping.info as the different subject alternative names. Installing that into NGINX is straight-forward if you know how.

Happy SSL'ing!

Remove The Padding From The Google Map API v3 fitBounds() Method

by Oliver 4. July 2014 21:56

In our customizable web portal platform discoverize we offer searching for results using a Google Map. In a recent iteration, we were trying to improve the overall map usage experience, and one thing we wanted to do was to zoom into the map as far as possible while all results still appear on the map.

map.fitBounds(map.getBounds()); – does not do what you would like it to

The natural choice to achieve that would be to use the fitBounds method on the Map object with the bounding rectangle for the coordinates of all results. Unfortunately, though, Google chose to add a non-configurable 45px margin around that bounding box, so that in a lot of cases the map appears to be zoomed out too far for what would be possible. That's why map.fitBounds(map.getBounds()); will zoom the map out!

Zoom in if you can

After a bit of searching I found a workaround in this Google Groups thread: map.fitBounds(map.getBounds()) zooms out map. The idea behind the solution provided there is to check whether the bounds to fit on the map would still fit using a higher zoom level and, if yes, apply that zoom level. Since I had some problems with the code from the thread, I reworked it slightly and now have this:

function myFitBounds(myMap, bounds) {
    myMap.fitBounds(bounds); // calling fitBounds() here to center the map for the bounds

    var overlayHelper = new google.maps.OverlayView();
    overlayHelper.draw = function () {
        if (!this.ready) {
            var extraZoom = getExtraZoom(this.getProjection(), bounds, myMap.getBounds());
            if (extraZoom > 0) {
                myMap.setZoom(myMap.getZoom() + extraZoom);
            }
            this.ready = true;
            google.maps.event.trigger(this, 'ready');
        }
    };
    overlayHelper.setMap(myMap);
}

function getExtraZoom(projection, expectedBounds, actualBounds) {
    // in: LatLngBounds bounds -> out: height and width as a Point
    function getSizeInPixels(bounds) {
        var sw = projection.fromLatLngToContainerPixel(bounds.getSouthWest());
        var ne = projection.fromLatLngToContainerPixel(bounds.getNorthEast());
        return new google.maps.Point(Math.abs(sw.y - ne.y), Math.abs(sw.x - ne.x));
    }

    var expectedSize = getSizeInPixels(expectedBounds),
        actualSize = getSizeInPixels(actualBounds);

    if (Math.floor(expectedSize.x) == 0 || Math.floor(expectedSize.y) == 0) {
        return 0;
    }

    var qx = actualSize.x / expectedSize.x;
    var qy = actualSize.y / expectedSize.y;
    var min = Math.min(qx, qy);

    if (min < 1) {
        return 0;
    }

    return Math.floor(Math.log(min) / Math.LN2 /* = log2(min) */);
}

Replace map.fitBounds(bounds) with myFitBounds(map, bounds)

That's all you have to do to zoom in as far as possible while keeping your bounds on the map.

Happy coding!
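PS: In case it helps, here's roughly how the call site might look when you build the bounds from your search results first. The map variable and the results array with lat/lng properties are assumptions for the sake of the example:

// collect the coordinates of all results into a LatLngBounds instance
var bounds = new google.maps.LatLngBounds();
results.forEach(function (result) {
    bounds.extend(new google.maps.LatLng(result.lat, result.lng));
});

// use the workaround from above instead of map.fitBounds(bounds)
myFitBounds(map, bounds);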

Delete a Large Number of Rows from a Table in SQL Server

by Oliver 28. May 2014 12:09

Recently, we had to make some space available in one of our SQL Express instances that was getting close to its 10 GB limit of stored data, so I set out to delete some old data from two of our largest tables. One contained about half a million rows, the other a bit over 21 million.

Simple Deletion Would Take… Forever

The simplest SQL statement to delete all rows that were created before 2012 would be the following:

DELETE FROM [dbo].[Message] WHERE DateCreated < '20120101'

I can't even tell you how long this took because at 14 minutes I just cancelled the query execution (which took another 7 minutes to finish). This was the table with less than 500,000 rows, where we wanted to delete a bit more than 200,000 rows.

Break Delete Operation Into Chunks

Searching for a solution to the problem, I came across this blog post on breaking large delete operations into chunks. It shows in good detail how the simple version above behaves compared to running a loop of a few tens of thousands of deletes per iteration. An interesting aspect I hadn't thought of at that point was the transaction log growth that can become a problem with large delete operations. Running a loop allows you to do a log backup (in full recovery mode) or a checkpoint (in simple mode) at the end of each iteration so that the log will grow much more slowly. Unfortunately, though, this didn't help with the execution time of the delete itself, as you can also see from the graphs presented in the above post.

Disable Those Indexes!

It turns out our [Message] table had six non-clustered indexes on it, which all had to be written to for every row that was deleted. Even if those operations are fast, their processing time adds up over a few hundred thousand iterations. So let's turn them off! In fact, let's turn off only those that won't be used during our delete query. [We have one index on the DateCreated column which will be helpful during execution.] This stackoverflow answer shows how to create some dynamic SQL to disable all non-clustered indexes in a database. I've modified it slightly to disable only the indexes of a given table:

Disable/Enable Table Indexes

DECLARE @table AS VARCHAR(MAX) = 'Message';
DECLARE @sqlDisable AS VARCHAR(MAX) = '';
DECLARE @sqlEnable AS VARCHAR(MAX) = '';

SELECT
    @sqlDisable = @sqlDisable + 'ALTER INDEX ' + idx.name + ' ON '
                    + obj.name + ' DISABLE;' + CHAR(13) + CHAR(10),
    @sqlEnable = @sqlEnable + 'ALTER INDEX ' + idx.name + ' ON '
                    + obj.name + ' REBUILD;' + CHAR(13) + CHAR(10)
FROM sys.indexes idx
JOIN sys.objects obj
    ON idx.object_id = obj.object_id
WHERE idx.type_desc = 'NONCLUSTERED'
    AND obj.type_desc = 'USER_TABLE'
    AND obj.name = @table;

RAISERROR(@sqlDisable, 0, 1) WITH NOWAIT;
RAISERROR(@sqlEnable, 0, 1) WITH NOWAIT;
--EXEC(@sqlDisable);
--EXEC(@sqlEnable);

Now, with those indexes disabled, the simple delete operation took a lot less than a minute! Even in the case of our 21 million rows table, deleting 7 million rows took only 1:02 on my machine. Of course, after deleting the unwanted rows, you need to re-enable the indexes, which took another minute, but all in all I'm happy with the result.

Copy Data to New Table and Drop Old Table

One other way of deleting rows that I've used in combination with changing the table schema at the same time is the following:

- use a temporary table into which you copy all the rows you want to keep (the schema of which I modified to meet our new needs)
- drop the original table
- rename the temporary table to the original table's name
- recreate all indexes you had defined before

This is basically what SSMS generates for you when you change the schema of a table, except for the indexes – you have to recreate them yourself. As you can imagine, this approach becomes faster and creates a smaller transaction log footprint with a growing amount of data to delete. It won't have any benefit if you delete less than half of the table's rows.

Choose the right tool for the job

There are quite a few other approaches and tips out there on how to speed up your deletion process. It depends a lot on your concrete situation which of those will actually help you get your deletion job done faster. I had to experiment quite a bit to find the sweet spot, but now that I've seen a few approaches I'm able to make a better decision in the future.

Retrieving random content items (rows) from a SQL database in Orchard with HQL queries

by Oliver 22. February 2014 12:37

We're adding some Premium functionality to discoverize right now, and part of that is the so-called Premium block, which is a showcase of six Premium entries. Now, choosing the right entries for that block is the interesting part: as long as we don't have six Premium entries to show, we want to fill up the leftover space with some random entries that haven't booked our Premium feature yet.

Get random rows from SQL database

There are plenty of articles and stackoverflow discussions on the topic of how to (quickly) retrieve some random rows from a SQL database. I wanted to get something to work simply and quickly, not necessarily high performance. Incorporating any kind of hand-crafted SQL query was really the last option, since it would mean getting hold of an ISessionLocator instance to get at the underlying NHibernate ISession, to then create a custom SQL query and execute it. Not my favorite path, really. Luckily, the IContentManager interface contains the method HqlQuery which returns an IHqlQuery containing these interesting details:

/// <summary>
/// Adds a join to a specific relationship.
/// </summary>
/// <param name="alias">An expression pointing to the joined relationship.</param>
/// <param name="order">An order expression.</param>
IHqlQuery OrderBy(Action<IAliasFactory> alias, Action<IHqlSortFactory> order);

…and IHqlSortFactory contains a Random() method. This finally got me going!

HQL queries in Orchard

HQL queries are a great feature in (N)Hibernate that allow you to write almost-SQL queries against your domain models. I won't go into further detail here, but be sure to digest that! Orchard's IContentManager interface contains the method HqlQuery() to generate a new HQL query. Unfortunately, there's almost no usage of this feature throughout the whole Orchard solution. So let me document here how I used HqlQuery to retrieve some random entries from our DB:

// retrieve <count> items of type "Entry" sorted randomly
return contentManager.HqlQuery()
    .ForType("Entry")
    .OrderBy(alias => alias.ContentItem(), sort => sort.Random())
    .Slice(0, count)
    .Select(item => item.Id);

And one more:

// retrieve <count> older items filtered by some restrictions, sorted randomly
return contentManager.HqlQuery()
    .ForPart<PremiumPart>()
    .Where(alias => alias.ContentPartRecord<PremiumPartRecord>(),
           expr => expr.Eq("Active", true))
    .Where(alias => alias.ContentPartRecord<PremiumPartRecord>(),
           expr => expr.Lt("BookingDateTime", recentDateTime))
    .OrderBy(alias => alias.ContentItem(), sort => sort.Random())
    .Slice(0, count)
    .Select(item => item.Id);

Even with the source code at hand – thanks to Orchard's MIT license – the implementation of this API in the over 600 lines long DefaultHqlQuery is not always straightforward to put into practice. Most of all I was missing a unit test suite that would show off some of the core features of this API, and I'm honestly scratching my head over how someone could build such an API without unit tests!

Random() uses newid() – monitor the query performance

The above solution was easy enough to implement once I got my head around Orchard's HQL query API. But be aware that this method uses the newid() approach (more here) and thus needs to a) generate a new id for each row in the given table and b) sort all of those ids to then retrieve the top N rows. Orchard has this detail neatly abstracted away in the ISqlStatementProvider implementation classes. Here's the relevant code from SqlServerStatementProvider (identical code is used for SqlCe):

public string GetStatement(string command) {
    switch (command) {
        case "random":
            return "newid()";
    }
    return null;
}

For completeness, here's the generated SQL from the first query above (with variable names shortened for better readability):

select content.Id as col_0_0_
from Test_ContentItemVersionRecord content
    inner join Test_ContentItemRecord itemRec
        on content.ContentItemRecord_id = itemRec.Id
    inner join Test_ContentTypeRecord typeRec
        on itemRec.ContentType_id = typeRec.Id
where ( typeRec.Name in ('Entry') )
    and content.Published = 1
order by newid()
OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY

This approach works well enough on small data sets but may become a problem as your data grows. So please keep a constant eye on all your random queries' performance.

Happy HQL-ing!

IRIs and URIs; or: Internet Explorer does not decode encoded non-ASCII characters in its address bar

by Oliver 24. October 2013 23:03

Some facts about IE and its address bar

IE can display non-ASCII characters in the address bar if you put them there by hand or click a link that contains them in unencoded form, e.g. http://marinas.info/marina/fürther-wassersportclub. IE sends a request for the correctly encoded URL, which is http://marinas.info/marina/marina/f%C3%BCrther-wassersportclub. Now, if you're in IE and click on the second link above, IE will not decode the URL back to the unencoded version – it will just keep the encoded URL in the address bar. If, instead, you're reading this page in FF or Chrome, the encoded URL above will be gracefully decoded into its unencoded counterpart.

URIs and IRIs

Disclaimer

First off, let me tell you that I'm by no means an expert in this field. I'm trying to get my head around URIs, IRIs, encodings and beautiful web sites and URLs just like probably half of the web developer world out there. So please verify what you read here and correct me where I am mistaken.

What the RFCs have to say

By today, more than a handful of RFC documents have been published concerning URIs:

- RFC 3986 – Uniform Resource Identifier (URI): Generic Syntax, which is the current Internet Standard (using IETF vocabulary) on URIs; it updates RFC 1738, and it obsoletes RFCs 2732, 2396, and 1808
- RFC 3987 – Internationalized Resource Identifiers (IRIs), which adds Unicode support for resource identifiers

RFC 3986 states the following about a URI: "A URI is an identifier consisting of a sequence of characters matching the syntax rule named <URI> in Section 3." See the examples section, or refer to Appendix A for the ABNF for URIs.

RFC 3987 states the following about an IRI: "An IRI is a sequence of characters from the Universal Character Set (Unicode/ISO 10646)."

In short, IRIs may contain Unicode characters while URIs must not. Moreover, every URI is a valid IRI and every IRI can be encoded into a valid URI. Let's see an example again:

IRI: http://marinas.info/marina/marina/fürther-wassersportclub
URI: http://marinas.info/marina/marina/f%C3%BCrther-wassersportclub

A great read on IRIs and their relationship to URIs can be found here by the W3C.

Support for IRIs

IRIs are not supported in HTTP as per RFC 2616. This implies that before requesting a resource identified by an IRI over HTTP, it must be encoded as a URI first. This is what all mainstream browsers seem to do correctly – when you click on http://marinas.info/marina/marina/fürther-wassersportclub and inspect the request sent from your browser, you will see that it actually requests http://marinas.info/marina/marina/f%C3%BCrther-wassersportclub.

HTML5 supports IRIs as URLs: http://www.w3.org/html/wg/drafts/html/CR/infrastructure.html#urls.

Use IRIs today

It looks like you can safely use IRIs in your HTML pages today already. And doing so will actually persuade IE into displaying the correct non-ASCII characters. So why don't we?
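By the way, if you ever need to produce the encoded URI form of an IRI yourself – say, in client-side JavaScript before issuing a request – encodeURI() and decodeURI() do roughly what the browsers do here. A small sketch (not from the original discussion above, just an illustration):

// encodeURI() percent-encodes the non-ASCII characters of an IRI
var iri = "http://marinas.info/marina/fürther-wassersportclub";
var uri = encodeURI(iri);   // "http://marinas.info/marina/f%C3%BCrther-wassersportclub"

// decodeURI() turns the encoded URI back into the readable IRI, e.g. for display
var back = decodeURI(uri);  // "http://marinas.info/marina/fürther-wassersportclub"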

Setting up NGINX as Reverse Proxy for Camping.Info

by Oliver 24. June 2013 23:08

worker_processes 1; # http://nginx.org/en/docs/ngx_core_module.html#worker_processes

Thanks to http://www.theunixtips.com/how-to-find-number-of-cpus-on-unix-system for helping me find out how many processor cores our VPS is using.

Fixing the painful "24: Too many open files" error in NGINX

When this error appears in your /var/etc/nginx/error.log file, you really want to do something about it. It basically tells you that some of your users are not being served content but instead receive HTTP 500 errors generated by NGINX. Not a good thing!

To investigate the open file descriptors on your Linux box (we're running NGINX on Ubuntu 12.04), I followed the advice from this post: Linux: Find Out How Many File Descriptors Are Being Used. Then, to fix the low limits I found – 1024 and 4096 for the soft and hard limits, respectively – the post Set ulimit parameters on ubuntu provided a good walkthrough for changing those limits persistently, i.e. even after a reboot. But I somehow had the feeling that in the case of NGINX there had to be a simpler solution.

Turns out, there is. Let me introduce you to: worker_rlimit_nofile

This thread in the NGINX forums contained the hint I needed: Re: Handling nginx's too many open files even I have the correct ulimit. I had actually posted to that thread before and received the helpful answer over a month ago, but I somehow hadn't got around to implementing it. Until today.

So here's what you need to do: set worker_rlimit_nofile (in the main context) to something significant (I used 65536 = 2^16) instead of the default 1024 as the soft and 4096 as the hard limit. Also set worker_connections to the same number to allow for a lot of parallel connections, both from clients and to upstream servers. Your nginx.conf file should now contain these lines:

user www-data;
worker_processes 1;
worker_rlimit_nofile 65536;
pid /var/run/nginx.pid;

events {
    worker_connections 65536; ## default: 1024
    # multi_accept on;
}

Reload NGINX after configuration change

In a shell, tell NGINX to reload the configuration. You don't need to restart the whole process to get a configuration change live. Just run:

/usr/sbin/nginx -s reload

or

service nginx reload

Now enjoy thousands and thousands of parallel connections to your proxy server :-)

Optimize Images for Your Website

by Oliver 17. June 2013 13:59

This is just a short post to draw your attention to a sweet tool I've just discovered: PNGGauntlet. It runs on Windows using the .NET 4 framework and is as easy to use as you could possibly wish. Also: it's completely free to use.

Convert Your Existing PNGs

For starters, we'll just convert some existing PNGs – can't really do any harm with that. In the Open File dialog, there's an option to filter for only .png files, and you can choose many of them at once. If you provide an output directory, the optimized files will be written to that destination. But the tool also has the option to overwrite the original files, which is awesome if you use some kind of source control (and thus have a backup) and just want to get the job done.

During my first run, using the 8 processing threads my CPU has to offer, I got savings from 3% to 27%. PNGGauntlet also tells me that in total I saved 4.52 KB. If those were all the images on your web site, that would be a not-so-bad improvement, especially when you get it by investing about 2 minutes of your time and no extra expenses!

Real Savings

Running PNGGauntlet on the sprites that we use for Camping.Info, we were really surprised: out of 172 KB it saved us over 31%, a whole 54 KB! Now that's an improvement that will already be noticeable on a slightly slower connection. We'll definitely check the rest of our images for more savings.

Convert Other Image Formats

You can also choose to convert your images to the PNG format if you're in the mood. I tried converting all GIFs in the Orchard CMS Admin theme to PNGs and went from a total of 24 KB for 20 files to less than 17 KB, with no loss of quality – an over 30% saving! Just beware that you'll need to change the file references in your project to pick up the new PNGs.

Roundup

Easy, fast and cheap (as in free) image optimization doesn't have to be magic anymore – today anyone can do it. Check out PNGGauntlet to try for yourself. There's really no excuse not to!

Orchard Harvest: Responsive Design

by Oliver 13. June 2013 15:30

These are just quick notes from one of the sessions at the Orchard Harvest conference currently taking place in Amsterdam.

Fluid Grids
- bootstrap-responsive
- SimpleGrid
- 1140Grid
- FooTable: hides columns, shows content in expandable elements

Fluid Fonts
- FitText (fittextjs.com): makes text fit into header elements
- SlabText: adjusts text sizes to make the page look beautiful

Toggle navigation
- using Twitter Bootstrap: put .nav-collapse and .collapse on the nav element

Frameworks
- Foundation (uses SASS)
- INK
- Pure by Yahoo: new, small – 5 KB

About Oliver

I build web applications using ASP.NET and have a passion for JavaScript. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code. I love to spend time with my wife and our children. My profile on Stack Exchange, a network of free, community-driven Q&A sites.

About Anton

I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.