by Oliver
27. June 2012 14:40
This is only the beginning…

Batch
String replacement: http://ss64.com/nt/syntax-replace.html

Date formatting
T-SQL, using CONVERT or CAST: http://msdn.microsoft.com/en-us/library/ms187928.aspx
.NET: Custom Date and Time Format Strings, Standard Date and Time Format Strings

Windows
Open recent window from taskbar when clicking on icon instead of showing list of open windows: http://www.howtogeek.com/howto/16334/make-the-taskbar-buttons-switch-to-the-last-active-window-in-windows-7/

Logging
Filters for log4net: http://www.claassen.net/geek/blog/2009/06/log4net-filtering-by-logger.html
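By the way, a quick illustration of the difference between the two kinds of .NET format strings (a minimal sketch of my own, not taken from the links above):

var now = new DateTime(2012, 6, 27, 14, 40, 0);
// custom format string: an explicit pattern, same layout in every culture
Console.WriteLine(now.ToString("yyyy-MM-dd HH:mm"));  // 2012-06-27 14:40
// standard format string: a single letter, pattern depends on the current culture
Console.WriteLine(now.ToString("d"));                 // e.g. 6/27/2012 for en-US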
by Oliver
13. March 2012 01:37
Finally, with an update we rolled out last week, (almost) all tooltips on Camping.Info look and behave similarly, differing mostly in positioning and size, but not in the general look and feel. We chose the jQuery Tools Tooltip as the base for our own solution and it got us pretty far, but there were some pitfalls and scenarios that we needed to handle ourselves. This post is about the limitations we experienced and how we dealt with them.

The original

As you can read in the jQuery Tools Tooltip documentation, the tooltip plugin is highly configurable. It can take different elements as the tooltip for any given trigger:

- the value of the title attribute of the trigger element
- the html element immediately following the trigger
- the html element immediately following the first parent of the trigger
- an arbitrary html element on the page

You can also position the tooltip pretty much wherever you want relative to the trigger.

Our adaptations

Another way to choose the tooltip

We found one more useful way to define the tooltip for a trigger element: if the trigger is e.g. a table cell in an html table and you don't want to specify a static tooltip for some or all table cells but a different one for each cell, or at least for a number of cells, it makes sense to define the tooltip element inside the trigger (the table cell). Since this effect was not achievable by extending the jQuery Tools Tooltip plugin, we started changing their source.

Breaking the tooltip out of some parent container

We also faced some problems properly showing tooltips that needed to "break out" of some bounding box inside which they were defined (i.e. their html markup). This problem occurred e.g. inside elements with style position: relative, of which we have a few on Camping.Info. Our first attempt was to clone the tooltip element and show the clone instead of the original. This worked in almost all cases – until we tried to attach some more behavior to elements inside the tooltip. The behavior, e.g. some click event handler that we expected to fire when clicking on some element inside the tooltip, wouldn't execute, since we were working with the clone! So we decided to simply move the tooltip up in the DOM tree for the time it is being shown, more precisely to just beneath the form tag that we have on all our pages. We create a placeholder at the place where we removed the tooltip so that we can reinsert it once it's being hidden. The code we added to the show() method performs the move, and its counterpart in hide() restores the tooltip to its original place. This works quite well everywhere, independent of the position of the tooltip in the DOM tree.

Global tooltip configuration

Using inline static configuration

One feature we quickly missed was some kind of static tooltip configuration that doesn't require calling $(...).tooltip({ settings: ... }) for every single tooltip we want to create or hook up, respectively. What we came up with is to use the HTML5 data attributes to define any specific configuration statically inside the trigger element's html markup. Thus, we need to call the tooltip initialization code only once for the whole page. We use specific prefixes with the data attributes to make them easier to understand, e.g. data-tt-position for an attribute that is used for tooltips and jq-tt-trigger for a class that is used by some jQuery code for tooltips. To process this kind of static configuration we need some custom code that will, at some point, call the original (well, modified by now) plugin code.
Unfortunately, the jQuery Tools Tooltip plugin was not designed to allow runtime configuration of the tooltip to show, but we found a way using the onBeforeShow and onHide event handlers. The basic idea is to change the global tooltip configuration during the first method so that the tooltip we are about to show will be configured correctly, and to reset the global configuration once the tooltip has been hidden again. To achieve this, we iterate over all configuration properties that the jQuery Tools Tooltip plugin supports and search for the respective data attributes on the currently processed trigger element. One example would be the position property: to replace the default value provided by the plugin, we look for an attribute called data-tt-position and use its value to temporarily overwrite the default value during the onBeforeShow event handler.

Using global profiles

Once we had the static configuration working and started to replace all of those clumsy and overly complicated AjaxControlToolkit HoverMenuExtenders, it quickly turned out that we were copy'n'pasting the same configuration in a thousand places. This was not only ugly and violated the DRY principle, it also led to some unnecessarily bloated html. As a solution to this maintenance nightmare we came up with profiles that comprise a set of configuration options which would otherwise be repeated over and over again. They lead to some really clean html markup: the only change from using the inline static configuration is to reference the profile – everything else stays the same!

Conclusion

The jQuery Tools Tooltip plugin is a nice, small, and highly configurable tool for easy tooltip creation and usage. In a larger web application there are a few shortcomings, which we've addressed here and provided working solutions for. We hope to release those changes soon as a project of their own on our GitHub account. Happy coding!
by Oliver
16. September 2011 20:06
Lately, I was having trouble debugging certain parts of my code in Visual Studio, and all I wanted to know was the value of some variable at some point in time. Well, I'd use some logging if I could just get at that value easily. But for some objects I don't really know what I'm looking for or where I should be looking for it. So just give me the values of all the members of that object, will ya? And could you recurse that? But no deeper than 3 levels, alright? Or let's say… 5?

public static string ToDebugString(this object obj, int maxdepth, int depth = 0)
{
    if (obj == null)
        return "null";

    if (obj is IConvertible)
        return obj.ToString();

    if (depth >= maxdepth)
        return "...";

    var sb = new StringBuilder();
    if (depth > 0)
        sb.AppendLine();

    foreach (var propertyInfo in obj.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
    {
        sb.Append(new string(' ', 2*depth)).Append(propertyInfo.Name).Append(": ");
        try
        {
            var value = propertyInfo.GetValue(obj, new object[0]);
            sb.AppendLine(ToDebugString(value, maxdepth, depth + 1));
        }
        catch (Exception ex)
        {
            sb.AppendLine(string.Format("[{0}]", ex.Message));
        }
    }

    // remove newline from end of string
    var newLine = Environment.NewLine;
    if (sb.Length >= newLine.Length)
        sb.Replace(newLine, "", sb.Length - newLine.Length, newLine.Length);

    return sb.ToString();
}
With this little helper I can now simply call anyobject.ToDebugString(4 /* maxdepth */) and I get a nicely formatted debug view of that object; e.g. Request.Url.ToDebugString(3) gives me:
AbsolutePath: /logg.aspx
AbsoluteUri: http://localhost:55235/logg.aspx
Authority: localhost:55235
Host: localhost
HostNameType: Dns
IsDefaultPort: False
IsFile: False
IsLoopback: True
IsUnc: False
LocalPath: /logg.aspx
PathAndQuery: /logg.aspx
Port: 55235
Query:
Fragment:
Scheme: http
OriginalString: http://localhost:55235/logg.aspx
DnsSafeHost: localhost
IsAbsoluteUri: True
Segments:
  Length: 2
  LongLength: 2
  Rank: 1
  SyncRoot:
    Length: 2
    LongLength: 2
    Rank: 1
    SyncRoot: ...
    IsReadOnly: False
    IsFixedSize: True
    IsSynchronized: False
  IsReadOnly: False
  IsFixedSize: True
  IsSynchronized: False
UserEscaped: False
UserInfo:
Nice!

Right now this method chokes on indexed properties, but once I need them I'll go and look for a way to include them. It also swallows any exceptions along the way, just to get the job done.
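If the indexed properties ever get in the way before that, a simple guard should do the trick – a sketch of the idea (just skipping them):

// inside the foreach loop, right after appending the property name:
if (propertyInfo.GetIndexParameters().Length > 0)
{
    sb.AppendLine("[indexed property skipped]");
    continue;
}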
Happy coding!
by Oliver
6. September 2011 22:18
Recently, we encountered a quite surprising behavior of MSBuild – the continuous integration build of our new collaborative Todo Management app (we hope to go into beta soon!) would produce a broken version, whereas the local build with VS2010 was all smooth and well. Our admin and tester has already posted about this problem over at his blog: MSBuild does not build like Visual Studio 2010. The exception message finally led me down the right path:

Server Error in '/' Application.
No constructors on type 'Teamaton.TodoCore.Repositories.TodoRepositoryJson' can be found with 'Public binding flags'.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: Autofac.Core.DependencyResolutionException: No constructors on type 'Teamaton.TodoCore.Repositories.TodoRepositoryJson' can be found with 'Public binding flags'.

The TodoRepositoryJson is a type we used at the very beginning of our development to quickly get started using a JSON document as data store. Later we switched to SQLite, so now we have another implementation: TodoRepositoryDb. Both implement the same interface ITodoRepository. Turns out, the Autofac type registration code was the culprit:

var builder = new ContainerBuilder();
builder.RegisterAssemblyTypes(typeof (Todo).Assembly)
       .Where(t => t.Name.Contains("Repository"))
       .AsImplementedInterfaces()
       .InstancePerLifetimeScope();
What worked with Visual Studio didn't work with MSBuild: obviously – well, it's obvious now – both ITodoRepository implementations were registered with Autofac, and while Autofac's assembly scanning delivered them in the order we assumed from the DLL built with VS – first TodoRepositoryJson, then TodoRepositoryDb, the latter thus overriding the first registration – MSBuild seems to build a DLL that returns them in the inverse order! Very strange.
Honestly, I'm not familiar with the anatomy of DLLs and was surprised by this result. But it's the only explanation I've found so far.
Well, the solution to the problem is, of course, to care more about what we register with Autofac and in which order.
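In our case that means registering the implementation we actually want to use explicitly instead of relying on scan order – a minimal sketch of the idea, using the type names from above:

var builder = new ContainerBuilder();
// register the SQLite-backed repository explicitly so that no assembly
// scan order can get in the way
builder.RegisterType<TodoRepositoryDb>()
       .As<ITodoRepository>()
       .InstancePerLifetimeScope();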
Happy coding,
Oliver
by Anton
22. July 2011 13:37
Bantam Is Quitting Services

We at teamaton were using bantam for all of our todos. At the beginning of this year bantam was bought by ConstantContact, and they announced that bantam would cease services as of July 1. Since we are developing our own todo management tool (see our blog), we decided to push the development and use it instead of bantam. Of course we wanted to take all of our todos with us. We used bantam's export feature, which gave us a JSON file with all our tasks (closed and open ones). So I took on the task of writing a JSON import feature for our tool.

Json.NET

After a bit of research, I found that the library Json.NET would suit our import needs perfectly. Applying the deserialization was pretty straightforward – the documentation helped a lot. Here is the code from the Import controller:

[HttpPost]
public ActionResult Import(HttpPostedFileBase file)
{
    var todos = new List<Todo>();
    if (file != null && file.ContentLength > 0)
    {
        var streamReader = new StreamReader(file.InputStream);
        string text = streamReader.ReadToEnd();
        streamReader.Close();
        var bantamTodos = JsonConvert.DeserializeObject<IList<BantamTodo>>(text) as List<BantamTodo>;
        todos = bantamTodos.Select(bantamTodo => bantamTodo.ConvertToTodo()).ToList();
        _todoRepository.SaveImport(todos);
    }
    return RedirectToAction("List");
}
It just opens the file, extracts the content as a string, deserializes the string into a list of bantam todos, and then converts these bantam todos into our "normal" todos.

Indirection Via the BantamTodo Class

As you can see, I did not convert the JSON directly into our Todo class. You can use attributes and a converter class to deserialize JSON into a class of your liking. There are two reasons why I did not choose to do so: I did not want to burden the Todo class with attributes and converters, and I thought it would be easier to introduce an intermediate class (BantamTodo), which acts as a container and converter.

To take a good look into the original JSON file, I used a nice tool: JSON Viewer.

With the information about the structure of the JSON file I started implementing test-first. Here is my test class, which tests the deserialization of the bantam todos and the conversion from the class BantamTodo to Todo:
[Test]
public void Should_Import_BantamToDo_FromJson()
{
    var jsonToDo = ArrangeTaskAsJson();

    var bantamToDo = JsonConvert.DeserializeObject<BantamTodo>(jsonToDo);

    bantamToDo.Categoy.Should().Be.EqualTo("Organisation");
    bantamToDo.Complete.Should().Be.EqualTo(true);
    bantamToDo.Created_At.Should().Be.EqualTo(new DateTime(2011, 6, 30, 0, 41, 57));
    bantamToDo.Due.Should().Be.EqualTo(new DateTime(2011, 7, 1));
    bantamToDo.Author.Name.Should().Be.EqualTo("Anton");
    bantamToDo.Assigned_To.Name.Should().Be.EqualTo("Oliver");
    bantamToDo.Related_To[0].Name.Should().Be.EqualTo("ToDo-Management Tool");
    bantamToDo.Name.Should().Be.EqualTo("Entwicklung nach Gebieten Personen zuordnen - Verantwortliche, Blogs, etc.");
    bantamToDo.Description.Should().Be.EqualTo("some good description");
    bantamToDo.Flagged.Should().Be.EqualTo(true);
}
[Test]
public void Should_Convert_BantamToDo_ToTodo()
{
    var jsonToDo = ArrangeTaskAsJson();
    var bantamToDo = JsonConvert.DeserializeObject<BantamTodo>(jsonToDo);

    var todo = bantamToDo.ConvertToTodo();

    todo.Status.Should().Be.EqualTo(bantamToDo.Complete ? Status.Closed : Status.Open);
    todo.Description.Should().Contain(bantamToDo.Name);
    todo.Description.Should().Contain(bantamToDo.Description);
    todo.Tags.Select(t => t.Name).Should().Contain(bantamToDo.Categoy);
    foreach (var bantamProject in bantamToDo.Related_To)
        todo.Tags.Select(t => t.Name).Should().Contain(bantamProject.Name);
    todo.DateCreated.Should().Be.EqualTo(bantamToDo.Created_At);
    todo.DateCompleted.Value.Date.Should().Be.EqualTo(bantamToDo.Due);
    todo.DateDue.Should().Be.EqualTo(bantamToDo.Due);
    todo.Creator.Name.Should().Be.EqualTo(bantamToDo.Author.Name);
    todo.Assignee.Name.Should().Be.EqualTo(bantamToDo.Assigned_To.Name);
    todo.Priority.Value.Should().Be.EqualTo(bantamToDo.Flagged ? 2 : 0);
}
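The BantamTodo class itself is not shown in this post; here is a hypothetical sketch of what it might look like, with the property names inferred from the tests above (the real implementation may differ):

// hypothetical sketch – property names match the JSON keys,
// so no mapping attributes are needed
public class BantamPerson
{
    public string Name { get; set; }
}

public class BantamProject
{
    public string Name { get; set; }
}

public class BantamTodo
{
    public string Name { get; set; }
    public string Description { get; set; }
    public string Categoy { get; set; }   // spelling as used in the tests above
    public bool Complete { get; set; }
    public bool Flagged { get; set; }
    public DateTime Created_At { get; set; }
    public DateTime Due { get; set; }
    public BantamPerson Author { get; set; }
    public BantamPerson Assigned_To { get; set; }
    public IList<BantamProject> Related_To { get; set; }

    public Todo ConvertToTodo()
    {
        // maps the bantam fields onto our own Todo type, as asserted in the tests
        return new Todo
        {
            Status = Complete ? Status.Closed : Status.Open,
            Description = Name + Environment.NewLine + Description,
            DateCreated = Created_At,
            DateDue = Due,
            // tags, creator, assignee, and priority would be mapped here as well
        };
    }
}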
The implementation was pretty straightforward. Since it was my first time working with MVC, and also my first time working with JSON, it took me some time. All in all – research, export, and meetings included – it took me about 12 hours.

If you have any suggestions for improvement, I would appreciate them. If you are trying to import JSON into .NET yourself, I hope this article helps.
by Oliver
15. July 2011 09:07
For our portal software discoverize I was looking for a way to create new modules faster and more reliably. The basic structure would always be the same, so a Visual Studio multi-file template seemed appropriate. Well, unfortunately I didn't find a way to create new folders with that approach. Multi-file templates really do what they say: they create multiple files from templates. Nothing else.

So I put together a short batch script that creates the directory structure needed for any new module. I can quickly open a new command line window by using any one of several Visual Studio extensions (e.g. PowerCommands for Visual Studio 2010) and simply run it.

Going back to Visual Studio, we have to include the new Feature folder in the project. Then hit Ctrl + Shift + A to open the Add New Item dialog, select 'Discoverize Module', and type Feature in the Name textbox (unfortunately, there seems to be no easy way to automatically put the name of the folder inside that textbox). This step will generate three code files that are the backbone of every module: FeatureConfig.cs, FeatureModule.cs, and FeatureViews.cs. Finally, our multi-file item template comes into play!

Handling the multi-file template

The multi-file item template for a new module consists of four files: the template definition file Module.vstemplate and the template code files Config.cs, Module.cs, and Views.cs. Those four files have to be packed into a zip file and copied to a folder underneath %UserProfile%\My Documents\Visual Studio 2010\Templates\ItemTemplates\ – I put this one into Visual C#\Code. That's how it appeared under the Visual C# –> Code section in the Add New Item dialog.

Since it is somewhat cumbersome to zip and copy updated versions of the template (especially during early development, where I keep adjusting and tuning the template code), I put together another batch file that does that for me. It basically does three things:

- get the name of the current folder to use as the name for the zip file (found the solution here)
- use 7-zip to zip the four files
- copy the zip file to the VS custom template directory

The real script contains some safety nets and more output, so that in case it doesn't work across all developer machines I can get quick feedback as to what exactly didn't work instead of just "it didn't work".

Happy Coding!
by Oliver
28. June 2011 01:27
Three-year-old code:

protected string CpeBehaviorIds()
{
    var cpeIds = "";

    var helpItems = GetHelpItems(divGlobal);

    foreach (var helpItem in helpItems)
        cpeIds += helpItem.CollapsiblePanelBehaviorID + ',';

    // remove comma at end
    if (cpeIds.Length > 0)
        cpeIds = cpeIds.Remove(cpeIds.Length - 1);

    return cpeIds;
}

protected string CpeExpandIds()
{
    var cpeIds = "";

    var helpItems = GetHelpItems(divGlobal);

    foreach (var helpItem in helpItems)
        cpeIds += helpItem.CollapsiblePanelExpandID + ',';

    // remove comma at end
    if (cpeIds.Length > 0)
        cpeIds = cpeIds.Remove(cpeIds.Length - 1);

    return cpeIds;
}

protected static List<HelpItem> GetHelpItems(Control control)
{
    var idList = new List<HelpItem>();

    if (control is HelpItem)
        idList.Add(control as HelpItem);
    else
        foreach (Control child in control.Controls)
            idList.AddRange(GetHelpItems(child));

    return idList;
}
New code:
protected string CpeBehaviorIds()
{
    return divGlobal.Controls<HelpItem>().Select(h => h.CollapsiblePanelBehaviorID).JoinNonEmpty(",");
}

protected string CpeExpandIds()
{
    return divGlobal.Controls<HelpItem>().Select(h => h.CollapsiblePanelExpandID).JoinNonEmpty(",");
}

public static string JoinNonEmpty(this IEnumerable<string> values, string separator)
{
    return String.Join(separator, values.Where(s => !string.IsNullOrEmpty(s)).ToArray());
}
LINQ – we love you!
Oliver
P.S. Controls<Type>() is another extension method defined like this:
/// <summary>
/// Returns all controls of the given Type that are found inside this control.
/// Searches recursively.
/// </summary>
public static IEnumerable<T> Controls<T>(this Control control) where T : Control
{
    var controls = control.Controls;

    if (controls.Count == 0) return new List<T>(0);

    var newColl = new HashedSet<T>();
    foreach (Control child in controls)
    {
        if (child is T)
            newColl.Add((T) child);

        var childColl = child.Controls<T>();
        foreach (T ctrl in childColl)
            newColl.Add(ctrl);
    }

    return newColl;
}
by Oliver
24. June 2011 21:47
Imagine we have the following list of ids of some kind of objects, and a mapping of some of the ids to some values (also ints here):

var ids = new List<int> { 6, 2, 5, 3, 7 };
var valueMap = new List<int[]> { new[] { 5, 15 }, new[] { 2, 12 }, new[] { 3, 13 } };
Now, if we want to return the results – a.k.a. the valueMap – in the same order as we have the ids, we can use the LINQ extension method .Join() like this:
var joined = ids.Join(valueMap, id => id, arr => arr[0], (id, arr) => string.Format("id: {0} - value: {1}", id, arr[1]));
Really convenient! Note that ids 6 and 7 don't show up in the result: .Join() performs an inner join, so ids without a matching entry in the valueMap are simply skipped. Let's look at the output:
Console.WriteLine(string.Join("\r\n", joined));

// Prints:
// id: 2 - value: 12
// id: 5 - value: 15
// id: 3 - value: 13
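If we wanted to keep the unmatched ids (6 and 7) as well, a left-join flavored variant using .GroupJoin() might look like this (a sketch of mine, not from the original snippet):

var leftJoined = ids.GroupJoin(valueMap, id => id, arr => arr[0],
        (id, arrs) => new { id, arrs })
    .SelectMany(x => x.arrs.DefaultIfEmpty(),
        (x, arr) => string.Format("id: {0} - value: {1}",
            x.id, arr == null ? "n/a" : arr[1].ToString()));

// Prints:
// id: 6 - value: n/a
// id: 2 - value: 12
// id: 5 - value: 15
// id: 3 - value: 13
// id: 7 - value: n/a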
By the way, I use FastSharp to write and test these kinds of small code snippets.
Happy Coding,
Oliver
by Oliver
25. March 2011 15:50
In my recent post Testing: trying to get it right I mentioned that a lot of our tests are of the dirty hybrid kind, somewhere between real Unit tests and real Integration tests. Focusing on the Unit test part, we're looking into using a mocking framework right now to change the way we write tests – most of all to decouple the different components that we use in the application under test. Wanting to use the fresh and hyped NuGet package manager to install the mocking frameworks, I chose among the ones that were both available there and also looked promising:

- RhinoMocks: the sample found in the introduction did not look at all inviting to me, so I dropped this one
- Telerik JustMock (Free Edition)
- Moq
- FakeItEasy

Really, it could not have been easier to get all these libraries into the project than using NuGet!

Sample code

So I went ahead and wrote a short and simple test just to get a feel of the syntax they offer. At first I wanted to mock a simple Repository using its interface. Here is what I ended up with:

Telerik JustMock: using syntax from their quick start manual

public void MockRepositoryInterfaceTest()
{
    // Arrange
    var repo = Mock.Create<ITodoRepository>();
    Mock.Arrange(() => repo.Todos)
        .Returns(new List<Todo> {new Todo {Description = "my todo"}}.AsQueryable);

    // Act + Assert
    repo.Todos.ToList().Count.Should().Be.EqualTo(1);
}
Moq: just visit the project homepage

public void MockRepositoryInterfaceTest()
{
    // Arrange
    var repo = new Mock<ITodoRepository>();
    repo.SetupGet(rep => rep.Todos)
        .Returns(() => new List<Todo> {new Todo {Description = "my todo"}}.AsQueryable());

    // Act + Assert
    repo.Object.Todos.ToList().Count.Should().Be.EqualTo(1);
}
FakeItEasy: just visit the project homepage

public void MockRepositoryInterfaceTest()
{
    // Arrange
    var repo = A.Fake<ITodoRepository>();
    A.CallTo(() => repo.Todos)
        .Returns(new List<Todo> {new Todo {Description = "my todo"}}.AsQueryable());

    // Act + Assert
    repo.Todos.ToList().Count.Should().Be.EqualTo(1);
}
In this first test ride, FakeItEasy and JustMock look pretty much identical, whereas the syntax Moq offers is a bit awkward with the SetupGet() method name and the need to call repo.Object to get the instance. I hope to examine further differences in use as the project moves on.
Mocking concrete classes
Since we're working with a large application that still has a lot of services and classes not implementing any interface, I also wanted to make sure we'd be able to mock concrete types. Well, this didn't go so well: JustMock and FakeItEasy simply returned an instance of the concrete class I gave them, and Moq complained that it couldn't override the Todos member. So I added the virtual modifier to it and the test is now green. Still, I got the impression that I was trying to do something that I shouldn't. The following blog post gives further motivation not to mock concrete classes: Test Smell: Mocking concrete classes. So I guess introducing interfaces as a kind of contract between classes is the way to go, but in the meantime, and where we can't avoid mocking concrete types, we'll be left using Moq.
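For illustration, here is a minimal sketch of what mocking a concrete class with Moq looks like once the member is overridable (a hypothetical TodoRepositoryDb – Moq can only intercept virtual members):

public class TodoRepositoryDb
{
    // virtual so that Moq can override it in the generated proxy
    public virtual IQueryable<Todo> Todos
    {
        get { /* real data access */ return null; }
    }
}

// in the test:
var repo = new Mock<TodoRepositoryDb>();
repo.SetupGet(r => r.Todos)
    .Returns(new List<Todo> { new Todo { Description = "my todo" } }.AsQueryable());
repo.Object.Todos.ToList().Count.Should().Be.EqualTo(1);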
That’s it for now, happy coding!
Oliver
by Oliver
9. February 2011 10:54
Read a great post on Steve Sanderson's blog with the promising title Writing Great Unit Tests – and it is definitely worth reading. He mentions another post with the title Integration Testing Your ASP.NET MVC Application which I also recommend.

One of the eye openers for me was this quote in his post: "TDD is a design process, not a testing process". Let me elaborate: "TDD is a robust way of designing software components ("units") interactively so that their behaviour is specified through unit tests." I must admit that I haven't read much yet about TDD – but we've been writing tests for quite some time now. Unfortunately, most of them probably fall into the Dirty Hybrids category that Sanderson sees between two good ends, one being True Unit Tests, the other being Integration Tests. So it looks like our goal should be to write tests in either of the outer categories and slowly but surely get rid of the time consuming, easy-to-break hybrid tests.

One problem that a lot of people writing web applications are confronted with at some point is testing the whole application stack, from browser action over server reaction to browser result. We've put some effort into abstracting away the HttpContext class to use the abstraction in both our frontend and in tests, but it falls short of being a worthy replacement for the real HttpRequest and HttpResponse classes. With all that work, we're still missing a possibility to use our abstraction in a third party URL rewriting engine, so the requests we are testing never get processed by it. We have tests for the rules that are applied by the engine, but for more sophisticated setups this simply is not enough. Thanks to a link in Steve Sanderson's blog post on integration testing ASP.NET MVC applications, I stumbled upon Phil Haack's HttpSimulator – and it looks just like the piece of the puzzle we've been missing all that time. (I have no idea how we didn't find it earlier.)

Another thing I'm new to is the kata. Kata is a Japanese word describing detailed choreographed patterns of movements practiced either solo or in pairs. A code kata is an exercise in programming which helps hone your skills through practice and repetition. At first, it might sound weird to practice and repeat the same exercise over and over again. But then again, if we think of musicians or sportsmen, it's not hard to see that they become great at what they do only by practicing. A lot. And they practice the same move (let's just call it that) over and over again. The idea behind the code kata is that programmers should do the same. This is what Dave Thomas, one of the authors of The Pragmatic Programmer, also promotes on his blog.

I stumbled upon a very interesting kata in a blog post by Robert Martin about test driven development, which deals with the evolution of tests and poses the assumption that there might be a kind of priority for test code transformations that will lead to good code: The Transformation Priority Premise. The kata he mentions further down is the word wrap kata. The post is rather long, so allow at least 15 minutes to read it. For me it was well worth it. The solution to a very simple problem, at least a problem that's easy to explain, can be quite challenging – and the post shows hands-on how writing good tests can help you find a good solution faster. It showed me (again) that writing good tests lets you write good code. Just like the quote above states: TDD is a software development process.
Happy coding – through testing ;-)

Oliver