.NET Developer Days 2017: Adding History to CRUD – Dino Esposito

by Oliver 29. November 2017 09:00

What's the problem with CRUD? Every time you update an existing table record, you automatically lose track of the state it had before.

Possible solution using SQL Server 2016+

From the Microsoft Docs [emphasis mine]:

SQL Server 2016 introduced support for system-versioned temporal tables as a database feature that brings built-in support for providing information about data stored in the table at any point in time rather than only the data that is correct at the current moment in time. Temporal is a database feature that was introduced in ANSI SQL 2011.

This is really great stuff. The docs on the topic are well written, so head over there to dive deeper into this very interesting feature. (A minimal sketch of what this looks like in T-SQL follows at the end of this post.)

Orchard CMS has versioning built-in

This might be a little-known fact, but the ASP.NET MVC based Orchard CMS – which is the engine that powers discoverize – has had support for versioned content for many years now. Just derive your record model class from ContentPartVersionRecord and you're set.

And that's all for now. Happy versioning!
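
Here's the promised sketch of a system-versioned table – the dbo.Product table, its columns, and the history table name are all made up for illustration:

CREATE TABLE dbo.Product (
    Id INT NOT NULL PRIMARY KEY CLUSTERED,
    Name NVARCHAR(100) NOT NULL,
    Price DECIMAL(10, 2) NOT NULL,
    -- period columns, maintained automatically by SQL Server
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));

-- every UPDATE or DELETE now archives the previous row version,
-- and you can query the table's state at any point in time:
SELECT * FROM dbo.Product
FOR SYSTEM_TIME AS OF '2017-11-01T00:00:00';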

Delete a Large Number of Rows from a Table in SQL Server

by Oliver 28. May 2014 12:09

Recently, we had to make some space available in one of our SQL Express instances that was getting close to its 10 GB limit of stored data, so I set out to delete some old data from two of our largest tables. One contained about half a million rows, the other a bit over 21 million.

Simple Deletion Would Take… Forever

The simplest SQL statement to delete all rows that were created before 2012 would be the following:

DELETE FROM [dbo].[Message] WHERE DateCreated < '20120101'

I can't even tell you how long this took because at 14 minutes I just cancelled the query execution (which took another 7 minutes to finish). This was the table with less than 500,000 rows, where we wanted to delete a bit more than 200,000 rows.

Break Delete Operation Into Chunks

Searching for a solution to the problem, I came across this blog post on breaking large delete operations into chunks. It shows in good detail how the simple version above behaves compared to running a loop that deletes a few tens of thousands of rows per iteration (a minimal sketch of such a loop appears further below). An interesting aspect I hadn't thought of at that point was the transaction log growth that can become a problem with large delete operations. Running a loop allows you to do a log backup (in full recovery mode) or a checkpoint (in simple mode) at the end of each iteration so that the log will grow much more slowly. Unfortunately, though, this didn't help with the execution time of the delete itself, as you can also see from the graphs presented in the above post.

Disable Those Indexes!

It turns out our [Message] table had six non-clustered indexes on it, all of which had to be written to for every row that was deleted. Even if those operations are fast, their processing time adds up over a few hundred thousand iterations. So let's turn them off! In fact, let's turn off only those that won't be used during our delete query. [We have one index on the DateCreated column which will be helpful during execution.] This stackoverflow answer shows how to create some dynamic SQL to disable all non-clustered indexes in a database. I've modified it slightly to disable only the indexes of a given table:

Disable/Enable Table Indexes

DECLARE @table AS VARCHAR(MAX) = 'Message';
DECLARE @sqlDisable AS VARCHAR(MAX) = '';
DECLARE @sqlEnable AS VARCHAR(MAX) = '';

SELECT
    @sqlDisable = @sqlDisable + 'ALTER INDEX ' + idx.name + ' ON '
                    + obj.name + ' DISABLE;' + CHAR(13) + CHAR(10),
    @sqlEnable = @sqlEnable + 'ALTER INDEX ' + idx.name + ' ON '
                    + obj.name + ' REBUILD;' + CHAR(13) + CHAR(10)
FROM sys.indexes idx
JOIN sys.objects obj
    ON idx.object_id = obj.object_id
WHERE idx.type_desc = 'NONCLUSTERED'
    AND obj.type_desc = 'USER_TABLE'
    AND obj.name = @table;

RAISERROR(@sqlDisable, 0, 1) WITH NOWAIT;
RAISERROR(@sqlEnable, 0, 1) WITH NOWAIT;
--EXEC(@sqlDisable);
--EXEC(@sqlEnable);

Now, with those indexes disabled, the simple delete operation took a lot less than a minute! Even in the case of our 21-million-row table, deleting 7 million rows took only 1:02 on my machine. Of course, after deleting the unwanted rows, you need to re-enable the indexes, which took another minute, but all in all I'm happy with the result.
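
For illustration, here's a minimal sketch of the chunked delete mentioned above – the batch size is arbitrary, and the CHECKPOINT assumes a database in simple recovery mode (in full recovery mode you'd take a log backup instead):

-- delete in batches of 50,000 rows until nothing is left to delete
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM [dbo].[Message]
    WHERE DateCreated < '20120101';

    SET @rows = @@ROWCOUNT;

    -- keep the transaction log from growing (simple recovery mode);
    -- in full recovery mode, take a log backup here instead
    CHECKPOINT;
END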
Copy Data to New Table and Drop Old Table

One other way of deleting rows that I've used in combination with changing the table schema at the same time is the following (a T-SQL sketch of these steps is at the end of this post):

1. use a temporary table into which you copy all the rows you want to keep (the schema of which I modified to meet our new needs)
2. delete the original table
3. rename the temporary table to the original table's name
4. recreate all indexes you had defined before

This is basically what SSMS generates for you when you change the schema of a table, except for the indexes – you have to recreate them yourself. As you can imagine, this approach becomes faster and leaves a smaller transaction log footprint as the amount of data to delete grows. It won't have any benefit if you delete less than half of the table's rows.

Choose the right tool for the job

There are quite a few other approaches and tips out there on how to speed up your deletion process. Which of those will actually help you get your deletion job done faster depends a lot on your concrete situation. I had to experiment quite a bit to find the sweet spot, but now that I've seen a few approaches I'm able to make a better decision in the future.
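
As promised, here's a rough sketch of the copy-rename approach – keep in mind that SELECT … INTO copies neither constraints nor indexes, so step 4 is on you; the index at the end is just an example:

-- 1. copy the rows you want to keep into a new table
SELECT *
INTO [dbo].[Message_Tmp]
FROM [dbo].[Message]
WHERE DateCreated >= '20120101';

-- 2. drop the original table
DROP TABLE [dbo].[Message];

-- 3. rename the new table to the original name
EXEC sp_rename 'dbo.Message_Tmp', 'Message';

-- 4. recreate the indexes (and constraints) you had before, e.g.
CREATE NONCLUSTERED INDEX [IX_Message_DateCreated]
    ON [dbo].[Message] (DateCreated);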

Retrieving random content items (rows) from a SQL database in Orchard with HQL queries

by Oliver 22. February 2014 12:37

We're adding some Premium functionality to discoverize right now, and part of that is the so-called Premium block, which is a showcase of six Premium entries. Now, choosing the right entries for that block is the interesting part: as long as we don't have six Premium entries to show, we want to fill up the leftover space with some random entries that haven't booked our Premium feature yet.

Get random rows from SQL database

There are plenty of articles and stackoverflow discussions on the topic of how to (quickly) retrieve some random rows from a SQL database. I wanted to get something to work simply and quickly, not necessarily with high performance. Incorporating any kind of hand-crafted SQL query was really the last option, since it would mean getting hold of an ISessionLocator instance to get at the underlying NHibernate ISession to then create a custom SQL query and execute it. Not my favorite path, really.

Luckily, the IContentManager interface contains the method HqlQuery, which returns an IHqlQuery containing these interesting details:

/// <summary>
/// Adds a join to a specific relationship.
/// </summary>
/// <param name="alias">An expression pointing to the joined relationship.</param>
/// <param name="order">An order expression.</param>
IHqlQuery OrderBy(Action<IAliasFactory> alias, Action<IHqlSortFactory> order);

…and IHqlSortFactory contains a Random() method. This finally got me going!

HQL queries in Orchard

HQL queries are a great feature in (N)Hibernate that allow you to write almost-SQL queries against your domain models. I won't go into further detail here, but be sure to digest that! Orchard's IContentManager interface contains the method HqlQuery() to generate a new HQL query. Unfortunately, there's almost no usage of this feature throughout the whole Orchard solution. So let me document here how I used HqlQuery to retrieve some random entries from our DB:

// retrieve <count> items of type "Entry", sorted randomly
return contentManager.HqlQuery()
    .ForType("Entry")
    .OrderBy(alias => alias.ContentItem(), sort => sort.Random())
    .Slice(0, count)
    .Select(item => item.Id);

And one more:

// retrieve <count> older items filtered by some restrictions, sorted randomly
return contentManager.HqlQuery()
    .ForPart<PremiumPart>()
    .Where(alias => alias.ContentPartRecord<PremiumPartRecord>(),
           expr => expr.Eq("Active", true))
    .Where(alias => alias.ContentPartRecord<PremiumPartRecord>(),
           expr => expr.Lt("BookingDateTime", recentDateTime))
    .OrderBy(alias => alias.ContentItem(), sort => sort.Random())
    .Slice(0, count)
    .Select(item => item.Id);

Even with the source code at hand, thanks to Orchard's MIT license, the implementation of this API in the over 600-line-long DefaultHqlQuery is not always straightforward to put into practice. Most of all, I was missing a unit test suite that would show off some of the core features of this API, and I'm honestly scratching my head at how someone could build such an API without unit tests!

Random() uses newid() – monitor the query performance

The above solution was easy enough to implement once I got my head around Orchard's HQL query API. But be aware that this method uses the newid() approach (more here) and thus needs to a) generate a new id for each row in the given table and b) sort all of those ids to then retrieve the top N rows. Orchard has this detail neatly abstracted away in the ISqlStatementProvider implementation classes.
Here's the relevant code from SqlServerStatementProvider (identical code is used for SqlCe):

public string GetStatement(string command) {
    switch (command) {
        case "random":
            return "newid()";
    }
    return null;
}

For completeness, here's the generated SQL from the first query above (with variable names shortened for better readability):

select content.Id as col_0_0_
from Test_ContentItemVersionRecord content
    inner join Test_ContentItemRecord itemRec
        on content.ContentItemRecord_id = itemRec.Id
    inner join Test_ContentTypeRecord typeRec
        on itemRec.ContentType_id = typeRec.Id
where ( typeRec.Name in ('Entry') )
    and content.Published = 1
order by newid()
OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY

This approach works well enough on small data sets but may become a problem if your data grows. So please keep a constant eye on all your random queries' performance. Happy HQL-ing!

Unexpected EOF encountered in BCP data-file

by Oliver 15. September 2011 17:05

Today, I tried importing a CSV file like the following into one of our MS SQL Server databases:

Id;Latitude;Longitude
4610;43.7119;-1.0737
5502;49.4297;-1.806
11360;46.9343;-1.8875

I tried it using the following command line:

bcp GeoDataImport in geodata.csv -w -t; -T

but that threw the mentioned error: “Unexpected EOF encountered in BCP data-file”

cmd> bcp GeoDataImport in geodata.csv -w -t; -T

Starting copy...
SQLState = S1000, NativeError = 0
Error = [Microsoft][SQL Server Native Client 10.0]Unexpected EOF encountered in BCP data-file

0 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total : 1

I’ve had this problem before and somehow managed to remember that it might have something to do with the encoding of the file. So I opened it with Notepad++, where you can easily check and change the file encoding, and found it was ANSI encoded. The -w switch, however, makes bcp expect Unicode (UCS-2 Little Endian) data, so I changed the encoding, saved the file, and imported it again – with success. (See the BULK INSERT sketch at the end of this post for an alternative that skips the re-encoding.)

UCS-2 might be something you rarely hear about – that’s because it’s been superseded by UTF-16, but in most situations they are pretty much identical (check out http://en.wikipedia.org/wiki/UTF-16/UCS-2 for more info).

That’s all for now – happy coding!
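
Alternatively – a sketch, assuming the target table GeoDataImport already exists and the file lives at C:\data\geodata.csv – you could leave the file ANSI-encoded and declare the code page instead, e.g. with BULK INSERT (bcp has a similar -C code-page switch for character-mode imports with -c):

-- import the ANSI-encoded file as-is by declaring the code page
-- ('ACP' = the system's ANSI code page)
BULK INSERT GeoDataImport
FROM 'C:\data\geodata.csv'
WITH (
    CODEPAGE = 'ACP',
    FIELDTERMINATOR = ';',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2  -- skip the header row
);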

Missing Index: CREATE NONCLUSTERED INDEX

by Oliver 17. August 2011 18:28

Lately, we encountered a problem with the speed of our search on www.camping.info for a certain set of search criteria. It sometimes took several seconds before the updated results were shown. This seemed most likely to be a problem with the database, so I went to investigate the offending queries using the wonderful NHibernate Profiler and found a very slow query.

I copied the long-running query into a new query window in SSMS (SQL Server Management Studio). Since we use MS SQL Express on our production servers, we don’t get the advanced database tuning advisor features of the full edition. Still, when you right-click on any query window – in the full as well as the Express edition – you’ll see the option “Include Actual Execution Plan”, which already offers a lot of detail. When you right-click on the execution plan, you’ll see the option “Missing Index Details…” – there you get a CREATE INDEX statement that is ready to use once you give a name to the new index (a made-up example of such a statement is at the end of this post).

I did this for three indexes, and now the same query runs in 115 ms instead of 1993 ms – that’s an improvement of 94%! Even if DB queries lasting longer than 50-60 ms are not really fast anymore, we’ve still got quite an improvement here.

Well, that’s it. Using a well-priced tool like NHibernate Profiler to identify slow queries and the Execution Plan feature of SSMS, we were able to get quite a performance improvement in a short time.

Happy Coding and Optimizing,
Oliver
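
For illustration, here's roughly the shape of statement the “Missing Index Details…” window produces – the table, columns, and index name here are entirely made up:

-- hypothetical example; substitute your own table, columns, and name
CREATE NONCLUSTERED INDEX [IX_Accommodation_CountryId_Rating]
ON [dbo].[Accommodation] ([CountryId], [Rating])
INCLUDE ([Name], [City]);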

SQL Server “Auto-Close” Leads to Timeouts & I/O Stress

by robert 23. March 2010 17:14

We just received a monitoring SMS because a customer site was down. A quick check couldn't confirm this – the event log, however, could. The ASP.NET error details say:

Exception information:
    Exception type: SqlException
    Exception message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

But how can that be? Even though a large number of sites are set up on the server, the dual quad-core machine with 8 gigabytes of RAM is hardly under load. The large number of MSSQL$EXPRESS events – “Starting up database 'XY'.” – reveals which direction the problem is coming from.

The frequent restarting of the databases is explained like this: some customer installations are rarely used. The “Auto-Close” setting (found under “Database Properties”, right-click on the DB) causes SQL Server to “shut down” the database and release its resources. That happens when the last user disconnects from SQL Server – in this case, when the connection pool is closed, which happens when the website shuts down, i.e. when the worker process of a web application pool is terminated (see the IIS connection pool properties), by default after 20 minutes.

Our monitoring, however, requests many sites almost simultaneously at a recurring interval (@Mark, which one exactly?), thereby “restarting” them, which means that many application pools and databases have to “start up” again at the same time. The current theory is that this causes I/O stress and leads to the error above. Rarely (less than once a month), this then escalates to the error message “SQL Server has encountered 1 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file”, which triggers an error SMS – and which should no longer occur once the idle time-out is extended and SQL Server's auto-close is turned off.
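
Turning auto-close off can be done in the database properties dialog or with a one-liner – a sketch, with 'XY' standing in for the actual database name:

-- keep the database online even after the last connection is closed
ALTER DATABASE [XY] SET AUTO_CLOSE OFF;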

T-SQL StartsWith

by robert 28. May 2009 19:26

It's a pity that MS SQL Server doesn't support syntactically nicer queries out of the box. CAST in particular is cumbersome, and the DATETIME functions, for example, are neither convenient nor particularly complete. I'd wish for an IntelliSense-friendly syntax that treats columns/projections as objects and, depending on the type, offers a set of functions:

SELECT * FROM SomeTable WHERE Name.StartsWith('a')

Here's the same thing in valid SQL:

SELECT * FROM SomeTable WHERE Name LIKE 'a%'

or (note that SUBSTRING in T-SQL is 1-based):

SELECT * FROM SomeTable WHERE SUBSTRING(Name, 1, 1) = 'a'
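
If you want some of that method-like readability today, a small scalar UDF gets you close – dbo.StartsWith is my own invention, and note that wildcard characters (%, _) in the prefix would be interpreted by LIKE:

-- hypothetical helper mimicking a StartsWith method
CREATE FUNCTION dbo.StartsWith(@value NVARCHAR(4000), @prefix NVARCHAR(4000))
RETURNS BIT
AS
BEGIN
    RETURN CASE WHEN @value LIKE @prefix + '%' THEN 1 ELSE 0 END;
END;
GO

-- usage: reads almost like the wished-for syntax
SELECT * FROM SomeTable WHERE dbo.StartsWith(Name, 'a') = 1;

Be aware, though, that a scalar UDF in the WHERE clause prevents index seeks, whereas plain Name LIKE 'a%' can use an index on Name.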

MS-SQL Server 2005, Instance Alias

by admin 6. February 2008 00:20

When you install SQL Server, the installed instance gets a unique name through which the database can be reached. The reason for the instance name is that several SQL Servers can run in parallel. On my machine, for example, SQL Server 2005 and an Express Edition are running, and until a few weeks ago a SQL Server 2000 was, too.

When several developers work on one project, this leads to problems: everyone enters their own instance name, in the form <machine-name>\<instance-name>, into their configuration for the database connection. One way of dealing with this is to create a separate configuration file for it and to keep that file out of the version control system. Alternatively, though, you can set up an instance alias for an instance. The simplest way there:

1:) "Windows key" -> "Run", then enter "compmgmt.msc".
2:) Then, in Computer Management, select Aliases ..
3:) .. and finally create the alias ..

.. and the work is done. The connection string can then look like this:

Server=LocalMssql;Database=foo;uid=bar;pwd=whatever;

Now all developers just need to agree on the same database user and password, and the many commits of a changed configuration file are no longer an issue. (When only a single server is installed, it's possible to use "(local)" in the connection string instead of the instance name, but that's not an option with several instances.)

Scalable Windows Hosting

by admin 7. October 2007 03:44

It happened on a Tuesday. The new buzzword-ready Web 2.0 app had gone live the Friday before, after 4 months of hard work. To celebrate, a bottle of wine was ordered at the Italian restaurant that evening. It was the first free evening in many weeks. The wine took effect immediately – for the untrained, a buzz comes cheap, and cheap is good when you plan to establish a young product on the market slowly and every euro spent belongs to the company, not to a bank or a venture capitalist.

The first 4 days had passed quietly. As expected, only a few users registered, because despite automated tests and a thorough analysis of how the baby would behave under load, the application first had to prove itself in live operation. It was daily routine; slightly bored, one of the developers was working through the last few days of Mashable and Techcrunch. No big highlights, the usual. Most of the 275 unread posts landed in the trash after a short glance at the subject and author. Just like “Special Interest Trading Platform Launches” … wait … “Special Interest Trading Platform” ... (??!) ... Ctrl-Z brought the entry just seen back out of the trash and into the current view. It was 23:45.

Of the 900,000 Techcrunch readers, 14,325 had visited the young site within the first 30 minutes and set up 1,500 new “Special Interest Trading Platforms”. Around 15-20 of those were probably more than just a test. The web server was wheezing but, at 25% CPU load, still holding up quite well; the database server was running at 80-90% utilization and was at its limit. By 1:17, Mashable and a good three dozen other bloggers had picked up the Techcrunch post and reported on the new trading platform themselves. From 1:26 on, the web server could no longer be reached via http. With a great deal of patience, we finally managed to replace the application with a friendly thank-you page at 3:40. The weeks ahead promised to be wonderful.

......................................................................

Okay, enough wet dreams. Flexiscale offers scalable hosting. The promise is that you can scale up from one machine to a few thousand machines within minutes. But instead of supporting only Linux, like Amazon EC2/S3, Flexiscale offers Windows 2003 Server and Windows SBS. Windows SBS is particularly interesting since it bundles an MS SQL Server license. (Although one would have to take a closer look at the exact SBS license to decide whether Windows SBS can also legally be used for hosting.) The prices are absolutely competitive. We will definitely take a closer look at this. I can hardly wait to get access :-)

[via Techcrunch]

Weitere "Group By" Variation

by admin 5. September 2007 04:01

Here's another variant that differs only slightly from variant 1 of the previous post. Interestingly – logically, even – the execution plan is slightly different, although I wouldn't want to make any ad hoc claims about which would be faster with real and optimized data. There's also an interesting article about what "Query Cost (relative to the batch)" means.

About Oliver

I build web applications using ASP.NET and have a passion for JavaScript. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code. Love to spend time with my wife and our children. My profile on Stack Exchange, a network of free, community-driven Q&A sites

About Anton

I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.