Rolling your own identities

I must say that I am not a database guy and my SQL skills are limited. But that is not an acceptable excuse, especially when you work in a small team with no dedicated database person. We have no choice but to hone our database skills and be ready to switch between roles. That is exactly what I had to do when one of the system's users reported an issue where the database generated the same ID for two different records. I delved into the stored procedure (SP) and found what the database world calls "rolling your own identities". It is a technique (hack, shortcut, or whatever you want to call it) that generates ID numbers by taking the MAX of a column and adding 1 to it. It is a widely known solution in the community for situations where displaying a primary key column (with its seed and increment values set) on the client interface is not an option: for example, when it is a GUID column, or when it has out-of-sequence ID numbers due to frequent delete and insert operations.

So, this is what I saw in the SP, which is a textbook example of rolling your own identities.


BEGIN TRANSACTION 
/* select query */ 
SELECT TOP 1 @ItemNumber = ItemNumber + 1
FROM ItemTable
WHERE {some_condition}
ORDER BY ItemNumber DESC

/* insert query */
INSERT INTO ItemTable (ItemNumber, {column2}, {column3})
VALUES (@ItemNumber, {some_value}, {some_value})
COMMIT TRANSACTION

The /* select query */ above can also be written using the MAX function.


SELECT @ItemNumber = MAX(ItemNumber) + 1
FROM ItemTable
WHERE {some_condition}

What exactly is the problem?

If you notice, the select and insert queries above are wrapped in a transaction, which is important in a distributed environment where multiple clients access the database (or at least this SP) at the same time. I always believed that the BEGIN TRANSACTION statement is the ultimate savior, that it internally creates a critical section and prevents two transactions from accessing the same database resource at the same time, like a lock statement in C#. If that were true, why did it allow two different transactions to execute the select query concurrently, letting both read the same max ItemNumber?
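The race is easy to reproduce outside the database. Below is a minimal, hypothetical simulation in Python, with threads standing in for two concurrent transactions (all names are illustrative): both workers read the same MAX before either one inserts, so both compute the same ID.

```python
import threading

item_numbers = [1, 2, 3]        # existing ItemNumber values
barrier = threading.Barrier(2)  # force both "transactions" to read before either writes
results = []

def roll_own_identity():
    new_id = max(item_numbers) + 1  # SELECT MAX(ItemNumber) + 1
    barrier.wait()                  # both selects complete before any insert
    item_numbers.append(new_id)     # INSERT
    results.append(new_id)

threads = [threading.Thread(target=roll_own_identity) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # [4, 4] - a duplicate ID, just like the reported bug
```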

Research ensues

My first clue was the transaction isolation level, a setting which controls the default locking behavior. The default transaction isolation level in SQL Server is Read Committed, which means only data committed by other transactions can be read, hence dirty reads are avoided. But this is not what I wanted.

What do you want then?

I wanted a way to bar other users from running the select query until the transaction had finished executing both the select *AND* insert statements. In other words, I wanted to turn my transaction into a truly atomic unit of work, to make sure every transaction gets a fresh ItemNumber whenever it runs its select statement. Since Read Committed doesn't serve the purpose, I decided to read up on the other transaction isolation levels and locking behaviors.

Read Uncommitted:

As explained in many places on the internet, under this level one can read data that has been modified by other transactions but not yet committed. No shared locks are acquired and no exclusive locks are honored, so dirty reads are not prevented. It can also result in phantom rows or nonrepeatable reads. I certainly didn't want this, as I needed more restrictive locking, not less. I moved on to Repeatable Read.

Repeatable Read:

This one took a while to get into my head. This isolation level does not allow other users to update the data that has been read by the select query. But why do they call it repeatable? Because within the SAME transaction we may want to issue the same SELECT statement multiple times and see the same data each time.

Transaction 1

 
SELECT ItemNumber, ItemDetail FROM ItemTable
WHERE ItemNumber < 10

Transaction 2


UPDATE ItemTable
SET ItemDetail = {some_new_value}
WHERE ItemNumber = 5

Transaction 1 continues...

   
SELECT ItemNumber, ItemDetail FROM ItemTable
WHERE ItemNumber < 10

To make sure Transaction 2 doesn't update the records we have selected between multiple reads (meaning we may repeat our read later in the transaction), SQL Server maintains a lock on all the rows we have read until the transaction ends. This is certainly more restrictive locking than Read Committed: we keep ownership of the rows we have read until the end of the transaction.

Unfortunately, even Repeatable Read can't create the kind of critical section I talked about earlier, for one reason: it still allows INSERTs. Yes, no other users/transactions can update the rows locked by our transaction, but they can always insert new rows amid the rows we have already locked. The newly inserted rows are called phantom rows. And that is exactly our problem: we don't want to let other users insert until our transaction is finished. Hmm... since I was desperate to find a solution, I moved on and read about the next isolation level.

Serializable:

This isolation level places a range lock on all the data we have read, meaning the rows falling within that range can neither be updated nor deleted, and no insertion is possible within that range either. This level does what I wanted, as it is the most restrictive locking. We can use the HOLDLOCK table hint, which has the same effect as setting Serializable on all the tables referenced in the SELECT statements of a transaction.

/* select query */  
SELECT MAX(ItemNumber) FROM ItemTable WITH (HOLDLOCK)

But why do they say "Serializable is prone to cause deadlocks"? Should I be worried about it? I think yes, because now that I am eyeing the most optimized solution, it is only fair to take seriously every warning that comes with it.

Serializable and Repeatable Read may cause deadlocks

I can’t explain this better than MSDN:

“The transaction reads data, acquiring a shared (S) lock on the resource (page or row), and then modifies the data, which requires lock conversion to an exclusive (X) lock. If two transactions acquire shared-mode locks on a resource and then attempt to update data concurrently, one transaction attempts the lock conversion to an exclusive (X) lock. The shared-mode-to-exclusive lock conversion must wait because the exclusive lock for one transaction is not compatible with the shared-mode lock of the other transaction; a lock wait occurs. The second transaction attempts to acquire an exclusive (X) lock for its update. Because both transactions are converting to exclusive (X) locks, and they are each waiting for the other transaction to release its shared-mode lock, a deadlock occurs.”

To avoid this potential deadlock problem, update (U) locks are used. Since HOLDLOCK only applies shared range locks, we can complement it with the UPDLOCK hint to turn them into update range locks.


/* to see what locks HOLDLOCK applies; the same can be repeated for UPDLOCK as well */
BEGIN TRANSACTION
SELECT MAX(ItemNumber) FROM ItemTable WITH (HOLDLOCK)
EXEC sp_lock @@SPID
ROLLBACK TRANSACTION

So, that means our final query should look something like this:

 
BEGIN TRANSACTION
/* select query */
SELECT @ItemNumber = MAX(ItemNumber) + 1 FROM ItemTable WITH (HOLDLOCK, UPDLOCK)

/* insert query */
INSERT INTO ItemTable (ItemNumber, {column2}, {column3})
VALUES (@ItemNumber, {some_value}, {some_value})
COMMIT TRANSACTION

With the appropriate isolation level and locking in place, we should now be able to generate our own *unique* identities in a distributed environment without worrying about deadlocks.
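Sketched outside the database (Python threads standing in for transactions, names illustrative, not the SQL Server mechanism itself), the fix amounts to making the read-then-insert sequence a single critical section, which is what the range lock buys us:

```python
import threading

item_numbers = [1, 2, 3]
range_lock = threading.Lock()  # plays the role of the key-range lock
results = []

def roll_own_identity_safely():
    with range_lock:                    # SELECT + INSERT as one atomic unit
        new_id = max(item_numbers) + 1
        item_numbers.append(new_id)
        results.append(new_id)

threads = [threading.Thread(target=roll_own_identity_safely) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [4, 5] - every transaction gets a unique ItemNumber
```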


HTH,

Using Generics Judiciously

Generics is one of the important features C# offers, and it was one of the biggest changes announced with C# 2.0. As the name suggests, it helps developers write generic code using generic (unknown) types that are replaced with actual types at JIT time.

Generic types and functions offer many advantages: they enhance performance, make code more expressive, and move a lot of safety checks from execution time to compile time. The advantage I like most is that they avoid casting and duplicate code, something we had to live with before generics were introduced. In the past I have used the Object type to make code generic when the actual (specific) types were not known in advance.

The thing I learnt today is that you cannot decide, just by looking at function signatures, whether a set of functions should be converted into a generic one. They may share the same name but not the same implementation. For example, consider the functions below, which I encountered:



Public Shared Function GetXSLTTransformedXML(ByVal xmlLocation As String, ByVal xsltLocation As String, ByVal args As XsltArgumentList) As String

    Dim document As Linq.XDocument
    Dim xsltTransformer As New Xsl.XslCompiledTransform()
    Dim transformedXML As String = String.Empty

    Try
        document = Linq.XDocument.Load(xmlLocation)
        xsltTransformer.Load(xsltLocation, New Xsl.XsltSettings(False, True), New XmlUrlResolver())
        transformedXML = GetXSLTTransformedXML(document, xsltTransformer, args)
    Catch ex As Exception
        Throw ' rethrow without resetting the stack trace
    End Try

    Return transformedXML

End Function

Public Shared Function GetXSLTTransformedXML(ByVal document As Linq.XDocument, ByVal xslt As XslCompiledTransform, ByVal args As XsltArgumentList) As String

    Dim memStream As New System.IO.MemoryStream()
    Dim writer As XmlTextWriter = Nothing
    Dim streamReader As StreamReader
    Dim transformedXML As String = String.Empty

    Try
        writer = New XmlTextWriter(memStream, System.Text.Encoding.UTF8)
        xslt.Transform(document.CreateReader(), args, writer)
        writer.Flush()
        memStream.Position = 0
        streamReader = New StreamReader(memStream)
        transformedXML = streamReader.ReadToEnd()
    Catch ex As Exception
        Throw ' rethrow without resetting the stack trace
    Finally
        If writer IsNot Nothing Then writer.Close()
        memStream.Close()
    End Try

    Return transformedXML

End Function

I got excited when I looked at them and thought: let's convert these into one generic function, since both share the same name and differ only in their parameter types. Later, I realized that I would not get the advantages generics bring to the code if the functions do not share the same implementation as well.

For example, in the functions above I want to execute a different branch of code based on the type of the first parameter. Knowing this, if I went ahead and merged them into one generic function, I would end up type-casting the first parameter to decide which code to execute. Having to type-cast it kills the whole purpose of using generics. Not only that, it also kills the intuitiveness of the code. A fellow developer can easily tell which overload to call when he has an XML document location, leaving the other overload for those who already have the XML document loaded in memory. If it were generic, that choice would be much harder to make.

I don't mean that generics are bad in any way; they are just not the best fit for this kind of situation. If I had a MakeList(Of T) kind of function, I would not think twice.


Public Function MakeList(Of T)(ByVal first As T, ByVal second As T) As List(Of T)
    'Builds the same list containing the parameters irrespective of their type.
    Dim result As New List(Of T)
    result.Add(first)
    result.Add(second)
    Return result
End Function
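For comparison, here is the same MakeList idea sketched as a generic function in Python (a hypothetical analogue, not from the original code): one implementation serves every element type, which is exactly what makes it an easy call for generics.

```python
from typing import List, TypeVar

T = TypeVar("T")

def make_list(first: T, second: T) -> List[T]:
    """Builds the same list regardless of the element type."""
    return [first, second]

print(make_list(1, 2))      # [1, 2]
print(make_list("a", "b"))  # ['a', 'b']
```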

HTH,

Posting data from a web service to a web page

Writing and consuming web services is one of the important things we do at work. Systems running a variety of operating systems, each with a completely different set of configurations, make it difficult to exchange data easily. Things get even worse when there is a difference in time zones: we really didn't want to wait for our integration partner to come back to us with feedback on whether they had received the data or not. Instead, we decided to take a "just try it" approach and post (HTTP POST) the web service data to a web page on localhost, and it worked. I know it is not rocket science, but it saved me time and effort, so I thought I would share it.

So, this is what we do to send data to a web service:



Dim stream As IO.StreamReader
Dim strXML As String
Dim webreq As Net.WebRequest
Dim bytes As Byte()
Dim requestStream As IO.Stream
Dim webResponse As Net.WebResponse
Dim responseStream As IO.StreamReader

Try

    stream = IO.File.OpenText("C:\ServiceData.xml")
    strXML = stream.ReadToEnd()
    stream.Close()

    webreq = Net.HttpWebRequest.Create("{Web-Service-Uri}")
    webreq.Method = "POST"
    webreq.ContentType = "application/x-www-form-urlencoded"

    strXML = "data=" & HttpUtility.UrlEncode(strXML)
    bytes = System.Text.Encoding.UTF8.GetBytes(strXML)
    webreq.ContentLength = bytes.Length

    requestStream = webreq.GetRequestStream()
    requestStream.Write(bytes, 0, bytes.Length)
    requestStream.Close()

    webResponse = webreq.GetResponse()
    responseStream = New IO.StreamReader(webResponse.GetResponseStream())
    responseStream.Close()
    webResponse.Close()
    webreq = Nothing

Catch ex As System.Net.WebException

    Dim responseFromServer As String = ex.Message & " "
    If ex.Response IsNot Nothing Then
        Using response As Net.WebResponse = ex.Response
            Dim data As IO.Stream = response.GetResponseStream()
            Using reader As New IO.StreamReader(data)
                responseFromServer += reader.ReadToEnd()
            End Using
        End Using
    End If
End Try

The above code works fine if you want to send data to a web service. However, to send the data to a web page instead, all we have to do is replace the service URI {Web-Service-Uri} in the HttpWebRequest.Create call with the web page address, something like this:


webreq = HttpWebRequest.Create("{Web-Service-Uri}")  ' posting to a web service

webreq = HttpWebRequest.Create("http://localhost:1886/service-data-receiver.aspx")  ' posting to a web page

The part I had to figure out was how to access, from my page, the data sent by the web service. Since the request did not originate from a page-based model, I couldn't use Request.Form or Request.QueryString. After some research I found that the plain Request object is all that is needed to access the data. Yes, that is it.



Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load

    Dim response1 As String = HttpUtility.UrlDecode(Request("data"))
    Dim doc As XDocument = XDocument.Parse(response1)
    doc.Save("C:\inetpub\wwwroot\service-data-received.xml")
    Response.Write(response1)

End Sub
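The encode/decode round trip above (HttpUtility.UrlEncode on the sender, Request("data") with UrlDecode on the page) is easy to sketch outside .NET. Here it is in Python, purely to illustrate what travels in the form body (the sample XML is made up):

```python
from urllib.parse import quote_plus, parse_qs

xml = "<books><book><title>Essential .NET</title></book></books>"

# sender: what "data=" & HttpUtility.UrlEncode(strXML) produces
body = "data=" + quote_plus(xml)

# receiver: what Request("data") effectively extracts from the form body
decoded = parse_qs(body)["data"][0]

print(decoded == xml)  # True - the XML survives the round trip intact
```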

HTH,

LINQ to XML ― brings new reasons to use more XML

I recently did a small talk about the benefits of using new XML API “LINQ to XML”. According to MSDN:

LINQ to XML provides an in-memory XML programming interface that leverages the .NET Language-Integrated Query (LINQ) Framework. LINQ to XML uses the latest .NET Framework language capabilities and is comparable to an updated, redesigned Document Object Model (DOM) XML programming interface.

The talk went very well, and luckily I managed to get the attention of the audience, because the approach I adopted was a little different from the norm. Instead of talking plainly about the new functions and properties, I tried to draw a comparison between the way we deal with XML using the existing APIs and the new one. I also shared why VB developers are more excited about this API than the C# guys, and what makes them feel more privileged.

The core functionality of the new API revolves around 3 key concepts.

  1. Functional Construction
  2. Context-Free XML creation
  3. Simplified Namespaces

Functional Construction:

This is the ability to create an entire XML tree, or part of it, in just one statement. If you are someone like me who doesn't play with XML day in and day out, you probably have to stop and recall how you create XML and with which API. That is understandable, because the depth and breadth of the XML API choices available to us today is overwhelming. For example:

  • XmlTextReader: for low-level parsing of XML documents.
  • XmlTextWriter: a fast, non-cached, forward-only way of generating XML.
  • XmlReader: a read-only, forward-only API generally used to deal with large XML documents.
  • XmlDocument, XmlNode, XPathNavigator, and so on.

So, if I want to create the book XML below in my application, I can use XmlTextWriter.WriteStartElement(), or XmlDocument.CreateNode() if document manipulation is also required.



<books>
  <book>
    <title>Essential .NET</title>
    <author>Don Box</author>
    <author>Chris Sells</author>
    <publisher>Addison-Wesley</publisher>
  </book>
</books>

There is nothing wrong with either of the approaches mentioned above, except that they take more lines of code, and more time as well, just to churn out a tiny piece of XML. LINQ to XML aims to solve this problem by introducing XElement, which takes a params array in one of its constructors, allowing us to write an entire XML tree in one statement.



XElement elements = new XElement("books",
    new XElement("book",
        new XElement("title", "Essential .NET"),
        new XElement("author", "Don Box"),
        new XElement("author", "Chris Sells"),
        new XElement("publisher", "Addison-Wesley")
    )
);

Context-Free XML creation:

When creating XML using the DOM, everything has to be created in the context of a parent document. This document-centric approach to creating XML results in code that is hard to read, write, and debug. In LINQ to XML, elements and attributes are given first-class status. So, rather than going through factory methods to create elements and attributes, we can use the compositional constructors offered by the XElement and XAttribute classes.

If I want to add an ISBN number as an attribute to the book element in the above book XML, I can simply write:



XElement elements = new XElement("books",
    new XElement("book", new XAttribute("ISBN", "0201734117"),
        new XElement("title", "Essential .NET"),
        new XElement("author", "Don Box"),
        new XElement("author", "Chris Sells"),
        new XElement("publisher", "Addison-Wesley")
    )
);

Simplified Namespaces:

I believe this is the most confusing aspect of XML. With the existing set of APIs, we have to remember many things: XML names, namespaces, the prefixes associated with the namespaces, namespace managers, and so on. LINQ to XML allows us to forget all of that and focus on one thing, the "fully expanded name", represented by the XName class.

Let's see how this new functionality differs from the existing one by taking the RSS feed of my blog as an example. In the RSS document, which can be accessed from http://feeds.feedburner.com/feed-irfan (right click → view source), I am interested in the "totalResults" element, which is prefixed by "openSearch". This is how I would do it using XmlNamespaceManager, which has been part of the .NET Framework for a long time:



XmlDocument rss = new XmlDocument();
rss.Load("http://feeds.feedburner.com/feed-irfan");

XmlNamespaceManager nsManager = new XmlNamespaceManager(rss.NameTable);
nsManager.AddNamespace("openSearch", "http://a9.com/-/spec/opensearchrss/1.0/");

XmlNodeList list = rss.SelectNodes("//openSearch:totalResults", nsManager);

foreach (XmlNode node in list)
{
    Console.WriteLine(node.InnerXml);
    Console.ReadLine();
}


You can see that I have to create an XmlNamespaceManager, add a namespace, remember the syntax of the query, provide the manager as a parameter... that is a lot of work. LINQ to XML says: forget about XmlNamespaceManager, just create a fully expanded name and use it every time.



XElement rss = XElement.Load("http://feeds.feedburner.com/feed-irfan");

XNamespace ns = "http://a9.com/-/spec/opensearchrss/1.0/";

IEnumerable<XElement> items = rss.Descendants(ns + "totalResults");

foreach (XElement element in items)
{
    Console.WriteLine(element.Value);
    Console.ReadLine();
}
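As an aside, the "fully expanded name" idea is not unique to LINQ to XML: Python's ElementTree uses the same {namespace}localname convention in place of a namespace manager. A small sketch, with the feed content inlined so it runs offline and a made-up totalResults value:

```python
import xml.etree.ElementTree as ET

# a tiny stand-in for the RSS feed, with the openSearch namespace declared
rss = ET.fromstring(
    '<rss xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">'
    '<channel><openSearch:totalResults>42</openSearch:totalResults></channel>'
    '</rss>'
)

# the fully expanded name: {namespace}localname, no namespace manager needed
ns = "{http://a9.com/-/spec/opensearchrss/1.0/}"
totals = [node.text for node in rss.iter(ns + "totalResults")]
print(totals)  # ['42']
```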

We can also take a look at how exactly we load, create, and update XML using the LINQ to XML API.

Loading XML


  • Loading from URL:
    XElement feed = XElement.Load("http://feeds.feedburner.com/feed-irfan");

  • Loading from file:
    XElement file = XElement.Load(@"book.xml");

  • Loading from String:
    XElement document = XElement.Parse("<books><book><title>Essential.NET</title><author>Don Box</author><author>Chris Sells</author><publisher>Addison-Wesley</publisher></book></books>");

  • Loading from a reader:
    using (XmlReader xReader = XmlReader.Create(@"book.xml"))
    {
        while (xReader.Read())
        {
            if (xReader.NodeType == XmlNodeType.Element)
                break;
        }
        XElement messages = (XElement)XNode.ReadFrom(xReader);
        Console.WriteLine(messages);
        Console.ReadLine();
    }

  • XDocument:
    You may wonder: if we use XElement for every kind of load, what then is the purpose of XDocument? XDocument can be used whenever we require additional details about the document, e.g. the document type definition (DTD) or the XML declaration, details which XElement doesn't provide.

Creating XML

The functional construction concept mentioned above defines the way XML is created using LINQ to XML. We have also seen how to create an XML tree with fully qualified names. Let us now take a look at how to associate a prefix with a namespace while creating an XML document.

Associating a prefix is just a matter of creating an XAttribute with the appropriate values in its constructor and supplying it to the XElement the prefix is going to be associated with.



XNamespace ns = "http://www.essential.net";

var xml2 = new XElement("books",
    new XElement(ns + "book", new XAttribute(XNamespace.Xmlns + "pre", ns),
        new XElement("title", "Essential .NET"),
        new XElement("author", "Don Box"),
        new XElement("publisher", "Addison-Wesley")
    )
);

XML Literals

As I mentioned at the beginning of this post, there is something in this API exclusively for VB.NET 9.0(+) developers: a new offering called "XML literals" that enables developers to embed XML directly within VB.NET code. We have seen above how to create the book XML using functional construction. Let's now see how the same can be done using an XML literal:



Dim bookXML As XElement = <books>
<book>
<title>Essential .NET</title>
<author>Don Box</author>
<author>Chris Sells</author>
<publisher>Addison-Wesley</publisher>
</book>
</books>

bookXML.Save("book.xml", SaveOptions.None)

Rather than creating LINQ to XML object hierarchies that represent XML, VB guys can define the entire XML using XML syntax itself. And if they want to make it more dynamic, they can use what look like ASP.NET code nuggets (<%= %>), called "expression holes", to embed dynamic values into XML literals.



Private Sub GetBookXML(ByVal bookName As String, ByVal publisher As String, ByVal ParamArray authors As String())

Dim customAttrib = "ISBN"
Dim bookXML As XElement = <books>
<book <%= customAttrib %>=<%= "0201734117" %>>
<title><%= bookName %></title>
<author><%= authors(0) %></author>
<author><%= authors(1) %></author>
<publisher><%= publisher %></publisher>
</book>
</books>

bookXML.Save("book.xml", SaveOptions.None)

End Sub

XML Axis Properties

Another feature available only in VB.NET 9.0 is "XML axis properties", which allows the XML axis methods to be called using a more compact syntax. Let's take a look at these properties.


  1. Child Axis Property
    This property returns all the child elements with a particular name. For example, if I am looking for the <author> element in my book XML, using the child axis property I can directly say:
    Dim authorName As String = bookXML.<book>.<author>(0).Value

    And, If you are interested in all the authors:
    Dim authors As IEnumerable(Of XElement) = bookXML.<book>.<author>
    Dim authorNames As List(Of String) = (From author As XElement In authors _
                                          Select author.Value).ToList()

  2. Descendant Axis Property
    It returns all the descendant elements that have the qualified name specified within the angle brackets. To see how it works, we'll use the XML produced by the GetBookXML() method in the XML literals section above as input.


    Dim elements As IEnumerable(Of XElement) = bookXML...<book>.Where(Function(b) CInt(b.@ISBN) > 1)
    For Each e As XElement In elements
        Console.WriteLine(e.<title>.Value)
    Next

  3. Attribute Axis Property
    This property returns the string value of the attribute whose qualified name is specified after the "@" character.
    We have already seen an example of this property in the previous "Descendant axis property" section, where we tried to get all the book titles by providing their ISBN attribute values to the Where clause.
    Another example could be a tiny piece of code that returns all the ISBN numbers in the entire bookXML document we saved earlier.

    Dim ISBNList As New List(Of String)
    Dim elements As IEnumerable(Of XElement) = bookXML...<book>

    For Each e As XElement In elements
        ISBNList.Add(e.@ISBN)
    Next

XML axis properties help a great deal when searching XML documents. With this shorthand syntax for accessing the primary XML axes, Visual Basic developers can stay focused on the XML they are trying to consume. As I said earlier, for developers who deal with XML every day, learning and understanding XPath is not a problem; for those like me who use XML rarely, the no-brainer axis properties hold far more attraction.


HTH,

Google dictionary tooltip – a cool thing

Have you ever seen something on the internet that surprised you a great deal? I saw something cool today that made me realize there are companies out there, like Google, that do clever things in their work even when it is a very small thing. I am talking about the Google Dictionary extension for Chrome, which employs some neat techniques to render its tooltip arrow. I have seen a couple of tooltip implementations online, like the ones on LinkedIn and Twitter, but all they do is use an image for the arrow, and sometimes the complete tooltip balloon is an image. What caught my attention in the Google Dictionary balloon was the use of simple, plain HTML divs, and after I dug deeper, I realized there is no magic going on in there. Check this out.

Using Chrome on the Wikipedia page for "Programmer", I look up the meaning of the term in Google Dictionary.

img-01

 

I do an "Inspect element" to see what styles the arrow has.

img-02

 

The div I am interested in sits next to the div with the class "gd-bubble"; it is just a container for the two inner divs which create the arrow effect. If you click the second inner div of this container, you can see all its styles. To figure out what exactly is happening there, I give its borders (left, right, and top) some different colors.

img-04

 

As soon as I set the colors, the second inner div looks like this:

img-03

 

Next, in addition to the border colors, I change its height and width to see what happens.

img-05

 

It is now time to touch the first inner div as well. I give a green color to its top border (border-top) to distinguish it from its sibling (the second inner div). The result I get is this:

img-06

 

To my surprise, the tooltip arrow is nothing but a box with its bottom border (border-bottom) stripped off. Now, to give it an arrow shape, all I have to do is reduce its width to zero.

img-07

 

Then I change the height to "auto".

img-08

Now I change the border colors of the second inner div back to "transparent".

img-09

 

Finally, I change the border-top color of the first inner div back to "transparent", and there I go: what I get is not a multi-color box but a nice tooltip arrow pointing downwards.

Untitled-10

 

I know a tooltip is not a big thing, but the point is that at times even the smallest thing can make a big difference in the performance of a site. This doesn't imply that using an image will slow down the rendering of the page, for the image can be cached, but when I can use native HTML to accomplish something, why not?

 

HTH,

Commonly used JQuery methods and snippets

It's been a long time since I have written anything on my blog. I seriously feel it is turning into an "update-only-when-I-have-time" kind of thing. I don't like it, but I can't do anything about it; that's the way life goes. Anyway, there is something I have to say about jQuery. I am very impressed with it and the way it is making everybody's life easier. It is like the social networking fever in the software development industry: everybody seems to be talking about jQuery and its plug-ins, and trying to find new ways to use it in one way or another. That is a good thing. I also use it whenever I see even a little room to insert JavaScript into a web application; I can't recall the last time I used document.getElementById() :). jQuery is taking over the world, and all we need to do is keep up with its pace if we want to get ahead in the web development business.

The thing which bothers me more than anything else whenever I use jQuery is the fact that it all ends up scattered. A small number of commonly used jQuery methods are repeated everywhere without change. This is definitely not a problem with jQuery, but with me, as I am a long-time fan of putting all the dirty JavaScript stuff in one file. I guess the same can't easily be achieved with jQuery, since its methods are tightly coupled to the HTML elements on the page.

One might wonder what I am trying to achieve here. My intention is to have all the jQuery methods in one place so that I can come back to them whenever I need them. I know the jQuery documentation exists exactly for this purpose, but that documentation is functionality-driven rather than control-driven. For example, if I am using a dropdown control on my page and want to find its selected value using jQuery, I have to go through the same struggle I went through last time to figure out all the possible methods I can use to accomplish the task. How about creating a control-driven jQuery wrapper around the core library methods? Err... that was just off the top of my head.

Maybe my thinking is not legitimate, and it is just a matter of spending more time around jQuery before I start memorizing every bit of it. Maybe. Anyway, for the time being, I have decided to help myself by compiling a small list of the jQuery methods I HAVE USED SO FAR in different scenarios, in order to commit them to memory. This list will be updated with more methods as I use them.

  • Checkbox list has at least one item CHECKED (.length())
    example:
    if ($("div.container input:checked").length == 0)

  • CHECK/UNCHECK a Checkbox (.attr(name,value) / .removeAttr(name))
    example:
    if ($("input:checkbox").attr("checked") == true)
    $("input:checkbox").removeAttr("checked");
    else
    $("input:checkbox").attr("checked", "checked");

  • Dropdown list has a SELECTED item (.val())
    example:
    if ($("div.container select.dropdownlist").val() != -1)

  • VALUE of the Selected Checkbox (.attr())
    example:
    $("div.container input:checked").attr("value"); 

  • TEXT between <SPAN> and </SPAN> (html())
    example:
    $("div.container input:checked").find("span").html();

  • Setting the VALUE of a Textbox or a Hidden field (.val(value))
    example:
    $("div.container").siblings("#<%=hSelectedItem.ClientId%>").val(selectedVal);

  • Getting a STYLE property from first matched element (.css())
    example:
    if ($(this).siblings('.container').css("display") == 'none')

  • Animating the element OPACITY  (.fadeIn() / .fadeOut())
    examples:
    $(this).siblings('.itemlist').fadeIn(200);     
    $(this).siblings('.itemlist').fadeOut(200);

  • Switching between EXPAND and COLLAPSE images (.attr(name,value))
    example:
    $(this).children("img").attr("src", '<%=ImagePath%>Collapse.png');  
    $(this).children("img").attr("src", '<%=ImagePath%>Expand.png');

  • Changing the CSS CLASS NAME (.addClass(name))
    example:
    $(this).addClass("selected");

  • Creating the <OPTGROUP>s in an ASP.NET Dropdown list (.wrapAll())
    I did a blog post about this a while ago, showing how <OPTGROUP>s can be created in an ASP.NET Dropdown list using just two lines of jQuery code.
    example:
    $("select.dropdownlist option[@type='1']").wrapAll('<optgroup label="Level1">'); 
    $("select.dropdownlist option[@type='2']").wrapAll('<optgroup label="Level2">');

  • Getting a COMMA SEPERATED LIST of VALUES of selected Checkboxes (.SerializeArray())
    The below small function makes use of JQuery .SerializeArray() method which returns form elements as an array of name/value pair. Each element’s value will then be pushed into another array and returned as a comma separated string.
    function GetSelectedApplicants()
    {
        var selectedApplicants = [];
        var fields = $("tr td input:checkbox:checked").serializeArray();
        $.each(fields, function(i, field)
        {
            selectedApplicants.push(field.value);
        });
        return selectedApplicants.join(',');
    }

  • Creating a DYNAMIC TOOLTIP containing data from the ASP.NET web server (.ajax())
    I wanted to create a fancy dynamic tooltip like the ones on LinkedIn® and Google Dictionary Lookup in Chrome. I ended up using the jQuery .ajax() method, which calls an ASP.NET server-side function asynchronously and, after retrieving the data, populates the tooltip container with HTML elements around the data.
    function GetItemList(item, appID)
    {
        $.ajax({
            type: "POST",
            url: "ListApplicants.aspx/GetFolderList",
            data: '{"id": ' + appID + '}',
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(msg)
            {
                // hiding the loading bar once the results have been retrieved.
                item.parent().children(".loadingbar").css("display", "none");
                item.parent().append(
                    '<div class="toolTipWrapper">'
                        + '<div class="closex" onclick="closex();">[x]</div>'
                        + msg.d
                        + '<div class="arrowContainer">'
                            + '<div class="arrowInner"></div>'
                            + '<div class="arrowOuter"></div>'
                        + '</div>'
                    + '</div>'
                );
            }
        });
    }

  • Clearing the TEXT field on focus (.focus())
    Sometimes we need to display a default text in a textbox, like “Click here to search” or “email address”, to inform the user what needs to go in the textbox. The below snippet makes it easy to clear the text as soon as the user clicks on the textbox to type something.
    $("input.text").focus( function() {
        if ( $(this).val() == '(email address)' || $(this).val() == 'Site Search...' )
            $(this).val('');
    } );

  • Selecting an element which does NOT have a particular CSS class (:not())
    example:
    $("div.container div p:not(.quote)").css('display', 'none');

  • Appending the content of an element to the end of another element (.append())
    example:
    $("td.column").append($("div.row").contents());

Honestly, I have no intention of writing another jQuery documentation, but at times it makes a lot of sense to have everything we have used so far in one place, especially if it is just one group of things being repeated again and again. The ultimate goal is to scribble down the way I have used jQuery methods with web/HTML controls.


HTH,

Should it be a user control or a server control?

You must have heard this question in the developer community or in online forums, or at least at your workplace. Though the concept of user and server controls in .NET is pretty old, some developers still get confused when it comes to making a choice between the two. I myself used to go wrong with the selection and would pick one over the other, mainly because of a very thin line (ok ok..I heard you..a little thick) between them.

Up until now, I was fond of using UserControl for all of my projects. The ratio of picking a user control over a server control always used to be somewhere close to 10:1 for me. However, when I received a comment on my experimental Code Project article, MultiSelect Dropdown Control, calling my choice of UserControl as the parent class an unthoughtful one, I decided to enlighten myself more about both control types, to find out why I see a thin line where there is actually a very clear distinction, and why I fail to take their design guidelines into account when authoring them.

I believe one of the important reasons to go for a user control is its simplistic nature. It can be created as easily as a web page, and its design-time support for authoring makes things a lot easier. We don’t need to override the Render() method as we get out-of-the-box support for rendering. A user control is a more suitable choice when the target control is composite (a collection of other intrinsic controls) in nature and has a lot of static data. On the downside, it is less convenient for advanced scenarios due to its tight coupling with the application it is built for. If the same user control needs to be used in more than one application, it introduces redundancy and maintenance problems, as its source form (.ascx) needs to be copied and its reference must be provided on the hosting web page. We don’t get to see this kind of problem while dealing with server controls. A server control can be the best choice in the following scenarios:

  • Redistributable
  • Dynamically generated content
  • Zero maintenance ― a single DLL which can also be added to the GAC
  • Support for adding the control to the Visual Studio Toolbox

A server control helps reduce redundancy as it is only a single DLL which can very easily be consumed in more than one application. According to Microsoft Support:

"A server control is more suited for when an application requires dynamic content to be displayed; can be reused across an application, for example, for a data bound table control with dynamic rows”

Server controls provide full design-time support when used in a design-time host. They can be added to the toolbox of a visual designer and dragged and dropped onto a page. However, I believe the biggest disadvantage of using a server control is that it needs to be written from scratch, which requires a good understanding of the page lifecycle and the order in which events execute; this is normally taken care of in user controls.

Now, coming back to the title of my post: what should we choose, a user control or a server control? I would say such a decision should be a thoughtful one. If you want rapid application development (RAD) with full design-time support, without having to understand the page life cycle, a user control is the way to go. On the other hand, if ease of deployment with no redundancy and maintenance headaches is more important for you, think about a custom server control.

HTH,

Creating User Controls ― a few good practices

In my previous post, Dealing with GAC atrocities, I talked about code reusability being an important aspect of rapid application development (RAD). User controls in .NET support code reusability out of the box. Just to remind ourselves, a user control is a control that is created using the same technique we use for creating ASP.NET web pages. We create user controls every now and then in our everyday programming and, at times, we forget to entertain some of their important aspects during development. For example, exposing properties that define a control's behavior and layout, or taking care of the situation where a web page can have multiple instances of the same user control.

As you probably know, I love to follow guidelines and like to stick to a checklist whenever possible. I decided to create one for myself containing good practices for user control development. The idea is to come back to this list to see what should go in the user control and what should not. Though these are not industry-standard best practices, I find them very useful in most scenarios. Please have a look and let me know what you have to say about it. Moreover, if there is anything (a procedure, a guideline or even a practice) that you think helps you in any way while writing a user control and saves your time, share it with the world in the comments below.

  1. Every single behavior of a user control should be represented by a public property. The more properties it has, the more customizable it is for the end user. I never forget to write properties that define the important characteristics of my list-type control, e.g. AutoPostBack.
  2. Private _AutoPostback As Boolean
     Public Property AutoPostback() As Boolean
         Get
             Return _AutoPostback
         End Get
         Set(ByVal value As Boolean)
             _AutoPostback = value
         End Set
     End Property

     Private _AllowParentSelect As Boolean
     Public Property AllowParentSelect() As Boolean
         Get
             Return _AllowParentSelect
         End Get
         Set(ByVal value As Boolean)
             _AllowParentSelect = value
         End Set
     End Property

  3. That there will always be only one instance of a user control on a web page is a blind assumption. If the user control looks good and is designed with user friendliness and intuitiveness in mind, people would love to use it more than once, sometimes on the same page, which might lead to a conflict between their child control IDs. To deal with this, the ClientID property can be used in the HTML as well as in the back end.
  4. If AutoPostback = True AndAlso Request.Form("__EVENTTARGET") IsNot Nothing _
         AndAlso Request.Form("__EVENTTARGET").Equals(Me.ClientID + "_categoryMenu") Then

         RaiseEvent OnSelectedIndexChanged(Me, Nothing)

     End If

  5. All external script files (JavaScript etc.) should always be registered using the Page.ClientScript.RegisterClientScriptInclude and Page.ClientScript.IsClientScriptIncludeRegistered methods, to avoid registering duplicate script resources for multiple instances of the user control on a page.
  6. Private Sub RegisterClientScriptIncludes()

         Dim clientScriptMgr As ClientScriptManager = Me.Page.ClientScript

         If clientScriptMgr.IsClientScriptIncludeRegistered(Me.GetType(), "mcDropdown") = False Then
             clientScriptMgr.RegisterClientScriptInclude(Me.GetType(), "mcDropdown", ClientScriptIncludePath)
         End If

     End Sub

  7. The user control should have no knowledge (no hard-coding), unless required, of any project/application-wide configuration, e.g. resource (image/script) file paths, connection strings, database column names of the supplied data source, default values etc. We can maintain the abstractness of a control by passing all these values from the page as properties.
  8. Private _DropdownListStyles As String
     Public Property DropdownListStyles() As String
         Get
             Return _DropdownListStyles
         End Get
         Set(ByVal value As String)
             _DropdownListStyles = value
         End Set
     End Property

     Private _DataTextField As String
     Public Property DataTextField() As String
         Get
             Return _DataTextField
         End Get
         Set(ByVal value As String)
             _DataTextField = value
         End Set
     End Property

     Private _DataValueField As String
     Public Property DataValueField() As String
         Get
             Return _DataValueField
         End Get
         Set(ByVal value As String)
             _DataValueField = value
         End Set
     End Property

  9. There is a possibility of a user control being used on a page which does not contain any default .NET server control, and hence no __doPostBack JavaScript function definition, which can be a serious problem if the user control has to do a PostBack. We can harness the GetPostBackEventReference() method of ClientScriptManager, which returns a string that can be used in a client event to cause a PostBack to the server.
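A minimal sketch of this idea (divHeader is a hypothetical <div runat="server"> element inside the user control's markup; the event argument string "itemSelected" is also just an illustration):

```vbnet
' Sketch: wire a plain HTML element up to a server PostBack from
' inside a user control. GetPostBackEventReference() returns a call
' like __doPostBack('...','itemSelected') and also ensures the
' __doPostBack function itself gets rendered on the page.
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    Dim postbackRef As String = _
        Page.ClientScript.GetPostBackEventReference(Me, "itemSelected")
    divHeader.Attributes("onclick") = postbackRef
End Sub
```

On PostBack, the control can then inspect Request.Form("__EVENTTARGET") and "__EVENTARGUMENT" to decide whether it was the source of the event, much like the __EVENTTARGET check in point 4 above.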

  10. There is consistency in property names, e.g. DataSource, DataTextField, DataValueField, across all ASP.NET data-bound controls because of their support for the standard data-binding model. This helps developers gain familiarity with a control very quickly. We can try, wherever possible, to maintain this level of friendliness in our user controls as well by keeping property and method names as close as possible to those of the default controls. The same goes for event names. For example, for a user control that is going to behave like a list-type control, we can think of having properties such as DataTextField, DataValueField, DataSource, SelectedIndex etc. and events like SelectedIndexChanged.
  11. Private Sub ddlApplicationStatus_SelectedIndexChanged(ByVal sender As Object, ByVal e As System.EventArgs) _
              Handles ddlApplicationStatus.OnSelectedIndexChanged
          Try

              Dim ctlDropdown As mcDropdown = CType(sender, mcDropdown)
              Dim selectedVal As Integer = ctlDropdown.SelectedValue
              Dim selectedText As String = ctlDropdown.SelectedText

          Catch ex As Exception
              'log exception here...
          End Try

      End Sub

  12. In addition to methods and events, a list-type user control should also expose a ListItemCollection that gives complete control over the items and facilitates adding, inserting, removing and finding them.
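A sketch of what that could look like, assuming the user control wraps a hypothetical inner list control named lstItems:

```vbnet
' Sketch: expose the inner list's ListItemCollection so the hosting
' page can manipulate items directly (lstItems is hypothetical).
Public ReadOnly Property Items() As ListItemCollection
    Get
        Return lstItems.Items
    End Get
End Property
```

The page can then call familiar members such as myControl.Items.Add(...), Items.Remove(...) or Items.FindByValue(...), exactly as it would on a built-in list control.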

  13. We need to make sure the control is able to maintain its state across PostBacks. This goes in line with the best practice of loading controls only when the page is being rendered for the first time.
  14. If Not Me.IsPostBack Then
          InitializeUserControls()
      End If

  15. Instead of inheriting our control class directly from the System.Web.UI.UserControl class every time, we should first look for an abstract class, if one is available, that defines the common properties, methods, and events we need for our control. For example, the ListControl class can serve as the abstract base class for all list-type controls, and the DataBoundControl class for controls that display their data in list or tabular form.
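A sketch of the idea, reusing the mcDropdown class name from the examples above (the class body is deliberately empty; what matters is what the base class contributes):

```vbnet
' Sketch: deriving from the abstract ListControl class means the
' standard list surface (DataSource, DataTextField, DataValueField,
' Items, SelectedIndex, the SelectedIndexChanged event) comes from
' the base class instead of being rewritten by hand.
Public Class mcDropdown
    Inherits System.Web.UI.WebControls.ListControl

    ' Only the control-specific rendering and behavior goes here.
End Class
```

Note that this route applies when authoring a custom server control; a user control still derives from UserControl, but the same principle of reusing the closest available base class holds.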

  16. Default values should be specified for the control's properties, allowing the user to get the control up and running in very little time.
  17. Private _AllowParentSelect As Boolean = False    ' default value specified explicitly
      Public Property AllowParentSelect() As Boolean
          Get
              Return _AllowParentSelect
          End Get
          Set(ByVal value As Boolean)
              _AllowParentSelect = value
          End Set
      End Property

HTH,

Dealing with GAC atrocities

Code reusability is one of the important pillars of object-oriented programming (OOP). I like this feature a lot, as it doesn't make sense to write a crucial piece of code that is shared across applications again and again in multiple places. Why not write it once and reuse it everywhere? Classes in OOP are a very good example of this: written once and used as many times as we want by instantiating them as objects.

Anyone who works with Visual Studio knows that referencing an assembly in a project paves the way towards code reusability. To achieve the same thing in a production environment, we need to install assemblies in the Global Assembly Cache (GAC). There are a lot of advantages to using the GAC as a shared repository, but this post is not an effort towards that, as there are a bunch of articles already available on the internet. What I want to share here are the concerns, listed below, which may come to mind before deciding to go down the route of using the GAC for our projects.

  1. Let's consider a scenario where we want to share a DLL, say ExceptionHandler.dll, between two applications, HelloWorld and HelloUniverse. We have signed the assembly (DLL) with a strong name key and installed it in the GAC on the production server so that it can be utilized by both applications. We have deployed the applications, and they seem to be accessing the DLL from the GAC. So far so good. Now, assuming the class library project (ExceptionHandler.dll) is part of a solution which also contains one of the applications, let's say HelloWorld, there is a possibility that while building the application someone from the team unintentionally does a FULL rebuild of the solution, leaving the version number (build + revision) changed. A week later, our HelloUniverse application, which also points to the same DLL project, undergoes a change and is deployed alone to the server, assuming it will pick up the same DLL that was deployed earlier, and eventually gets this:

    Could not load file or assembly 'ExceptionHandler, Version=3.1.65.66, Culture=neutral, PublicKeyToken=a2e77a5f9a0ce598' or one of its dependencies. The system cannot find the file specified.

  2. Let's assume for the time being that the situation above will never happen, as there is no room for unintentional behavior in your team. But you will agree with me that using the GAC as a shared repository encourages keeping multiple versions of the same DLL, as there is native support for that. It was designed to overcome problems like DLL HELL, so why not use it? It is a very valid point for applications still relying on an older version. My concern is that if I come back to my GAC after a year, I will find it cluttered with multiple copies of the DLL with the same name but different version numbers.


  3. If the above scenario proves to be true, will it not become a manageability issue to keep track of assembly versions on both the production and development ends, considering we have a reasonably big number of applications?

I am in no way trying to discourage anyone from using the GAC; rather, I am giving my point of view on the issues that get in our way whenever we decide to use it as a shared folder for assemblies. I may be wrong, and maybe these issues will never arise for someone, or they may not be a big concern for a few, but I believe sometimes it is good to be paranoid about implementing a new technology or suggesting it over an existing one. This gives us an opportunity to think left, right and center before we get our hands dirty with it.

Before I finish, I would like to briefly comment on the way I believe the above issues can be addressed. For the first point, each application or set of applications can point to its own copy of the shared DLL, leaving no room for errors. The second and third issues can be tackled by building applications against a single stable version of the DLL at some point in the future, which can then replace all of the older versions sitting in the GAC.
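Another standard option for the version-mismatch problem, sketched below, is an assembly binding redirect in the consuming application's configuration file, which forces references to older builds to resolve to the one stable version; the name, public key token, and version numbers here are taken from the error message above purely for illustration:

```xml
<!-- Sketch of a web.config fragment: redirect any older build of
     ExceptionHandler to the stable version installed in the GAC. -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="ExceptionHandler"
                          publicKeyToken="a2e77a5f9a0ce598"
                          culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-3.1.65.66"
                         newVersion="3.1.65.66" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

With this in place, HelloUniverse would keep loading the deployed version even after a rebuild bumps the version number referenced at compile time.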

HTH,