Live Online SharePoint Saturday - EMEA (Free)

To all you SharePoint folks based in the EMEA region:

Live Online SharePoint Saturday - EMEA

HTH,

ASP.NET DropdownList with <optgroup>

If you are looking for a simple way to create groups in the ASP.NET DropDownList control without writing an unnecessary amount of backend code, then you are at the right place. Let me show you how we can do this using jQuery 1.2.6.

I have divided the solution into two parts. The first part achieves the desired output using jQuery's native support for wrapping elements inside other elements via the wrapAll() method. It takes only one line of code and the groups are created in the HTML. However, the drawback of this approach is that the groups disappear on postback. But if the DropDownList is small and we are not loading it from the database, then this approach is very suitable as it saves a lot of time and effort.

Ok, enough of the riff-raff, let's get down to business now.

Part 01 (static binding of list items with no support for postback)

Step 01: Creating a DropdownList

ListItemCollection list = new ListItemCollection();

list.Add(new ListItem("1", "1"));
list.Add(new ListItem("2", "2"));
list.Add(new ListItem("3", "3"));
list.Add(new ListItem("4", "4"));
list.Add(new ListItem("5", "5"));
list.Add(new ListItem("6", "6"));
list.Add(new ListItem("7", "7"));
list.Add(new ListItem("8", "8"));
list.Add(new ListItem("9", "9"));
list.Add(new ListItem("10", "10"));

ddl.DataSource = list;
ddl.DataBind();

Step 02: Creating a new attribute
We have to create a new attribute for every DropDownList item. This new attribute will be used for grouping in the jQuery code.

protected void ddl_DataBound(object sender, EventArgs e)
{
    foreach (ListItem item in ((DropDownList)sender).Items)
    {
        if (System.Int32.Parse(item.Value) < 5)
            item.Attributes.Add("classification", "LessThanFive");
        else
            item.Attributes.Add("classification", "GreaterThanFive");
    }
}

Step 03: The fun part - creating groups using jQuery's wrapAll() method

<script>
$(document).ready(function() {
    // Create groups for the dropdown list
    $("select.listsmall option[@classification='LessThanFive']").wrapAll("<optgroup label='Less than five'>");
    $("select.listsmall option[@classification='GreaterThanFive']").wrapAll("<optgroup label='Greater than five'>");
});
</script>
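For reference, assuming the DropDownList is declared with CssClass="listsmall" (which is what the select.listsmall selector above targets), the markup rendered after wrapAll() runs ends up looking roughly like this:

<select name="ddl" id="ddl" class="listsmall">
    <optgroup label="Less than five">
        <option classification="LessThanFive" value="1">1</option>
        ...
        <option classification="LessThanFive" value="4">4</option>
    </optgroup>
    <optgroup label="Greater than five">
        <option classification="GreaterThanFive" value="5">5</option>
        ...
        <option classification="GreaterThanFive" value="10">10</option>
    </optgroup>
</select>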


Part 02 (dynamic binding of list items with full support for postback)

This approach not only creates the groups using jQuery 1.2.6 but also allows the DropDownList to remember them across postbacks. We will create a custom server control derived from the DropDownList class and override its SaveViewState() and LoadViewState() methods, which help the page ViewState remember and restore the groups respectively.

As mentioned earlier, in this approach we have to create a custom server control with a bit of backend code and override some of its important methods. However, if you are one of those lazy developers like me who doesn't like the idea of creating custom server controls that live in a separate assembly for a tiny job such as this one, you can keep the control in the same assembly by adding a new class file and deriving its class from System.Web.UI.WebControls.DropDownList.


namespace ControlLibrary
{
    public class DropdownList : System.Web.UI.WebControls.DropDownList
    {
        protected override object SaveViewState()
        {
            // Create an object array with one element for the DropDownList's
            // ViewState contents, and one element for each ListItem in the list
            object[] state = new object[this.Items.Count + 1];

            object baseState = base.SaveViewState();
            state[0] = baseState;

            // Now, see if we even need to save the view state
            bool itemHasAttributes = false;
            for (int i = 0; i < this.Items.Count; i++)
            {
                if (this.Items[i].Attributes.Count > 0)
                {
                    itemHasAttributes = true;

                    // Create an array of the item's attribute keys and values
                    object[] attribKV = new object[this.Items[i].Attributes.Count * 2];
                    int k = 0;
                    foreach (string key in this.Items[i].Attributes.Keys)
                    {
                        attribKV[k++] = key;
                        attribKV[k++] = this.Items[i].Attributes[key];
                    }

                    state[i + 1] = attribKV;
                }
            }

            // Return either baseState or state, depending on whether or not
            // any ListItems had attributes
            if (itemHasAttributes)
                return state;
            else
                return baseState;
        }

        protected override void LoadViewState(object savedState)
        {
            if (savedState == null) return;

            // See if savedState is a single object or an object array
            if (savedState is object[])
            {
                // We have an array of items with attributes
                object[] state = (object[])savedState;
                base.LoadViewState(state[0]); // load the base state

                for (int i = 1; i < state.Length; i++)
                {
                    if (state[i] != null)
                    {
                        // Load back in the attributes
                        object[] attribKV = (object[])state[i];
                        for (int k = 0; k < attribKV.Length; k += 2)
                            this.Items[i - 1].Attributes.Add(attribKV[k].ToString(),
                                                             attribKV[k + 1].ToString());
                    }
                }
            }
            else
            {
                // We have just the base state
                base.LoadViewState(savedState);
            }
        }
    }
}

If you search the internet for a solution to create <optgroup> elements in an ASP.NET DropDownList, you will find articles that talk about overriding the RenderContents() method of the DropDownList to render the groups. However, I personally feel this job can be done with less hassle using jQuery's wrapAll() method, which wraps elements inside other elements. All we need is a mechanism to keep these groups across postbacks, which we achieve here by overriding SaveViewState() and LoadViewState() as Scott Mitchell describes in his article. Apart from these two methods, we need to populate the DropDownList in the Page_Load method and create a new attribute in the ddl_DataBound() event as explained in Part 01 above. That is all that is required to create <optgroup>s in an ASP.NET DropDownList. You can see the output control below, populated with ListItems that are properly encapsulated within option groups.
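For completeness, a minimal sketch of the page-side wiring under the same assumptions (the custom ControlLibrary.DropdownList sits on the page with ID ddl and its OnDataBound event wired to ddl_DataBound, exactly as in Part 01) might look like this:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // Populate the custom DropdownList; ddl_DataBound then adds the
        // "classification" attribute to each item, and the jQuery from Part 01
        // wraps the items into <optgroup> elements on the client.
        ListItemCollection list = new ListItemCollection();
        for (int i = 1; i <= 10; i++)
            list.Add(new ListItem(i.ToString(), i.ToString()));

        ddl.DataSource = list;
        ddl.DataBind();
    }
}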

The source code for this approach can be downloaded from here.

Output: (screenshot of the DropDownList rendered with the two option groups)
HTH,

Watchout for getElementById bug

As web developers targeting IE 7, we should be very careful with the way the getElementById method behaves in IE 7. According to the getElementById documentation on MSDN:

this method performs a case-insensitive match on both the ID and NAME attributes, which might produce unexpected results

We may NOT see any problem with this method as long as the ID and NAME attributes of the controls on the web page are the same (which is the case most of the time). But when the values of these attributes differ, for example when using a Master Page, which mangles ID attributes using "_" and NAME attributes using "$", getElementById in IE 7.0 might produce unexpected results. Read more about this bug here.
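A contrived bit of markup (hypothetical element names) shows the kind of clash that triggers it:

<!-- In IE 7, document.getElementById("UserName") matches both ID and NAME, case-insensitively,
     and returns the first hit in source order - the hidden field below, not the text box. -->
<input type="hidden" name="UserName" value="wrong element" />
<input type="text" id="username" value="intended element" />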

HTH,

ThreadAbortException ― This behavior is by design

If you are coding in .NET and want to transfer a web request from one page to another, you don't have many options in terms of APIs to do that for you. You will use either Response.Redirect or Server.Transfer. The former performs a client-side redirection and the latter handles it on the server side. The problem with both of them is their dependency on Response.End, which in turn calls Thread.Abort, which causes a ThreadAbortException (Reference: MSDN). To work around this problem Microsoft suggests the following alternatives:

  1. Using Server.Execute instead of Server.Transfer
  2. Passing false for the endResponse parameter of Response.Redirect to suppress the internal call to Response.End

The caveat with these alternatives is that, because execution continues on the calling page, code placed after the call can end up sending the output of two different web pages (caller and callee) to the same browser window. In practice they work 99% of the time because, in everyday programming, they are not followed by any other statement. And rightly so: why would one write something which is not meant to be executed? This implies that you are completely safe even if:

1. Control comes back from Server.Execute and continues execution on the current page.
2. Response.Redirect transfers control to the other page and, because endResponse is set to false, execution continues on the current page.

just because we don't write any database processing, file processing or other code logic that follows the Server.Execute or Response.Redirect call. However, in a situation where you must call Response.End directly, you can call the HttpContext.Current.ApplicationInstance.CompleteRequest method in lieu of Response.End, which skips the rest of the pipeline and jumps straight to the Application_EndRequest event (Reference: MSDN).
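To make that concrete, a small sketch of both workarounds (the target URL is just an example):

// Redirect without the internal Response.End, so no ThreadAbortException is thrown.
Response.Redirect("~/OtherPage.aspx", false);

// If you would otherwise call Response.End yourself, ask ASP.NET to skip the rest of
// the pipeline and move on to EndRequest instead of aborting the thread.
HttpContext.Current.ApplicationInstance.CompleteRequest();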

If you were thinking, as I was, that this is a bug in .NET, then you are wrong. MSDN explains that this is not a bug but behavior by design. However, I would really love to see a real-world usage of this behavior in everyday programming that could justify this design decision.


IIF(A,B,C) -- Beware it is a function not a statement!!

Learning how to write neat and clean code is something many people wish to achieve. I believe it's an art which can be mastered with practice. I myself have the habit of keeping things brief and to the point, and I tend to apply the same theory to my coding as well. However, today, while writing a small piece of code in the same concise manner for a small module, I found something interesting that made me realize the importance of understanding the difference between different programming constructs. I am talking about the IIF(A,B,C) function and the IF ELSE statement in particular here.

I had a situation where I was getting some value from the database and there was a fair chance of that value being NULL. I preferred IIF over IF ELSE to save 3 lines of code.

Dim lCVCount As Integer = IIf(IsDBNull(DataBinder.Eval(e.Item.DataItem, "IsCV")), 0, CInt(DataBinder.Eval(e.Item.DataItem, "IsCV")))

Much to my surprise, it didn't work and threw an exception. I kept wondering for a few minutes what was wrong with it. You know, it is generally said that not knowing something is acceptable but forgetting something which you know is completely unacceptable, and I agree with this. How could I forget that a function behaves differently from a statement? It passes parameter values to the callee, which means it must evaluate those parameters first, if they are compound expressions, in order to get their values. This explains why the third parameter in the above statement throws an exception when the value is NULL. I thought I was better off using IF ELSE here in order to keep the .NET runtime happy.

Sometimes it is good to get these kinds of errors, as they reinforce the fact that we should always be careful in what we write, especially when it comes to writing code!!

HTH,

File "Save As" issue while downloading with Content-Disposition: inline

It has been a long time since I wrote anything on the blog. I was on a long holiday and (honestly speaking) wanted to stay away from technology. But now, since life is back on the rails, let's start with the very first issue I faced after I rejoined my workplace a couple of days ago. It is about the file name not being honored when using "inline" as the HTTP response Content-Disposition header.

Downloading a file using Content-Disposition: attachment as the HTTP response header displays a "File Download" dialog box asking the user whether to "Open", "Save" or "Cancel" the document. In certain scenarios this dialog box can be really painful, especially when the user has to repeat this exercise for multiple links on our website. In order to avoid this we use Content-Disposition: inline, which opens the document without asking, using the appropriate software (MS Word, Adobe Acrobat Reader, etc.) installed on the user's machine. The document can then be saved later using the "Save As" option.

Before I move on, let's have a look at the following statement, which shows how to add the Content-Disposition header to the Response stream:

context.Response.AddHeader("Content-Disposition", "inline; filename=resume.doc")

The problem with Content-Disposition: inline is that it doesn't pick the file name from the filename attribute we pass to the AddHeader function; rather, it always takes the name of the web page from the URL while saving the document. Content-Disposition: attachment seems to work fine even with the "Save As" command, but that is something I didn't want to use. I spent some time researching this issue and couldn't find a workable solution, so I came up with the workaround below, which seems to be doing the job.

I decided to use a URL for the document link that reflects the name of the document being downloaded. For example, for the document "MyResume.doc", the URL I have used looks something like this:

<a href="Default/FileName/882bd3e9-5035-4946-8c2a-98ef0eccc6e0/uploads/MyResume.doc.aspx">Download File</a>

Did you notice ".aspx" at the end of the URL? It is very important to append it in order to send the request back to the server whenever the user clicks the link. On the server, I have created an HttpHandler to catch requests of this sort that contain "*.doc.aspx". Without appending ".aspx", the web server will serve your document directly without changing the URL in the browser. But we want to change the URL, which means the request has to go to the server.

The rest of the solution is very easy: I have written the code below in the body of the ProcessRequest function in my HttpHandler, which dumps the file contents into the response stream. It is also important to note that the URL of the document must be relative, not absolute. The reason is that it is a dummy URL which doesn't exist at all; a relative URL guarantees that the request will come to the same web site/application, where we can catch it before it goes to the ASP.NET engine. QueryString-style data can be made part of the URL; in my case the GUID is the piece of information that is part of the URL, and I retrieve it on the server.


Public Sub ProcessRequest(ByVal context As System.Web.HttpContext) Implements System.Web.IHttpHandler.ProcessRequest

'Context.RewritePath("Test-Page.aspx")
context.Response.ContentType = "application/msword"
context.Response.AddHeader("Content-Disposition", "inline; filename=adeel.doc")
context.Response.TransmitFile("C:\inetpub\wwwroot\Uploads\882bd3e9-5035-4946-8c2a-98ef0eccc6e0.doc")
context.Response.Flush()
context.Response.End()

End Sub


The last bit is to add the statement below to web.config so the HttpHandler catches "*.*.aspx", which means it will serve all kinds of file extensions.

<add verb="*" path="/uploads/*.*.aspx" type="Test.HttpHandler, Test"/>

XML Literals - (VB.NET only)

After I learned that lambda statements are supported only in C# 3.0 and not in VB 9.0, I got curious and thought why not find something that is available only in VB.NET and not in C#, and I found this interesting feature called XML Literals. Using this nice feature, which is part of the LINQ to XML API, we can embed XML directly within Visual Basic 9.0 code.

To illustrate its power, let's have a look at the XML below, which we will produce using an XML literal.


<books>
<book>
<title>LINQ IN ACTION</title>
<author>FABRICE MARGUERIE</author>
<author>STEVE EICHERT</author>
<author>JIM WOOLEY</author>
<publisher>Manning</publisher>
</book>
</books>

Now take a look at Listing 1.1 below, which shows the code for creating that XML using the XML literal syntax offered by VB 9.

Listing 1.1

Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load

Dim booksElement As XElement = <books>
<book>
<title>LINQ IN ACTION</title>
<author>FABRICE MARGUERIE</author>
<author>STEVE EICHERT</author>
<author>JIM WOOLEY</author>
<publisher>Manning</publisher>
</book>
</books>
End Sub

Note that in the above code we are using XElement, which is a new class introduced in .NET 3.5 that represents an XML element. According to MSDN:
"XElement can be used to create elements; change the content of the element; add, change, or delete child elements; add attributes to an element; or serialize the contents of an element in text form"
The XML fragment in Listing 1.1 above is static. When building real applications we might need to create XML using expressions stored in a set of local variables. XML literals allow us to do so through expression holes, which are expressed with the <%= %> syntax, as Listing 1.2 shows.

Listing 1.2

Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load

Dim xml = LoadXML("LINQ IN ACTION", "Manning", "FABRICE MARGUERIE", "STEVE EICHERT", "JIM WOOLEY")

End Sub

Private Function LoadXML(ByVal title As String, ByVal publisher As String, ByVal ParamArray authors() As String) As XElement

Dim booksElement As XElement = <books>
<book>
<title><%= title %></title>
<author><%= authors(0) %></author>
<author><%= authors(1) %></author>
<author><%= authors(2) %></author>
<publisher><%= publisher %></publisher>
</book>
</books>

Return booksElement

End Function

XML literals allow us to embed XML directly within the code without having to learn the details of an XML API. It's a great addition to VB 9.0 and I hope it will be added to C# as well in the future.



HTH,

From Delegates to Lambda Expressions

Delegates:

Delegates were a wonderful addition to .NET back in its early days. A delegate is a type that can store a pointer to a function. The snippet below shows the way we would use delegates in our everyday programming.
Listing 1.1

using System;
using System.Data;

delegate DataTable GetUserDetailsDelegate(int userID);

class Program
{
    static void Main(string[] args)
    {
        GetUserDetailsDelegate GetUserDetails = new GetUserDetailsDelegate(GetUserDetailsByUserID);
        DataTable dt = GetUserDetails(1);
    }

    private static DataTable GetUserDetailsByUserID(int userID)
    {
        return new DataTable();
    }
}

Anonymous Methods

C# 2.0 was improved to allow working with delegates through anonymous methods. I say C# because VB.NET doesn't offer support for anonymous methods. Anonymous methods allow us to write shorter code and avoid the need for explicitly named methods. Let's modify the code in Listing 1.1 and rewrite it using anonymous methods.
Listing 1.2
delegate DataTable GetUserDetailsDelegate(int userID);

class Program
{
    static void Main(string[] args)
    {
        GetUserDetailsDelegate GetUserDetails = delegate(int userID) { return new DataTable(); };
        DataTable dt = GetUserDetails(1);
    }
}

Lambda Expressions

Now, starting with C# 3.0, we can use lambda expressions instead of anonymous methods.
Listing 1.3
C#
GetUserDetailsDelegate GetUserDetails = userID => { return new DataTable(); };
var dt = GetUserDetails(1);
VB.NET
Dim GetUserDetails = Function(x) New DataTable()
Dim dt = GetUserDetails(1)

The anonymous method syntax introduced in C# 2.0 is verbose and imperative in nature. In contrast, lambda expressions provide a more concise syntax, offering much of the expressive power of functional programming languages. They are a superset of anonymous methods with additional capabilities such as type inference and the ability to use both statement blocks and expressions as bodies.

Note that in the VB.NET lambda above the type of GetUserDetails is inferred by the compiler, whereas in C# a lambda expression has to be assigned to a concrete delegate type. If you don't want to declare a custom delegate type for every signature, you can use the Func<T, TResult> generic delegate instead.

Listing 1.4
C#
Func<int,DataTable> GetUserDetails = userID => { return new DataTable(); };
var dt = GetUserDetails(1);
VB.NET

Dim GetUserDetails As Func(Of Integer, DataTable) = Function(x) New DataTable
Dim dt = GetUserDetails(1)



HTH,

Server.URLEncode an Encrypted String - do it twice my friend!!

The query string is a very important part of a web application, and it needs to be protected from being sniffed or changed when it carries sensitive data. We do not have a magic wand that, upon waving, will hide the sensitive portion of the query string across multiple requests, but we do have powerful encryption algorithms that come to the rescue and turn everything into what looks like a toddler's handwriting. And if the result contains characters that are prohibited in a query string, e.g. ('+', '?', ':', '&', '/', '='), then we can encode it using the Server.UrlEncode API which comes as part of the .NET class library.

var encryptedString = CommonUtils.Encrypt("clientid=1980&code=alphabravo", lKey);
var encodedEncrypted = Server.UrlEncode(encryptedString);

In the above code, CommonUtils is my homegrown encryption utility (symmetric key encryption) that takes lKey to encrypt/decrypt large quantities of data. But wait: on the receiving side it throws an exception and does not give me back the properly URL-encoded query string that I generated while sending. Why, oh why!!

After a little research I found that I need to apply Server.UrlEncode twice, not just once, before sending the value out. By doing it only once, I witnessed that all my URL-encoded '+' characters were getting lost somewhere along the way. Anyway, the final correct sequence that I have arrived at for encoding an encrypted string and decoding it is this (a short sketch follows the list):
  1. Encrypt it
  2. Encode it twice
  3. Decode it once
  4. Decrypt it
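A minimal sketch of that sequence, reusing the homegrown CommonUtils helper from above and assuming a matching CommonUtils.Decrypt method exists; the receiver page name and query string key are just examples:

// Sending side: encrypt once, URL-encode twice
var encryptedString = CommonUtils.Encrypt("clientid=1980&code=alphabravo", lKey);
var outgoing = Server.UrlEncode(Server.UrlEncode(encryptedString));
Response.Redirect("Receiver.aspx?data=" + outgoing, false);

// Receiving side: ASP.NET already decoded once while parsing the query string,
// so decode explicitly once more and then decrypt
var incoming = Server.UrlDecode(Request.QueryString["data"]);
var original = CommonUtils.Decrypt(incoming, lKey);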
Happy coding!!

LINQ to SQL - Multiple result shapes

Working with LINQ to SQL, you might come across a situation where your stored procedure looks something like this:

CREATE PROCEDURE [dbo].[GetAllProductsAndCustomers]
@CompanyID INT
AS
SELECT Code, Category FROM Products
SELECT Name, Email, Contact FROM Customers

This SP returns multiple result sets, which LINQ supports quite efficiently. However, if you work with the auto-generated object relational mapper (*.dbml), you may have noticed that the designer cannot automatically determine that a stored procedure returns multiple result shapes, and hence it creates a wrapper function with ISingleResult as the return type, which represents the result of a mapped function that has a single return sequence. For the above SP, it generated the method signature below:

<FunctionAttribute(Name:="GetAllProductsAndCustomers")> _
Public Function GetAllProductsAndCustomers(<Parameter(Name:="CompanyID", DbType:="Int")> ByVal CompanyID As System.Nullable(Of Integer)) As ISingleResult(Of GetAllProductsAndCustomersResult)
Dim result As IExecuteResult = Me.ExecuteMethodCall(Me, CType(MethodInfo.GetCurrentMethod,MethodInfo), companyID)
Return CType(result.ReturnValue,ISingleResult(Of GetAllProductsAndCustomersResult))
End Function

In order to turn this situation in our favor and handle the multiple result shapes returned by the stored proc, all we need to do is replace ISingleResult with IMultipleResults and supply the appropriate result types. If there is no specific result type, which is quite possible if the stored proc generates columns from multiple tables as the result of a join, you can define your own classes for the shapes and LINQ will materialize the rows into them. In the method signature below, I have created two classes, GetAllProductsAndCustomersResult1 and GetAllProductsAndCustomersResult2, with the properties Code and Category in the first and Name, Email and Contact in the second class.

One important thing: for the kind of SP we are using, we need to read the result shapes in the same sequence as the SP returns them. The order of the IMultipleResults.GetResult() calls should match the order of the SELECT statements in the SP, in order to avoid unexpected results, or errors and exceptions if our IEnumerable result set is bound to a data source control.

The modified method signature will look like:

<FunctionAttribute(Name:="GetAllProductsAndCustomers"), _
ResultType(GetType(GetAllProductsAndCustomersResult1)), _
ResultType(GetType(GetAllProductsAndCustomersResult2))> _
Public Function GetAllProductsAndCustomers(<Parameter(Name:="CompanyID", DbType:="Int")> ByVal companyID As System.Nullable(Of Integer)) As IMultipleResults
Dim result As IExecuteResult = Me.ExecuteMethodCall(Me, CType(MethodInfo.GetCurrentMethod,MethodInfo), companyID)
Return CType(result.ReturnValue,IMultipleResults)
End Function

HTH,

VB.NET Nullable Types Gotcha

Nullable types ― a valuable language addition in .NET that allows value types to store null values. It helps when a variable's default value is not part of the business logic. A very common example is the integer type, whose default value is always 0, which prevents programs from using 0 as a legitimate value. Nullable types take away this restriction by allowing null values to be assigned to value types, just like reference types.
While using nullables we need to be very careful, especially when programming in VB.NET where implicit casting is frequently used. Consider the example below:



Dim temp As Nullable(Of Integer)
temp = Request.QueryString("DataBridgeQueueID")


Guess what value "temp" will get if there is no "DataBridgeQueueID" passed in the query string? A nerd like me would expect Nothing in "temp", which is wrong. The reason is that QueryString always returns a string; in my case a null (Nothing) string, which becomes 0 when assigned to an integer. Although I have declared the "temp" variable as nullable, it ends up holding 0 because of implicit type casting. So what to do then? You are right, just do a CType and life is easy.

'If Option Strict On
temp = CType(CType(Request.QueryString("DataBridgeQueueID"), Object), Nullable(Of Integer))

'If Option Strict Off
temp = CType(Request.QueryString("DataBridgeQueueID"), Object)



HTH,

Ajax based multiselect dropdown control

As the saying goes, "a picture is worth a thousand words". Check out the pictures below and download the control if you like.

Collapsed View-1: (screenshot)

Expanded View-1: (screenshot)

Expanded View-2: (screenshot)
Click here to download the control.


Uploading files using HTTPWebRequest

I want to upload a file but can't use the HTML input file control, as I don't have any HTML page in my application. I am in a situation where I want to utilize the power of the HTTP protocol and internet media types, namely POST and multipart/form-data, without using any HTML control. This situation is not very rare; most likely you will face it while creating an automatic batch file upload utility where there is no chance of manual intervention, and hence no possibility of using the HTML input file control.

Since I am using .NET, I can use the low-level HttpWebRequest class to upload files to the web server. This class helps simulate a manual upload using ContentType=multipart/form-data. multipart/form-data can be used for forms that are presented using representations other than HTML (spreadsheets, Portable Document Format, etc.), and for transport by means other than electronic mail or HTTP.

An important aspect of uploading files is the way the web request is structured: every form value has to start with --{boundary} and the full web request has to end with --{boundary}--. This can be seen by debugging the request with a web debugging tool such as Fiddler while uploading through the HTML input file control. We have to imitate the same request format in order for it to be accepted by the web server. Let's quickly mention some of the important parts of this request that we need to set in order to successfully upload a file.

  • ContentType: multipart/form-data
  • Request Method: POST
  • Content-Disposition: form-data
  • Boundary: a GUID

The reason for using the multipart MIME type is that it is used for file uploads, which have a more complicated structure than a simple post. For a detailed explanation you can read RFC 2388 - Returning Values from Forms: multipart/form-data. For the boundary we can use a GUID, as that is a value which should not occur in any of the form values.
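For illustration, here is a trimmed sketch of what the request body ends up looking like ({boundary} stands for the generated GUID; the field names match the code below):

--{boundary}
Content-Disposition: form-data; name="FeedID"

10250
--{boundary}
Content-Disposition: form-data; name="XMLfile"; filename="feed-automotive.xml"
Content-Type: text/xml

...file contents...
--{boundary}--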

The UploadLocalFile() function below uploads a file along with two other form values using the parameters mentioned above. Apart from them, HttpWebRequest.Headers can also be used to supply credentials if our server requires authentication.

Request: (UploadFile.aspx)

Private Sub UploadLocalFile()

Dim objWebReq As HttpWebRequest
Dim memStream As MemoryStream = Nothing
Dim writer As StreamWriter = Nothing
Dim newLine As String = vbCrLf
Dim fileContent As Byte() = Nothing
Dim fileText As String = Nothing
Dim rStream As Stream = Nothing
Dim webResponse As HttpWebResponse = Nothing
Dim stReader As StreamReader = Nothing
Dim output As String = Nothing

Try

Dim boundary As String = Guid.NewGuid().ToString().Replace("-", String.Empty)
Dim obj As Object = HttpWebRequest.Create("http://localhost:3860/ReceiveFile.aspx")

objWebReq = CType(obj, HttpWebRequest)

objWebReq.ContentType = "multipart/form-data; boundary=" + boundary
objWebReq.Method = "POST"
objWebReq.Headers.Add("loginName", "{username}")
objWebReq.Headers.Add("loginPassword", "{password}")

memStream = New MemoryStream(100240)
writer = New StreamWriter(memStream)

'Feed ID
writer.Write("--" + boundary + newLine)
writer.Write("Content-Disposition: form-data; name=""{0}""{1}{2}", "FeedID", newLine, newLine)
writer.Write("10250")
writer.Write(newLine)

'Feed Category
writer.Write("--" + boundary + newLine)
writer.Write("Content-Disposition: form-data; name=""{0}""{1}{2}", "FeedCategory", newLine, newLine)
writer.Write("Automotive")
writer.Write(newLine)

'XML File
writer.Write("--" + boundary + newLine)
writer.Write("Content-Disposition: form-data; name=""{0}""; filename=""{1}""{2}", "XMLfile", "C:\Users\Irfan\Desktop\feed-automotive.xml", newLine)
writer.Write("Content-Type: text/xml " + newLine + newLine)
writer.Write(File.ReadAllText("C:\Users\Irfan\Desktop\feed-automotive.xml"))
writer.Write(newLine)

writer.Write("--{0}--{1}", boundary, newLine)

writer.Flush()

objWebReq.ContentLength = memStream.Length
rStream = objWebReq.GetRequestStream()
memStream.WriteTo(rStream)
memStream.Close()

webResponse = CType(objWebReq.GetResponse(), HttpWebResponse)
stReader = New StreamReader(webResponse.GetResponseStream())
output = stReader.ReadToEnd()

Response.Write(output)

Catch ex As Exception

Response.Write(ex.Message)

End Try

End Sub

Response: (ReceiveFile.aspx)

Private Sub ReceiveFile()

Dim reader As System.IO.StreamReader = Nothing

Try

If Request.Files.Count > 0 AndAlso Not Request.Files("XMLFile") Is Nothing Then
reader = New System.IO.StreamReader(Request.Files("XMLFile").InputStream)
System.IO.File.WriteAllText("c:\feed.xml", reader.ReadToEnd())
Else
Response.Write("No feed file received in the request.")
End If

Catch ex As Exception
Response.Write(ex.Message)
End Try

End Sub

Checklist for Multilingual website development

I sometimes wish I had a checklist or to-do list to follow before actually starting to develop something new. Checklists are great as, in certain situations, they prove to be more helpful than full-length articles due to their short and precise nature. The checklist I am proposing in this post is for those who want some idea about multilingual websites and the core steps involved in developing them. By just skimming through the steps below, one can estimate the amount of time required to translate a website into another language. This checklist is nothing but a set of suggestions and recommendations based on my very recent experience translating a website from English to Arabic.
  • First and foremost, a separate style sheet can be created for each language. I have created two in my project: en-US-main.css and ar-SA-main.css. Using a separate style sheet while doing the translation guarantees that the existing design and layout will remain intact.

  • We should never embed any content or label text/values directly into the HTML page like this:
    <p>embedded text.</p>
    <input id="lblFirstName" type="label">First Name</input>
    <input id="lblLastName" type="label">Last Name</input>

    By using the above approach, we take away the privilege that resource files give us to switch culture-specific content at runtime. It is a best practice to create language-specific resource files like *.RESOURCES or *.RESX, which help set labels and messages on the fly. In the code below I am calling a GetLabelString() function which pulls label values either from the en-US.resources file or the ar-SA.resources file based upon the selected culture (a sketch of such a helper follows this checklist):
    <p>GetLabelString("L1000") </p> {html side}
    lblFirstName.Text = GetLabelString("L1001") {ASP side}
    lblLastName.Text = GetLabelString("L1002") {ASP side}

  • The current culture can be saved into session variables which will help maintain the same language across all of the pages for a specific session.

  • The labels and messages can be cached if the frequency of their modification is low which will result in better performance and speed.

  • We can use HTML DIR attribute if the language we are targeting is written right to left such as Arabic and Hebrew. This attribute specifies the base direction (RTL, LTR) of text, or sections of text.

    HTML Markup: (example markup)
    Resulting Display: (screenshot)
    We can also specify a single base direction for all of the content on the website by using the DIR attribute on the <html> tag.

  • Everything else on the page other than the content can be controlled through style sheets. If our website is blessed with a DIV-based layout, we can rest assured that style sheet attributes such as float, background-position and margin/padding will take care of the direction (right-to-left / left-to-right) for most of the HTML elements.

  • That’s all from the HTML side. I would like to add a couple of bits here regarding the insertion of multilingual text into database tables. The idea is to insert a separate row for every culture, and to do it for all of the content pages of the website. Of course, we need to enable Unicode support on the table, as in the example rows below.


    RecID | PageName                        | PageText                      | LangID | PageID | PageTitle
    101   | My Website Home Page            | Welcome to my page            | 1      | 2002   | Home Page
    102   | الصفحة الرئيسية الموقع الخاص بي | مرحباً بك في الصفحة الخاصة بي | 2      | 2002   | الصفحة الرئيسية (Home Page)



  • Last but not least, there are many free translation tools available on the internet that can be of great help for cross-checking content between languages. Two such great tools that I have used extensively are Bing Translator and Google Translate.
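As promised above, here is a minimal sketch of what a GetLabelString() helper could look like. It assumes compiled .resources files named en-US.resources and ar-SA.resources sit in a ~/Resources folder and that the selected culture name is kept in a session variable; the folder, file names and session key are just examples:

using System.Resources;
using System.Web;

public static class LabelHelper
{
    // Looks up a label such as "L1001" in en-US.resources or ar-SA.resources,
    // based on the culture name stored in the session.
    public static string GetLabelString(string key)
    {
        string culture = (string)(HttpContext.Current.Session["Culture"] ?? "en-US");

        ResourceManager rm = ResourceManager.CreateFileBasedResourceManager(
            culture,                                            // base name: "en-US" or "ar-SA"
            HttpContext.Current.Server.MapPath("~/Resources"),  // folder containing the *.resources files
            null);

        return rm.GetString(key);
    }
}

In real code the ResourceManager (or the resolved strings themselves) would be cached, in line with the caching point above.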

Podcast - Vernon Bryce from Kenexa

Vernon Bryce, the managing partner of Kenexa, was interviewed by the dubaieye1038 radio station. Use the link below to check it out.

Click to download the podcast

Default SMTP Server in Vista

If you are one of those who run their development environment on Windows Vista and like to test email code, then you might be disappointed to know that the default SMTP server is not included in Vista. It used to be a part of Windows XP, but for some good reason Microsoft decided to take it out. The only reason I can think of is that Vista is not intended to be used as a production server. It is suggested that developers using Vista as a development environment should target IIS 6/Windows Server 2003 or IIS 7/Longhorn Server in beta or release form.

Now, to test the email code there are multiple alternatives proposed on the internet. One of them is to use a free SMTP server such as Free SMTP Server or SmarterMail.

The solution I opted for was to target the default virtual SMTP server on Windows Server 2003, which is almost always available as a testing (local) server in software development houses. In order to successfully relay an email message, you might need to provide authentication settings for the remote SMTP server. In my case the development machine runs Vista and the emails go to the local testing server (sh-sv-1) running Windows 2003.
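If the email code uses System.Net.Mail, one way to supply those settings is through web.config; a sketch with placeholder credentials and addresses:

<system.net>
  <mailSettings>
    <smtp from="dev@example.com">
      <!-- Point SmtpClient at the remote test server instead of a local SMTP service -->
      <network host="sh-sv-1" port="25" userName="testuser" password="testpassword" defaultCredentials="false" />
    </smtp>
  </mailSettings>
</system.net>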


No more DAV Protocol for Outlook

Microsoft has just announced that it is preparing to stop supporting the DAV protocol for Outlook®, which means that all Outlook users will either have to switch to Windows Live Mail (recommended) or change their Outlook settings to use the well-known POP3 protocol to communicate with the mail server.

Get the full news here:

Alternatives to DAV for accessing Hotmail on Outlook, Outlook Express, or Entourage


Migrating from .NET 1.1 to 3.5

Everybody likes to embrace the latest technology. We start hunting for it as soon as we realize the existing tool/technology is growing older or not delivering up to expectations. However, we should not forget the difficulties that come along the transition path. Sometimes tools fail to deliver in the way they are marketed. For me, one such tool was Visual Studio 2008.

I recently migrated a fairly big enterprise application from .NET 1.1 to 3.5. I was very happy as the decision was in favor of both the business and the development team. I decided to play it safe and migrated a separate copy of the application beforehand to use as a reference in the future. Obviously, I was going to face problems, as migrating directly from 1.1 to 3.5 is not recommended on the forums, but in my experience it is not a big deal as long as your application has a good architecture and you know how to use the Visual Studio Find and Replace utility :).

The purpose of this post is to inform readers of the steps they should take to avoid spending too much time on migration. I encountered the following errors before and after installing Visual Studio 2008 SP1. Make sure you run through the list below before you migrate your web application to .NET 3.5.

  • The very first problem you might face is that the .designer.vb file has not been generated for all of the web pages in your application after running the automatic migration wizard for the first time. You need to right-click the project and select "Convert to Web Application", possibly several times, until you see all of the designer files in the project directory.
  • If you stop the migration process halfway through for any reason, then make sure you follow the above step again or the IDE will show the designer files as excluded from the project.

  • Stopping the process halfway through might also leave some *.vb files with a *.vb.old extension. These are the files that were successfully converted last time. You can simply delete them at any time during or after the exercise.

  • At one point, upon compiling the project, I got many "[Variable_Name] not declared" errors. After digging into the code, I spotted that some of the class declarations looked like [Partial] Public Class [Class_Name]. If you witness the same thing, all you need to do is replace [Partial], which is not a keyword in .NET, with Partial using Visual Studio's Find and Replace in Files. You can press Ctrl+Shift+H to fire up the utility.

  • The biggest of all the problems I faced immediately after installing VS2008 SP1 was the way the IDE created the designer files. I started receiving hundreds of "[Variable_Name] is already declared as 'Protected [Variable_Name] As [Class_Name]' in this class" errors. Upon investigation, I found that the Visual Studio 2008 IDE didn't remove the control declarations from the *.vb file after creating the same declarations in *.designer.vb while converting to a web application, resulting in a large number of duplicate declaration errors. I couldn't find any shortcut to get rid of them and had to fix every file manually.
  • One of the syntax changes introduced in .NET 3.5 is the way it generates child control IDs in the HTML. While using a Repeater and a DataGrid on a web page, the following change caught my attention:

    repeater1__ctl0_checkbox1 (.NET 1.1)
    repeater1_ctl00_checkbox1 (.NET 3.5)

  • One of the good things about the latest version of the .NET framework is the availability of security features like event validation and request validation. The former reduces the risk of unauthorized postback requests and callbacks, and the latter protects against cross-site scripting (XSS) by rejecting any HTML in a form post. Although both of these features can be disabled at the page level, that is generally not recommended for security reasons. I took advantage of event validation by overriding the Render method of the base page:
    Protected Overrides Sub Render(ByVal writer As System.Web.UI.HtmlTextWriter)
    ClientScript.RegisterForEventValidation([control].UniqueID, String.Empty)
    MyBase.Render(writer)
    End Sub
    However, for request validation, I had to disable it as my application needs to accept some HTML elements. Check out the link below for more details:
    Protecting Against Script Exploits in a Web Application

So much, so far. I will keep posting more errors as they come. If you have faced an error which is not in the above list, do share it with me and I will add it here.

Relation b/w SQL Server transaction isolation levels and locks

I had a very good discussion with a colleague at work about the impact of SQL statements within the scope of a transaction. We were trying to optimize a stored procedure for minimum execution time. While going through the SP, at one point we found a SELECT statement followed by an UPDATE statement inside a transaction, something like this:

BEGIN TRANSACTION

SELECT * FROM dbo.authors WHERE au_fname LIKE 'Johnson'

UPDATE authors SET au_fname = 'Johnson1' WHERE au_id = '172-32-3176'

COMMIT TRANSACTION

My colleague was of the view that one should always keep a transaction as small as possible and that a SELECT statement should not unnecessarily be made part of it. This helps ensure the locks are held for the minimum period of time and maximizes the availability of the table or rows to other transactions. I couldn't disagree with him on this.

He added that if the SELECT statement does not contribute to the overall outcome of the transaction, then there is no point in having it inside the transaction. In light of this argument, we can easily take the SELECT statement out of the transaction mentioned above. However, part of his argument spurred me to do a little research on SQL Server transaction isolation levels and locking: he said that all the locks will be held, and the table/rows will be unavailable to other transactions, right from the start of the transaction.

According to my understanding, the transaction isolation level defines the behavior of locks. SELECT statements acquire a SHARED lock while UPDATE, DELETE and INSERT statements acquire an EXCLUSIVE lock. When executing a SELECT under the READ COMMITTED isolation level, which is the default for SQL Server, the shared locks are released as soon as the SELECT statement finishes executing; SQL Server will not wait for the transaction to be over and will allow other transactions to modify the table or rows. However, under the REPEATABLE READ isolation level, every other process will have to wait for the locks regardless of whether the SELECT statement has finished executing, because the shared locks are held until the transaction completes. The definition of REPEATABLE READ from MSDN below helps us understand this point.

"REPEATABLE READ specifies that statements cannot read data that has been modified but not yet committed by other transactions and that no other transactions can modify data (though they can add new rows) that has been read by the current transaction until the current transaction completes"

BEGIN TRANSACTION plays no role in defining the scope of the locks, which are not acquired until SQL Server reaches a particular statement (SELECT, INSERT, UPDATE, DELETE). Based upon the definitions of the different isolation levels and locks, we can easily understand the association between them. In light of this analysis, for the above transaction I came to the understanding that it makes no difference whether we keep the SELECT statement inside or outside the transaction. It might take longer to come back because of the SELECT statement, but it will not prevent other transactions from acquiring locks until it reaches the UPDATE statement. I supported my argument by running the following test:

Process 1:

Use Pubs

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

BEGIN TRANSACTION

select * from authors WHERE au_fname like 'Johnson' /* Statement 1 */

Insert Into authors values('172-32-4188', 'ben', 'Johnson', '406 496-7229', 'address', 'city', 'CA', 94445, 1) /* Statement 2 */

select * from authors WHERE au_fname like 'Johnson' /* Statement 3 */

Commit Transaction

Process 2:

Use Pubs

Insert Into authors values('172-32-4188', 'ben', 'Johnson', '406 496-7229', 'address', 'city', 'CA', 94445, 1) /* will get executed */

UPDATE authors set au_fname = 'Johnson1' where au_id = '172-32-3176' /* will have to wait until the transaction in Process 1 is finished */

Enterprise Library customization

Microsoft Enterprise Library is undoubtedly a great resource to support your .NET development. It is a bundle of excellent tools and programming libraries that facilitate best practices in core areas of programming, including data access, security, logging, caching and exception handling, among others. Using the library without needing to change any of its code is fairly easy and simple. However, when customizing it, we need to take special care as we might need to touch more than just source code. Today I went through this customization process myself and faced a few difficulties which are worth mentioning here in order to help others avoid the same. I wanted to modify the contents of the {Message} token for the text formatter template, and what I experienced was:

  • The EntLib41Src folder is the place to start; it has everything, including the blocks' source code, scripts to build the modified libraries, the QuickStarts, etc. We are not supposed to touch the actual Enterprise Library installation folder.

  • Before we actually proceed, we can open QuickStarts projects to help ourselves identify the file and method that we wish to change.

  • Now we modify the code. I commented out the calls to WriteDescription() and WriteDateTime(DateTime.UtcNow) inside the Format method, and the calls to this.WriteSource and this.WriteHelpLink inside the WriteException method, in ExceptionFormatter.cs.

  • We need to create a strong name key in order to generate strong-name EL assemblies. Check this out here: Strong Naming the Enterprise Library Assemblies

  • We can execute BuildLibrary.bat and CopyAssemblies.bat in order to build the modified assemblies and copy them to EntLib41Src\bin\ respectively.

  • Reference the assemblies in the project.

  • Make sure that from that point onwards we use the right copy of the Enterprise Library Configuration tool, which is EntLib41Src\bin\EntLibConfig.exe; otherwise we might end up facing errors and exceptions related to assembly versioning and manifest definition mismatches, as Tom Hollander describes on his blog here: Avoiding configuration pitfalls with incompatible copies of Enterprise Library

Khallas! we are done.

Can't delete/rebuild full text catalog

A couple of days ago, while working with SQL Server, I found myself in a catch-22 situation. We recently moved our database server and copied all objects, including full-text catalogs, to a new location. While enabling full-text search on the new database, I realized that the full-text catalogs were still pointing to the old location and needed changing. Frankly speaking, I didn't know how to do that, so I decided to drop the index and recreate it as an easy way around it. However, in an effort to do so, I got an error saying "Full text is not enabled on the database. Run sp_fulltext_database 'enable'". When I ran that SP, I got another error saying "F:\MSSQL\FTData\SH_FT_CA..." doesn't exist". It was interesting to find that I couldn't drop the catalog because full text was disabled, but couldn't enable it either because the old path didn't exist at all. Wow! What a deadlock.

Well, with a little effort, I figured out a manual workaround, which is nothing but running an UPDATE command on the sysfulltextcatalogs table. However, before I could do that, I had to enable updates for the current server by executing sp_configure with the 'allow updates' option. Long story short, the following set of queries helped me update the path and eventually made it possible to drop the catalogs.


SELECT * FROM sysfulltextcatalogs

SP_CONFIGURE 'allow updates', 1
GO
RECONFIGURE WITH OVERRIDE
GO

UPDATE sysfulltextcatalogs SET path = 'D:\Microsoft SQL Server\MSSQL\FTData' WHERE ftcatid = 5

RowFilter in presense of DataRelations

We know the role of the DataRelation class in .NET − it helps create parent-child relationships between tables in a DataSet. We also know that it is an association formed by declaring primary and foreign key columns when creating the relationship. I have seen many examples of using this class, especially this one in an article on MSDN:

DataRelation custOrderRel = custDS.Relations.Add("CustOrders",
custDS.Tables["Customers"].Columns["CustomerID"],
custDS.Tables["Orders"].Columns["CustomerID"]);
foreach (DataRow custRow in custDS.Tables["Customers"].Rows)
{
Console.WriteLine(custRow["CustomerID"]);
foreach (DataRow orderRow in custRow.GetChildRows(custOrderRel))
Console.WriteLine(orderRow["OrderID"]);
}
The above code is doing nothing but creating a relationship, "CustOrders", and looping through parent and child rows to dump their contents to the console. The GetChildRows() method works pretty well as long as no explicit filter is applied to any of the related tables in the DataSet. Let's assume that the Customers table contains 3 customers and the Orders table stores some orders for these customers. In that situation, when no filter is applied, the output may look like:
    

Customer ID | Order ID
1           | 101
1           | 102
2           | 201
2           | 202
2           | 203
3           | 301
3           | 302


However, consider the situation where we are only interested in the list of orders that belong to CustomerID = 1. CreateChildView() is the method to go with. This method helps get the rows from the child table that are associated with the filtered rows from the parent table.

DataRelation custOrderRel = custDS.Relations.Add("CustOrders",
custDS.Tables["Customers"].Columns["CustomerID"],
custDS.Tables["Orders"].Columns["CustomerID"]);

DataView dv = custDS.Tables["Customers"].DefaultView;
dv.RowFilter = "CustomerID = 1";

foreach (DataRowView drv in dv)
{
Console.WriteLine(drv["CustomerID"]);
foreach (DataRowView orderRow in drv.CreateChildView(custOrderRel))
Console.WriteLine(orderRow["OrderID"]);
}

Customer ID | Order ID
1           | 101
1           | 102

Sort By Column Value

We sort by column names almost every day while writing database queries. Below is one such simple query:

SELECT customerid, employeeid, orderdate  FROM dbo.orders  ORDER BY customerid

But today I faced a situation where I had to write a query that returns all the rows, with some rows appearing at the top of the list if they match a specified criterion. Let's take the example of the above query, where we want to get all the rows but those having customerid 'VINET' should appear on top. I came up with the query below:

DECLARE @customerid AS VARCHAR(100)
SET @customerid = 'VINET'

SELECT customerid, employeeid, orderdate,
       (CASE @customerid WHEN '' THEN NULL WHEN customerid THEN customerid END) AS 'sortColumn'
FROM orders
ORDER BY sortColumn DESC, employeeid

In the above query, 'sortColumn'  is a temporary column that is being used just to sort the whole result set by its value. 

This might not be a perfect solution, as I am not a SQL expert, but I am content with it for the time being as long as it serves the purpose. If you think there is a much better way of doing the same thing, you are more than welcome to share it here.

SSAS Server Time Dimension

While working with SQL Server Analysis Services, I found the server time dimension quite interesting because the data for this dimension does not come from a dimension table in the data warehouse; it is generated by Analysis Services and stored in a proprietary file structure on the server. We simply specify the beginning and end dates of the dimension, select the time periods to include, such as year, quarter, month, or date, and choose the special calendars, if any, to add to the dimension. When we create a Server Time dimension, no dimension table is created; the data for this dimension type is maintained solely by Analysis Services. In order to use this dimension with a fact table, we need a date/time column instead of a dimension key in the fact table.

It is important to understand that Analysis Services will use only the date part of this column to join the fact table with the Server Time dimension, which means we need to remove the time part from the date/time column in our data warehouse table. This can easily be achieved by creating a Named Calculation on the data warehouse table. Using a Named Query or a Named Calculation gives us the ability to manipulate the data structures for use by Analysis Services even if we don't have permission to make similar changes at the database level.

A Named Calculation can be created by following the steps below.
  • In Solution Explorer, double-click SSAS Step by Step DW.dsv to open the Data Source View Designer.
  • Right-click the table that needs the Named Calculation and select New Named Calculation.
  • In the Column name box, enter a name of your choice. Note that this Named Calculation will act as a column of the table and will be used instead of the date/time column.
  • In the Expression column, we can use the SQL CONVERT function to get the date-only part of the date/time column.

    CONVERT(varchar(11), CreatedDate, 20)




    N.B. In the above statement we are converting a date/time column (CreatedDate) into a format that contains only the date part. We use 20 as the third parameter, which corresponds to the yyyy-mm-dd format; this is the format the server time dimension uses for the dates it generates and stores in its file.

That is it. We can now use the Named Calculation (Calendar, in my case) anywhere in our OLAP cube.

Configuration Files for Class Library Projects



While working with Windows Forms and Web Forms applications in .NET 1.1, we realize that we can easily load application settings from configuration files. The .NET configuration architecture makes it very easy to load these files and read them in the application at runtime. But there are times when we develop complex business components which must have their own configuration data. Since these library components are independent of the applications in which they are loaded, it makes sense for them to have their own configuration manager. This article explains how we can develop a configuration manager for a DLL... Read More
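The linked article has the full details; as a rough illustration of the idea (not the article's actual implementation), a library can read its own XML settings file named after the assembly, for example MyLibrary.dll.config. The file name and key below are hypothetical:

using System.Reflection;
using System.Xml;

public class LibraryConfiguration
{
    // Reads a single setting from "<assembly location>.config", e.g. MyLibrary.dll.config
    public static string GetSetting(string key)
    {
        string configPath = Assembly.GetExecutingAssembly().Location + ".config";

        XmlDocument doc = new XmlDocument();
        doc.Load(configPath);

        XmlNode node = doc.SelectSingleNode(
            "configuration/appSettings/add[@key='" + key + "']");

        return node == null ? null : node.Attributes["value"].Value;
    }
}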

MultiSelect Dropdown Control

For an Ajax-based Multiselect dropdown, please visit this link


It’s an easy-to-use and lightweight control. The code is also fairly simple to understand. I have developed it using .NET and C# on the server side and JavaScript for client-side scripting. Although it’s not a full-blown web server control, it does provide some useful features that help users display and manage information on the page easily. Following is a small list of those features... Read More