One of the J2ME applications I had been involved with ran into a big problem when we tested it for boundary conditions. When the client sent less than 2048 bytes, the server handled the request correctly: the Content-Length header was set correctly on the client and was available at the server. The J2ME client uses the javax.microedition.io.HttpConnection class to make the HTTP connection to the application server. Here is the code snippet:

[code lang="java"]
import java.io.DataOutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

HttpConnection httpCon = (HttpConnection) Connector.open(url, Connector.READ_WRITE);
httpCon.setRequestMethod(HttpConnection.POST);
httpCon.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
// Set headers
httpCon.setRequestProperty("Content-Language", "en-US");
httpCon.setRequestProperty("Accept-Language", "en");
// Send request
if (data != null) {
    // Use the byte length, not the character length, so that
    // Content-Length matches what is actually written.
    byte[] payload = data.getBytes();
    httpCon.setRequestProperty("Content-Length", Integer.toString(payload.length));
    DataOutputStream os = httpCon.openDataOutputStream();
    os.write(payload);
    os.close();
}
[/code]

When the client data exceeded 2048 bytes, I noticed from the Network Monitor tool of the Wireless Toolkit that the Content-Length header would vanish and a Transfer-Encoding header with the value "chunked" would be inserted in its place. A little digging revealed that the problem was a known one, referred to as HTTP chunking: in brief, HTTP 1.1 provides for chunked encoding, which allows large messages to be split into smaller chunks and thus paves the way for persistent connections. There were some previous discussions with no headway here and here, and none of them discusses how to deal with the problem on the server side. Notice that the client-side code above already handles the chunking as advised in those sources.
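To make the chunking concrete, here is roughly what such a request looks like on the wire (the path and host are made up, and the split point is just an example): the body is framed as size-prefixed chunks, with each size given in hex, terminated by a zero-length chunk. A 2049-byte body could, for instance, be framed as one 0x800 (2048-byte) chunk plus a 1-byte chunk:

[code]
POST /myapp/MyServlet HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Transfer-Encoding: chunked

800
<first 2048 bytes of the form data>
1
<the remaining byte>
0

[/code]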

I discussed this with Eric Giguere, who said:

The WTK does indeed switch to chunked encoding once the data you post goes over 2K. Chunked encoding is part of the HTTP 1.1 specification and so the WTK expects the web server to deal correctly with it. However, web servers seem to vary a lot in their handling of it. There's nothing you can do about this, unfortunately, you have to use a web server that handles the chunked encoding properly.

The thing that perplexes me is that my application server, WebSphere 4.x, does support HTTP 1.1; it was even able to send data in chunks to my J2ME client, which handled it nicely, but the reverse was not true. In fact, the Network Monitor told me that as soon as the data reached even 2049 bytes, the headers were sent correctly (again with no Content-Length and with chunked Transfer-Encoding), but the body was simply empty. Since the body arrived empty, my servlet was unable to procure any request parameter and the request bombed.
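One possible server-side approach, sketched below, is to skip getParameter() entirely and read the raw body to end-of-stream, so the servlet never depends on a Content-Length header. This is only a sketch: it assumes the container (unlike WebSphere 4.x in my case, apparently) actually decodes the chunked framing and hands the body through, and the class name and the parameter parsing are illustrative.

[code lang="java"]
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ChunkedAwareServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        // Read until EOF instead of trusting getContentLength(),
        // which returns -1 for a chunked request.
        InputStream in = req.getInputStream();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] block = new byte[1024];
        int n;
        while ((n = in.read(block)) != -1) {
            buf.write(block, 0, n);
        }
        String body = new String(buf.toByteArray());
        // Parse the x-www-form-urlencoded pairs from body ourselves
        // here, rather than relying on req.getParameter(...).
    }
}
[/code]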

I could even have dismissed this as an error in the Network Monitor tool, or as something that only happens on the emulators, but the problem was reproduced on the actual device as well. Ultimately we had to curtail the amount of data we send in one go to work around the problem, but as you might have guessed from this post, I am still looking for some sane explanation of this.
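A minimal sketch of that kind of workaround follows: split the data into pieces safely below the 2K threshold and POST each piece separately, so the WTK never switches to chunked encoding. The sendInParts() helper and the 2000-byte limit are illustrative assumptions on my part, and the server of course has to reassemble the pieces.

[code lang="java"]
import java.io.DataOutputStream;
import java.io.IOException;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

// Keep each POST safely under the WTK's 2048-byte threshold so it
// never switches to chunked encoding. The limit is illustrative.
private static final int MAX_PART = 2000;

void sendInParts(String url, byte[] data) throws IOException {
    int offset = 0;
    while (offset < data.length) {
        int len = Math.min(MAX_PART, data.length - offset);
        HttpConnection httpCon = (HttpConnection) Connector.open(url, Connector.READ_WRITE);
        try {
            httpCon.setRequestMethod(HttpConnection.POST);
            httpCon.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            httpCon.setRequestProperty("Content-Length", Integer.toString(len));
            DataOutputStream os = httpCon.openDataOutputStream();
            os.write(data, offset, len);
            os.close();
        } finally {
            httpCon.close();
        }
        offset += len;
    }
}
[/code]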