Piping problem

I recently had a task that required calling a few Perl routines from inside a Java class. What's the big deal, you ask? Runtime.exec is what one needs. You'd be right: it comes to the rescue when one needs to call a non-Java process or an executable. With it, I was able to call most of the said routines, except a few that used piping.

Now, a pipe is a one-way I/O channel that transfers a stream of bytes from one process to another: the standard output of one process is connected to the standard input of the next, so anything the first process prints to STDOUT becomes standard input for the other process.

With a piped routine call like the following, only routine1 got executed; routine2 didn't.

./ param1 | ./ -directory  param2

Some searching revealed that other people were facing the same problem, and I had no clue how to feed the standard output from the Process object into the second routine.

I finally found the solution here, like light at the end of the tunnel. As the article says, Runtime.exec invokes actual executable binary programs. Syntax such as the pipe (|) and > belongs to a particular command processor and is understood only by that processor. So in such calls, the command preceding the pipe is executed, but the rest of the shell command is not. Moreover, invoking the process with a single string argument, say as "sh -c './ param1 | ./ -directory param2'", doesn't work either, because the String passed to the exec method is split into whitespace-separated tokens with no regard for the inner single quotes.
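To see why the single-string form fails, here is a small illustration of my own (not from the article): Runtime.exec(String) tokenizes its argument with a StringTokenizer, which splits on whitespace and knows nothing about shell quoting, so the quoted pipeline never reaches sh intact.

```java
import java.util.StringTokenizer;

public class TokenizeDemo {
    public static void main(String[] args) {
        // Runtime.exec(String) splits its argument on whitespace much
        // like this, ignoring the inner single quotes entirely.
        String cmd = "sh -c 'echo hello | tr a-z A-Z'";
        StringTokenizer st = new StringTokenizer(cmd);
        while (st.hasMoreTokens()) {
            System.out.println(st.nextToken());
        }
        // sh ends up receiving "'echo" as the argument to -c,
        // not the whole quoted pipeline.
    }
}
```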

Without further ado, let me reveal that invoking the process as follows is what works:

// The pipeline is passed as a single argument to sh -c, so the shell
// (not Runtime.exec) interprets the | character.
String param = "./ param1 | ./ -directory param2";
Runtime runtime = Runtime.getRuntime();
String[] args = new String[]{"sh", "-c", param};
Process p = runtime.exec(args);
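To actually capture what the pipeline prints, you still read from the Process object's input stream as usual. Here is a self-contained sketch of the same technique (the echo/tr pipeline is just a stand-in for the Perl routines):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PipeDemo {
    /** Runs a shell pipeline via sh -c and returns its standard output. */
    public static String runPipeline(String pipeline) throws Exception {
        String[] args = new String[]{"sh", "-c", pipeline};
        Process p = Runtime.getRuntime().exec(args);
        BufferedReader reader =
            new BufferedReader(new InputStreamReader(p.getInputStream()));
        StringBuilder out = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            out.append(line).append('\n');
        }
        p.waitFor(); // reap the child process
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Both commands run; the pipe is honored by sh, not by exec.
        System.out.print(runPipeline("echo hello | tr a-z A-Z")); // prints HELLO
    }
}
```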

Generics Preview

Generics is the most profound of the changes envisaged in Tiger. Greg Travis has a nice preview of it.

In summary, according to Greg, generics offer the following advantages:

  1. Better compile-time type checking: With generics, the type casting is implicit in the instantiation you are using, and it’s done at compile-time. By using a particular instantiation, you are in essence saying that this Object is really going to be a String, and the compiler will verify whether this is consistent with everything else going on in the program.
  2. Convenience: Casting can be irritating, and it also makes code harder to read, since it can turn a simple assignment (or parameter pass) into a more complicated expression. With generics, casting just goes away.
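Both advantages above can be seen side by side in a short sketch of my own:

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    public static void main(String[] args) {
        // Pre-Tiger: a raw list holds Objects, so an explicit cast is
        // needed and type mistakes surface only at runtime.
        List raw = new ArrayList();
        raw.add("hello");
        String s1 = (String) raw.get(0);

        // With generics, the element type is checked at compile time...
        List<String> names = new ArrayList<String>();
        names.add("hello");
        // names.add(new Integer(42)); // would not compile
        String s2 = names.get(0); // ...and the cast goes away.

        System.out.println(s1.equals(s2)); // prints true
    }
}
```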

The chunked problem

One of the J2ME applications I had been involved in ran into a big problem when tested at boundary conditions. When the client sent fewer than 2048 bytes, the server was able to handle it correctly: the Content-Length header was set correctly by the client and was available at the server. The J2ME client uses the HttpConnection class to make an HTTP connection to the application server. Here is the code snippet:

// url holds the servlet's address (placeholder; the original value was elided)
HttpConnection httpCon = (HttpConnection), Connector.READ_WRITE);
// Set headers
httpCon.setRequestProperty("Content-Language", "en-US");
httpCon.setRequestProperty("Accept-Language", "en");
// Send request
if (data != null) {
	int len = data.length();
	httpCon.setRequestProperty("Content-Length", Integer.toString(len));
	os = httpCon.openDataOutputStream();

When the client data exceeded 2048 bytes, I noticed from the Network Monitor tool of the Wireless Toolkit that the Content-Length header would vanish and a Transfer-Encoding header with the value "chunked" was inserted instead. A little digging revealed that the problem was a known one, referred to as HTTP chunking: in brief, HTTP 1.1 provides chunked encoding, letting large messages be split into smaller chunks and thus paving the way for persistent connections. There were some previous discussions, with no headway, here and here. Furthermore, none of them discusses how to deal with the problem on the server side. Notice that the client-side code handles the chunking problem as advised in the mentioned source.
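On the server side, one defensive approach (a sketch of my own, not what we ultimately did) is to avoid trusting getContentLength() — which is -1 for chunked requests — and instead read the request body's input stream to EOF. A plain-Java helper for that looks like:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BodyReader {
    /**
     * Reads an input stream to EOF. This works whether or not a
     * Content-Length header was sent, i.e. also for chunked bodies
     * that the container has already decoded, so a servlet can call
     * this on request.getInputStream() instead of relying on
     * getContentLength().
     */
    public static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[1024];
        int n;
        while ((n = != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }
}
```

Whether this helps in practice still depends on the container having decoded the chunked body correctly, which, as described below, is exactly where WebSphere 4.x fell over for me.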

I discussed this with Eric Giguere who said that

The WTK does indeed switch to chunked encoding once the data you post goes over 2K. Chunked encoding is part of the HTTP 1.1 specification and so the WTK expects the web server to deal correctly with it. However, web servers seem to vary a lot in their handling of it. There's nothing you can do about this, unfortunately, you have to use a web server that handles the chunked encoding properly.

The thing that perplexes me is the fact that my application server, WebSphere 4.x, does support HTTP 1.1; it was even able to send data in chunks to my J2ME client, which handled it nicely, but the reverse was not true. In fact, the Network Monitor told me that as soon as the data reached even 2049 bytes, the headers were sent correctly (again with no Content-Length and with chunked Transfer-Encoding), but the body was simply empty. Since the body arrived empty, my servlet was unable to procure any request parameters and the request bombed.

I could even have written this off as the Network Monitor tool being erroneous, or as a problem that only happens on the emulators, but the problem was also detected on an actual device. Ultimately we had to curtail the amount of data we send in one go to work around the problem, but as you might have guessed from this post, I am still looking for a sane explanation.