(experimental source/binary patch below for the impatient)
While setting up clustering with Glassfish 3.1.1, we noticed that our EAR application took ages to deploy: 350 seconds on the DAS, where the application isn't even started. At first I didn't understand why: the same application deploys in about 120 seconds on our production infrastructure, on similar machines.
Now, the strange thing is that it seems to spend most of its time unzipping the application (a 90 MB EAR). Some quick tests showed that the same operation (unzipping all 41 modules) took 2.5 seconds from the command line.
After digging into the Glassfish sources, I found a way to improve things.
Here is how ear deployment works now:
for each module M of ear
    get the names of the files that are in module M
    for each filename F
        extract(M, F)
Now, the method which extracts the file seems to work like this:
find M in ear
for each file F' in M
    if F == F'
        return F
It seems that, although the extract method is faster than I'd have thought, it's still not fast enough. In particular, since every extract(M, F) rescans the module from the beginning, the whole loop is O(n²) in the number of entries.
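To make the cost concrete, here is a minimal, self-contained sketch of this per-file linear-scan pattern with java.util.jar. This is not the Glassfish code; the class and helper names are mine, and the sample jar stands in for a module of the EAR.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;
import java.util.jar.JarOutputStream;

public class LinearScanExtract {

    // Reopens the archive and scans it sequentially until the entry is found,
    // as the extract() pseudocode above does.
    static byte[] extract(File jar, String entryName) throws IOException {
        try (JarInputStream jis = new JarInputStream(new FileInputStream(jar))) {
            for (JarEntry je; (je = jis.getNextJarEntry()) != null; ) {
                if (je.getName().equals(entryName)) {
                    return jis.readAllBytes(); // reads only the current entry
                }
            }
        }
        return null;
    }

    // Builds a small jar with entries a.txt, b.txt, c.txt for the demo.
    static File sampleJar() throws IOException {
        File f = File.createTempFile("sample", ".jar");
        f.deleteOnExit();
        try (JarOutputStream jos = new JarOutputStream(new FileOutputStream(f))) {
            for (char c = 'a'; c <= 'c'; c++) {
                jos.putNextEntry(new JarEntry(c + ".txt"));
                jos.write(String.valueOf(c).getBytes());
                jos.closeEntry();
            }
        }
        return f;
    }

    public static void main(String[] args) throws IOException {
        File jar = sampleJar();
        // Every call restarts the scan from the first entry, so extracting
        // all n entries this way reads O(n^2) entries in total.
        for (String name : List.of("a.txt", "b.txt", "c.txt")) {
            System.out.println(name + " -> " + new String(extract(jar, name)));
        }
    }
}
```

With 41 modules and thousands of entries, restarting the scan for every file is exactly where the deployment time goes.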
As another quick test, I tried to change the extract method to be faster. With something like this:
if M isn't on the disk somewhere, create a temp file T and write the content of M there
T.getEntry(M, F)
return F
For this, I used java.util.jar.JarFile.getEntry(). This method is much faster: I don't know its exact footprint, but the deployment time dropped considerably:
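For comparison, here is a minimal sketch of the JarFile.getEntry() approach (the class and sample-jar helper are mine). JarFile reads the zip central directory once when opened, so each lookup is a direct one instead of a scan from the first entry.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.zip.ZipEntry;

public class GetEntryLookup {

    // Builds a small jar with entries a.txt, b.txt, c.txt for the demo.
    static File sampleJar() throws IOException {
        File f = File.createTempFile("sample", ".jar");
        f.deleteOnExit();
        try (JarOutputStream jos = new JarOutputStream(new FileOutputStream(f))) {
            for (char c = 'a'; c <= 'c'; c++) {
                jos.putNextEntry(new JarEntry(c + ".txt"));
                jos.write(String.valueOf(c).getBytes());
                jos.closeEntry();
            }
        }
        return f;
    }

    public static void main(String[] args) throws IOException {
        File jar = sampleJar();
        try (JarFile jf = new JarFile(jar)) {
            // Direct lookup via the central directory: no sequential scan
            // through the entries that precede b.txt.
            ZipEntry ze = jf.getEntry("b.txt");
            try (InputStream in = jf.getInputStream(ze)) {
                System.out.println("b.txt -> " + new String(in.readAllBytes()));
            }
        }
    }
}
```

The catch, and the reason for the temp file in my patch, is that JarFile needs a real file on disk, while the embedded module only exists as an entry inside the EAR.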
EPCFull was successfully deployed in 32,283 milliseconds.
This is more than a 10:1 improvement :-)
And there is more: I don't see any reason for the unzipping to take much longer than it does from the command line. So this big app could deploy in... 2 seconds (I leave the starting of the application aside for now; I know that will take time). A developer's dream!
Indeed, a quick sample with the same archive and java.util.jar shows that there is room for further (big) improvements.
So, why does it work like this in Glassfish?
Actually, there is some sense to it. A couple of abstraction layers make it possible to use the same code for different scenarios. Glassfish copies files from a ReadableArchive to a WritableArchive; these could be anything, from a jar to a directory or /dev/null.
In our case, the ReadableArchive is implemented by InputJarArchive. The class responsible for deployment, GenericHandler.java in the internal-api module, doesn't know about the implementation details. It just asks for a list of filenames, then asks for a copy of each file, one at a time. This is what leads to the O(n²) problem.
How can this be improved much further than the patch I applied (and without temp files)? Well, the module which implements JarArchive (deployment-common) could offer a method which copies the whole jar to some destination (a WritableArchive) at once. This requires modifications in at least three modules, but would give lightning-fast unzipping, which seems to be the biggest part of the time in our case.
(UPDATE: I did just that, and now have 12 seconds for unzipping, which is good but could go down to 4)
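Outside of the Glassfish APIs, a sketch of what "copying the whole jar at once" could look like (class and helper names are mine) is a single sequential pass that extracts every entry while reading each one exactly once, i.e. O(n) instead of O(n²):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;
import java.util.jar.JarOutputStream;

public class BulkExtract {

    // Copies every entry to destDir in one sequential pass over the archive.
    static int extractAll(File jar, Path destDir) throws IOException {
        int count = 0;
        try (JarInputStream jis = new JarInputStream(new FileInputStream(jar))) {
            for (JarEntry je; (je = jis.getNextJarEntry()) != null; ) {
                Path out = destDir.resolve(je.getName()).normalize();
                if (!out.startsWith(destDir)) continue; // guard against "zip slip"
                if (je.isDirectory()) {
                    Files.createDirectories(out);
                } else {
                    Files.createDirectories(out.getParent());
                    Files.copy(jis, out, StandardCopyOption.REPLACE_EXISTING);
                    count++;
                }
            }
        }
        return count;
    }

    // Builds a small jar with entries a.txt, b.txt, c.txt for the demo.
    static File sampleJar() throws IOException {
        File f = File.createTempFile("sample", ".jar");
        f.deleteOnExit();
        try (JarOutputStream jos = new JarOutputStream(new FileOutputStream(f))) {
            for (char c = 'a'; c <= 'c'; c++) {
                jos.putNextEntry(new JarEntry(c + ".txt"));
                jos.write(String.valueOf(c).getBytes());
                jos.closeEntry();
            }
        }
        return f;
    }

    public static void main(String[] args) throws IOException {
        File jar = sampleJar();
        Path dest = Files.createTempDirectory("bulk");
        int n = extractAll(jar, dest);
        System.out.println("extracted " + n + " files");
        System.out.println("a.txt -> " + Files.readString(dest.resolve("a.txt")));
    }
}
```

In Glassfish terms, the destination directory would be the WritableArchive; the point is that the archive is traversed once, instead of once per file.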
(btw, this isn't related to http://java.net/jira/browse/GLASSFISH-17094)
Here is my Glassfish patch with temp files, which already gives some big improvement in our case:
(module deployment-common, com.sun.enterprise.deployment.deploy.shared.InputJarArchive.java)
# This patch file was generated by NetBeans IDE
# It uses platform neutral UTF-8 encoding and \n newlines.
--- Base (BASE)
+++ Locally Modified (Based On LOCAL)
@@ -61,6 +61,7 @@
import java.util.zip.ZipEntry;
import java.net.URI;
import java.net.URISyntaxException;
+import org.glassfish.api.deployment.archive.WritableArchive;
/**
* This implementation of the Archive deal with reading
@@ -247,27 +248,40 @@
}
} else
if ((parentArchive != null) && (parentArchive.jarFile != null)) {
- JarEntry je;
- // close the current input stream
- if (jarIS!=null) {
- jarIS.close();
- }
-
- // reopen the embedded archive and position the input stream
- // at the beginning of the desired element
- JarEntry archiveJarEntry = (uri != null)? parentArchive.jarFile.getJarEntry(uri.getSchemeSpecificPart()) : null;
+ JarEntry archiveJarEntry = (uri != null) ? parentArchive.jarFile.getJarEntry(uri.getSchemeSpecificPart()) : null;
if (archiveJarEntry == null) {
return null;
}
- jarIS = new JarInputStream(parentArchive.jarFile.getInputStream(archiveJarEntry));
- do {
- je = jarIS.getNextJarEntry();
- } while (je!=null && !je.getName().equals(entryName));
- if (je!=null) {
- return new BufferedInputStream(jarIS);
- } else {
- return null;
+ InputStream inputStream = parentArchive.jarFile.getInputStream(archiveJarEntry);
+ BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);
+
+ File tempFile = File.createTempFile("deploy_" + parentArchive.getName() + "_" + archiveJarEntry.getName().replace('/', '_') + "_", ""); // entry names may contain '/'
+ tempFile.deleteOnExit();
+
+ BufferedOutputStream bufferedOutputStream = null;
+ try {
+ FileOutputStream fileOutputStream = new FileOutputStream(tempFile);
+ bufferedOutputStream = new BufferedOutputStream(fileOutputStream);
+
+ FileUtils.copy(bufferedInputStream, bufferedOutputStream, archiveJarEntry.getSize()); // write through the buffered stream
+ } finally {
+ if (bufferedOutputStream != null) {
+ bufferedOutputStream.close();
}
+ bufferedInputStream.close();
+ }
+
+
+ jarFile = new JarFile(tempFile);
+
+ ZipEntry ze = jarFile.getEntry(entryName);
+
+ if (ze != null) {
+ return new BufferedInputStream(jarFile.getInputStream(ze));
+ }
+
+ return null;
} else {
return null;
}