Thoughts on…

Java Middleware & Systems Management


GWT Compilation Performance


A few weeks ago I noticed that the coregui module of RHQ, which is the next generation web interface written in GWT (SmartGWT, to be precise), started to have its compilation slow down…noticeably. Not only did the overall time to compile the module increase, but during the build my box seemed to be more or less locked up. Even mouse gestures were choppy. So…I decided to investigate.

I started by writing a script that would compile the coregui module for me. It was important to gauge how the different arguments to the maven gwt:compile goal would affect the system during the build. After reading through the documentation, I decided that ‘localWorkers’ and ‘extraJvmArgs’ were the two most important parameters to test different permutations of. Here is the script I came up with:


#!/bin/bash
#
# INTERVAL - the minimum amount of memory to use for gwt:compile
# STEPS    - the number of increments of interval
#            e.g. 4 steps of 256 interval = tests of 256, 512, 768, 1024
#            e.g. 5 steps of 200 interval = tests of 200, 400, 600, 800, 1000
# RUNS     - number of iterations at each (INTERVAL,STEP)
INTERVAL=256
STEPS=4
RUNS=10

declare -a AVERAGES
declare -a VARIANCES

function build {
    workers=$1
    memory=$2

    vmArgs="-Xms${memory}M -Xmx${memory}M"

    mvnArgs="-Pdev -DskipTests -o"
    # The property names below assume the build exposes localWorkers and
    # extraJvmArgs as overridable properties; adjust to match your pom.xml.
    gwtWorkers="-Dgwt-plugin.localWorkers=$workers"
    gwtJvmArgs="-Dgwt-plugin.extraJvmArgs=\"$vmArgs\""
    gwtArgs="$gwtWorkers $gwtJvmArgs"

    cmd="mvn $mvnArgs $gwtArgs gwt:clean install"
    echo "Executing $cmd"

    dirName="${workers}workers_${memory}MB"
    rm -rf "target/runs/$dirName"
    mkdir -p "target/runs/$dirName"

    declare -a raw
    total_seconds=0
    for run in `seq $RUNS`
    do
        outputFile="target/runs/$dirName/run${run}.log"

        before=$(date +%s)
        eval "$cmd" > $outputFile
        after=$(date +%s)

        elapsed_seconds=$(($after - $before))
        raw[$run]=$elapsed_seconds
        total_seconds=$(($total_seconds + $elapsed_seconds))

        echo "Run $run took $elapsed_seconds seconds"
        echo "Execution details written to $outputFile"
    done

    average_seconds=$(expr $total_seconds / $RUNS)

    let "index = (($workers - 1) * $STEPS) + ($memory / $INTERVAL) - 1"

    sum_of_square_deltas=0
    for run in `seq $RUNS`
    do
        let "sum_of_square_deltas += (${raw[$run]} - $average_seconds)**2"
    done
    let "variance = $sum_of_square_deltas / ($RUNS - 1)"

    AVERAGES[$index]=$average_seconds
    VARIANCES[$index]=$variance

    echo "Run Total: $total_seconds seconds"
    echo "Run Average: $average_seconds seconds"
    echo "Run Variance: $variance"
}

function run {
    for workers in `seq 4`
    do
        for offset in `seq $STEPS`
        do
            memory=$(($INTERVAL * $offset))
            build $workers $memory
        done
    done
}

function print {
    echo "Results"
    printf "          "
    for headerIndex in `seq 4`
    do
        printf "% 12d" $headerIndex
    done
    echo ""

    # rows are memory settings, columns are localWorkers
    for offset in `seq $STEPS`
    do
        let "memory = $offset * $INTERVAL"
        printf "% 10d" $memory
        for workers in `seq 4`
        do
            let "index = (($workers - 1) * $STEPS) + ($offset - 1)"
            printf "% 6d" ${AVERAGES[index]}
            printf "%s" "("
            printf "% 4d" ${VARIANCES[index]}
            printf "%s" ")"
        done
        echo ""
    done
}

if [[ $# -ne 1 ]]
then
    echo "Usage: $0 <rhq_repository_path>"
    exit 127
fi

rhq_repository_path=$1
cd $rhq_repository_path/modules/enterprise/gui/coregui

echo "Performing timings..."
run
print
echo "Performance capture complete"

In short, this will build the coregui module over and over again, passing different parameters to it for memory (JVM arguments Xms/Xmx) and threads (GWT compiler argument ‘localWorkers’). And here are the results:

            1           2           3           4
 256   229 (6008)   124 (4)     124 (27)    124 (12)
 512   141 (193)    111 (76)    113 (82)    114 (25)
 768   201 (5154)   115 (98)    123 (57)    195 (2317)
1024   200 (2352)   125 (83)    199 (499)   270 (298)

The columns refer to the number of ‘localWorkers’, effectively the number of concurrent compilation jobs (one for each browser/locale permutation). The rows refer to the amount of Xms and Xmx given to each job. Each cell represents statistics for the build being executed ten times using the corresponding localWorkers and memory parameters. The primary number is the average time in seconds it took to compile the entire maven module. The parenthetical element is the variance (the square of the standard deviation).

So what does this grid tell us? Actually, a lot:

  • End-to-end compilation time suffers when using a single localWorker (i.e., only one permutation of browser/locale is compiled at a time). Not only does it suffer, but it has a high variance, meaning that sometimes the build is fast, and sometimes it isn’t. This makes sense because the compilation processes are relatively independent and generally don’t contend for shared resources. This implies the compilation is a natural candidate for a concurrent/threaded solution, and thus forcing serialized semantics can only slow it down.
  • The variance for 2 or more localWorkers is generally low, except in the lower right-hand portion of the grid, where it starts to increase again. This also makes sense: in the case of 4 threads with 1GB each, the concurrent jobs exhaust the available system memory. This box only had 4GB of RAM, so all of the physical memory was used, which spills over to swap, which in turn makes the disk thrash (verified using ‘free’ and ‘iostat’). Approximately ½GB was being swapped to my 7200rpm disk during the compilation, so regardless of what other activities were occurring on the box, or how the 4 threads were interleaved, the build suffered. Granted, I could mitigate this variance by replacing my spindles with an SSD drive (which I may end up doing), but it is nonetheless important to be mindful of the amount of memory you’re giving the entire compilation process (all worker threads) relative to the amount of available physical RAM.
  • The 512MB row holds the relative minimums with respect to the other timings for the same number of localWorkers. This indicates that 512MB is the sweet spot in terms of how much memory is needed to compile the module. I would surmise that the cost of allocating additional memory (an extra 256MB or 512MB for each of 2, 3, or 4 worker threads) and/or the cost of GC largely accounts for the slightly decreased performance at other memory settings. And then, as mentioned above, with many threads and very high memory, swapping to disk starts to dominate as the performance bottleneck.
  • Aside from end-to-end running time, it’s also important on a developer system to be able to continue working while things are building in the background. It should be noted that each worker thread pegged the core it ran on. For that reason, I’m going to avoid using 3 or 4 workers on my quad-core box: my entire system crawls when all cores are pegged simultaneously, which is exacerbated in the case of high total memory with 3 or 4 workers, where swapping and disk thrashing occur alongside the pegged cores.


On my quad-core box with 4 gigs of RAM, the ideal combination is 2 localWorkers with 512MB. The end-to-end build will consistently (recall the low variance) complete just as quickly as any other combination (recall the low average running time), and it won’t peg my box because only half of my cores will be used for the GWT compilation, leaving the other half for other processes I have running…eclipse, thunderbird, chrome, etc.

So what happens now if I move this script over to a larger box? This new box is also a quad-core, but it has a faster processor and more than twice the amount of RAM (9 gigs). From everything deduced thus far, can you make an intelligent guess as to what might happen?


I anticipated that, with more than twice the amount of RAM, the worst-case permutation (4 localWorkers with 1GB of Xms/Xmx each) would cause little to no swapping, and that, with a more powerful processor, the average running times would come down. And that is exactly what happened:

            1           2           3           4
 256   153 (24)     98 (0)      96 (1)      95 (1)
 512   150 (23)     96 (1)      96 (2)      95 (1)
 768   149 (32)     97 (1)      96 (1)      95 (0)
1024   149 (24)     96 (0)      96 (0)      95 (1)

With nearly non-existent variance, it’s easy to see the net effect of having plenty of physical resources. When the processes never have to go to swap, they can execute everything within physical RAM, and they have much more consistent performance profiles.

As you can see, adding more cores does not necessarily yield large decreases in end-to-end compilation time. This is because (at the time of this writing) RHQ only compiles 6 different browser/locale permutations. As more localizations are added in the future, the number of permutations will naturally increase, making the effect of using more cores that much more dramatic (in terms of decreasing end-to-end compilation times).

Unfortunately, the faster processor on the larger box still got pegged when compiling coregui. So, since 3 or 4 localWorkers don’t result in dramatically improved end-to-end compilation times, it’s still best to use 2 localWorkers on this larger box. I could, however, get away with dropping the memory requirement to 256MB for each worker, since the faster hardware seems to even out the performance profile across memory settings (holding localWorkers constant).

Lessons learned:

  • In any moderately sized GWT-based web application, the compilation will consume a considerable portion of each core it executes on, perhaps even pegging them at 100% for stretches of the compile. Thus, if you want your development box to remain interactive and responsive during the compilation, make sure to set the localWorkers parameter to something less than the number of cores in your system.
  • Don’t throw oodles of memory at the GWT compilation process and expect a speedup. Too much Xms/Xmx will cause the worker threads to spill out of main memory and into swap, which has a detrimental effect on the compilation process itself as well as on other tasks on the box, since I/O requests explode during that time. Modify the script presented here to work with your build, and obtain reasonable timings for your GWT-based web application at different memory levels. Then use only as much RAM as is required to make the end-to-end time reasonable while avoiding swap.
  • Provide reasonably conservative default values for localWorkers and Xms/Xmx so as not to give a negative impression to your project’s contributors. Err on the side of a slow build rather than a build that crushes the machine it runs on.
  • Parameterize your builds so that each developer can easily override (in a local configuration file) his or her preferred values for localWorkers and Xms/Xmx. Provide a modified version of the script outlined here, to allow developers to determine the ideal parameter values on each of their development boxes.

Other tips / tricks to keep in mind:

  • Only compile your GWT application for the browser/locale permutation you’re currently developing and testing on. There’s no need to compile to other languages/locales if you’re only going to be testing one of them at a time. And there’s no need to compile the applications for all browser flavors if the majority of early testing is only going to be performed against one of them.
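In GWT, restricting the permutations is done in the module descriptor. A sketch, assuming you only care about Firefox and English during development (gecko1_8 is the user.agent value covering Firefox):

```xml
<!-- Restrict the compile to a single browser/locale permutation
     for faster development builds. -->
<set-property name="user.agent" value="gecko1_8"/>
<set-property name="locale" value="en"/>
```

Commenting these lines back out restores the full permutation matrix for production builds.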

Recall that at the start of this article I was investigating why my box seemed non-responsive during the compilation of coregui. Well, it turns out that one developer had inadvertently committed very high memory settings, which, as you’ve now seen, can cause large amounts of swapping when localWorkers is also high. The fix was to lower the default localWorkers to 2 and reduce the memory defaults to 512MB. As suggested above, both values are parameterized in the current RHQ build, so developers can easily override either or both to their liking.


Written by josephmarques

July 30, 2010 at 1:42 am

Posted in gwt, java


Java IO Performance


I thought I’d share a good article I just read on IO performance in a Java environment. It compares buffered versus non-buffered streams, both of which are synchronized for thread-safety, versus file channels which are unsynchronized. Going one step further, it also benchmarks the effect that buffer sizes (or “chunked” reads) have in each of those scenarios, as well as the consequence of memory-mapping those buffers.

The comparison to some of the predominant ways to read files in C towards the end adds some nice perspective.

Written by josephmarques

September 29, 2009 at 9:20 pm

Posted in java
