Thoughts on…

Java Middleware & Systems Management

Debugging GWT on Linux

with 2 comments

After waiting with bated breath for what seemed like forever, the GWT Developer Plugin for Chrome on Linux has finally arrived!

Now I can get rid of the custom prebuilts I keep pulling from svn trunk. ; )

Written by josephmarques

December 8, 2010 at 4:44 pm

Posted in gwt


GWT Performance Monitoring

with 12 comments

Profiling a web application for performance bottlenecks requires a balance between focusing on the minutia and taking a step back and looking at the bigger picture. Since nearly all software projects deal with time constraints, a team must wisely choose where to spend its efforts tweaking an application to yield the best experience for its end users.

However, before diving into the code looking for optimizations, there are a few key things that one should understand about the performance profile of an application: Critical Path, Outliers, Variances, and Environment.

Critical Path

In a typical user session, which are the most frequented pages and/or business services? Determining these can help generate a prioritized list of services requiring optimization. For instance, decreasing the running time of a service that is called 50 times during a typical browsing experience is likely to yield significantly more perceived benefit than it would if the service were only called once.

Outliers

It may be the case that one service takes significantly more time to load than any other service on the page. If any of those other services were written in such a way that their execution is contingent on the data returned by the slow service, then the effects of the slow service become magnified. Paying attention to outliers will help one understand whether the perceived performance of one or more pages can be significantly improved by optimizing just a single service.

Variances

Typically, a service will not execute in the exact same number of milliseconds each time it is invoked, but it will execute within some tolerance, e.g. 1100-1300ms 98% of the time. Tracking these variances will help determine whether some service causes random lags in your application that may otherwise be difficult to track down because they occur sporadically. A service that takes 5 times as long to execute in the worst case may indicate the need to rework that implementation.

Environment

Oftentimes, programmers are tempted to introduce optimizations because of something they saw on their local box. However, these development environments often differ significantly from the environments where an application will run once it's deployed to production or shipped to a customer. Application load is another important factor to consider; an application with one logged-in user will not necessarily have the same bottleneck(s) as it would if there were dozens, hundreds, or thousands of logged-in users. Thus, one must consider where an application might be deployed, and use a similar test environment when profiling it for performance issues.


With these principles in mind, I wrote an extension to the GWT client-side RPC mechanism to capture data that will assist in making better informed decisions about which services need optimization. The first thing we have to do is inject a custom ProxyGenerator into the GWT compilation process, via a deferred-binding rule in the module's .gwt.xml file:

<generate-with class="...TrackingServiceInterfaceProxyGenerator">
    <when-type-assignable class="com.google.gwt.user.client.rpc.RemoteService"/>
</generate-with>    

The implementation for that custom generator is as follows:

package org.rhq.enterprise.gui.coregui.user.rebind.rpc;

import com.google.gwt.core.ext.typeinfo.JClassType;
import com.google.gwt.user.rebind.rpc.ProxyCreator;
import com.google.gwt.user.rebind.rpc.ServiceInterfaceProxyGenerator;

public class TrackingServiceInterfaceProxyGenerator extends ServiceInterfaceProxyGenerator {
    @Override
    protected ProxyCreator createProxyCreator(JClassType remoteService) {
        return new TrackingProxyCreator(remoteService);
    }
}

I sub-classed the default GWT ServiceInterfaceProxyGenerator implementation so that the only thing I had to do was override the one method that returns the ProxyCreator. In this case, it returns a custom implementation that will be responsible for reassigning the super class of all generated proxies:

package org.rhq.enterprise.gui.coregui.user.rebind.rpc;

import com.google.gwt.core.ext.typeinfo.JClassType;
import com.google.gwt.user.rebind.rpc.ProxyCreator;

import org.rhq.enterprise.gui.coregui.client.util.rpc.TrackingRemoteServiceProxy;

public class TrackingProxyCreator extends ProxyCreator {
    public TrackingProxyCreator(JClassType type) {
        super(type);
    }

    @Override
    protected Class getProxySupertype() {
        return TrackingRemoteServiceProxy.class;
    }
}

Note: make sure to keep your custom ProxyCreator and ProxyGenerator classes outside of the client source directory, because these classes cannot be cross-compiled to JavaScript. If you forget to do this and attempt to compile anyway, module validation will fail with a message like: “[ERROR] Line XX: No source code is available for type com.google.gwt.user.rebind.rpc.ProxyCreator; did you forget to inherit a required module?”
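For illustration, here is a sketch of how the module descriptor might keep the two apart (the module contents shown are assumptions based on a typical GWT layout); only classes under the client source path get cross-compiled, so the rebind package stays outside the compiler's reach:

<module>
    <inherits name="com.google.gwt.user.User"/>

    <!-- only classes under the 'client' package are translated to JavaScript -->
    <source path="client"/>

    <!-- the rebind classes run in the compiler's JVM, never in the browser -->
    <generate-with class="org.rhq.enterprise.gui.coregui.user.rebind.rpc.TrackingServiceInterfaceProxyGenerator">
        <when-type-assignable class="com.google.gwt.user.client.rpc.RemoteService"/>
    </generate-with>
</module>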

The TrackingRemoteServiceProxy is responsible for the actual RPC instrumentation. This class will wrap the original asynchronous callback method in one that performs statistics tracking.

package org.rhq.enterprise.gui.coregui.client.util.rpc;

import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.impl.RemoteServiceProxy;
import com.google.gwt.user.client.rpc.impl.Serializer;
import com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.ResponseReader;

public class TrackingRemoteServiceProxy extends RemoteServiceProxy {
    ...
    @Override
    protected <T> RequestCallback doCreateRequestCallback(ResponseReader reader, 
        String methodName, int invocationCount, AsyncCallback<T> callback) {
        RequestCallback original = 
            super.doCreateRequestCallback(reader, methodName, invocationCount, callback);
        TrackingRequestCallback trackingCallback = 
            new TrackingRequestCallback(methodName, original);
        RPCTracker.getInstance().register(trackingCallback);
        return trackingCallback;
    }

}
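The RPCTracker referenced above is beyond the scope of this post. Purely for illustration, a minimal sketch of such a registry might look like the following (hypothetical, not the actual RHQ class; GWT client code is single-threaded, so a plain collection suffices, and callbacks could notify complete() when they finish):

package org.rhq.enterprise.gui.coregui.client.util.rpc;

import java.util.HashSet;
import java.util.Set;

public class RPCTracker {

    private static final RPCTracker INSTANCE = new RPCTracker();

    // callbacks for RPC requests that are currently in flight
    private Set<TrackingRequestCallback> inFlight =
        new HashSet<TrackingRequestCallback>();

    public static RPCTracker getInstance() {
        return INSTANCE;
    }

    public void register(TrackingRequestCallback callback) {
        inFlight.add(callback);
    }

    public void complete(TrackingRequestCallback callback) {
        inFlight.remove(callback);
    }

    // could drive a global "loading" indicator in the UI
    public int getInFlightCount() {
        return inFlight.size();
    }
}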

The implementation of the TrackingRequestCallback is fairly straightforward – all it has to do is notify the class responsible for statistics tracking each time a response is received for any service, whether that invocation was successful or returned an error:

package org.rhq.enterprise.gui.coregui.client.util.rpc;

import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.Response;

public class TrackingRequestCallback implements RequestCallback {

    private String name;
    private long start = System.currentTimeMillis();
    private RequestCallback callback;

    public TrackingRequestCallback(String name, RequestCallback callback) {
        this.name = name;
        this.callback = callback;
    }

    public void onError(Request request, Throwable exception) {
        RemoteServiceStatistics.record(getName(), getAge());
        callback.onError(request, exception);
    }

    public void onResponseReceived(Request request, Response response) {
        RemoteServiceStatistics.record(getName(), getAge());
        callback.onResponseReceived(request, response);
    }

    public String getName() {
        return name;
    }

    public long getAge() {
        return System.currentTimeMillis() - start;
    }
}

Finally we have RemoteServiceStatistics, a statically accessed class that is responsible for keeping track of all invocations of all services across the application. When the statistics are queried, it can use this full audit trail of invocations to generate things like min/max/avg running times and standard deviations:

package org.rhq.enterprise.gui.coregui.client.util.rpc;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.rhq.enterprise.gui.coregui.client.util.rpc.RemoteServiceStatistics.Record.Summary;

public class RemoteServiceStatistics {

    public static class Record {
        private String serviceName;
        private String methodName;
        private List<Long> data = new ArrayList<Long>();

        private Record(String remoteService, long millis) {
            // remoteService format "{ServiceName}_Proxy.{MethodName}"
            this.serviceName = remoteService.substring(0, remoteService.indexOf("_Proxy"));
            this.methodName = remoteService.substring(remoteService.indexOf('.') + 1);
            this.data.add(millis);
        }

        private void record(long millis) {
            this.data.add(millis);
        }

        public Summary getSummary() {
            return new Summary(serviceName, methodName, data);
        }

        public static class Summary {
            public final String serviceName;
            public final String methodName;
            public final int count;
            public final long slowest;
            public final long average;
            public final long fastest;
            public final long stddev;

            private Summary(String serviceName, String methodName, List<Long> data) {
                this.serviceName = serviceName;
                this.methodName = methodName;
                this.count = data.size();

                if (!data.isEmpty()) {
                    int i = 0;
                    long total = 0, slow = 0, fast = 0;
                    for (long next : data) {
                        if (i++ == 0) {
                            total = slow = fast = next;
                        } else {
                            total += next;
                            if (next > slow) {
                                slow = next;
                            } else if (next < fast) {
                                fast = next;
                            }
                        }
                    }
                    slowest = slow;
                    fastest = fast;
                    double avg = (total) / (double) count;

                    double sumOfSquares = 0;
                    for (long next : data) {
                        sumOfSquares += Math.pow(next - avg, 2);
                    }
                    stddev = (long) Math.pow(sumOfSquares / count, 0.5);
                    average = (long) avg;
                } else {
                    slowest = average = fastest = stddev = 0;
                }
            }
        }

    }

    private static Map<String, Record> statistics = new HashMap<String, Record>();

    private RemoteServiceStatistics() {
        // static access only
    }

    public static void record(String remoteService, long millis) {
        Record record = statistics.get(remoteService);
        if (record == null) {
            record = new Record(remoteService, millis);
            statistics.put(remoteService, record);
        } else {
            record.record(millis);
        }
    }

    public static Summary get(String remoteService) {
        Record stat = statistics.get(remoteService);
        if (stat == null) {
            stat = new Record(remoteService, 0);
        }
        return stat.getSummary();
    }

    public static List<Summary> getAll() {
        List<Summary> stats = new ArrayList<Summary>();

        for (String remoteService : statistics.keySet()) {
            stats.add(get(remoteService));
        }

        return stats;
    }
}
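Rendering these summaries is then just a matter of iterating getAll(). For example, a minimal dump to the GWT log (a sketch; assumes client code that imports RemoteServiceStatistics and com.google.gwt.core.client.GWT):

for (RemoteServiceStatistics.Record.Summary summary : RemoteServiceStatistics.getAll()) {
    // GWT.log(String, Throwable) writes to the development console
    GWT.log(summary.serviceName + "." + summary.methodName
        + ": count=" + summary.count
        + ", fastest=" + summary.fastest + "ms"
        + ", average=" + summary.average + "ms"
        + ", slowest=" + summary.slowest + "ms"
        + ", stddev=" + summary.stddev, null);
}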

I wrote a quick test page in the user interface to expose the statistics data. Below is a cropped summary of what you’ll see:

Putting the statistics into a table with sortable columns makes analyzing the data much simpler. Below is a quick list of the indicators that can be derived from the data set above:

  • ResourceBossGWTService.getInventorySummaryForLoggedInUser – high average and low variance implies it is often (or always) slow; look for possible improvements such as query rewrites, database indexes, or pre-computed results.
  • AlertGWTService.findAlertCountByCriteria/findAlertsByCriteria – high variance; look for either database table contention issues that would lead to periodic query slowness, or eagerly/conditionally loaded data that is not necessary to render the view.
  • TagGWTService.findTagReportCompositesByCriteria – since this is on the critical path (invoked every time a user is looking at a resource / group / bundle / etc.), we must ensure page rendering isn't contingent on this data; in other words, it should be OK to lazy load this data onto the view.

Once you have a prioritized list of services that might require optimization, open up Speed Tracer to see just how these network figures relate to pure client-side figures such as parsing, layout, style calculations, timer firings, events, etc.

It may be the case that most of the perceived end-user time is spent taking the RPC results and rendering the DOM, not in the network request itself. If that’s the case, then Speed Tracer will very likely help you to pinpoint the exact client-side issue(s) you’re dealing with. On the other hand, if you determine that the bottleneck is on the server, you can follow the steps in my other blog entry on Hibernate Performance Monitoring to help get to the bottom of those issues.

Keep in mind, this solution is only meant to be a starting point. It doesn't tell you precisely why the services are slow, but it does give you an indication of which are the worst offenders and how frequently they are invoked. One thing I'll probably be implementing in the near future is payload response tracking. Tracking this data and correlating it with the rest of the remote service statistics would help distinguish between network slowness caused by slow queries on the server versus too much data being returned in the response to the client. I encourage you to experiment with this solution, and expose other statistics that you feel are useful in getting a quick overview of where your GWT-based application performance bottlenecks may lie.

Written by josephmarques

November 29, 2010 at 5:18 pm

Posted in gwt


RHQ’s Powerful New Search Facility

leave a comment »

Search was developed to enable users to gain deeper insight, more quickly, into their enterprise by supporting a sophisticated method of querying system state. Some of the notable features that this powerful facility brings are:

  • Arbitrarily Complex Search Expressions
  • Search Suggestions / Auto-Completion / Search Assist
  • Results Matching / Highlighting
  • User Saved Searches

Take a look at the end-user docs (with screen shots) here. Or, if you want to interact with the Search facilities up close & personal, you can download the latest RHQ binaries here.

Considering the SearchBar was written with the primary purpose of being extensible, it will surely become a much more pervasive concept across RHQ in the future. So, let me know what you liked and/or what you would improve, especially if you can think of other features you’d like to see added. You can either post back here or subscribe to the RHQ developer mailing list.

Written by josephmarques

August 4, 2010 at 4:01 am

Posted in rhq


GWT Compilation Performance

with 3 comments

A few weeks ago I noticed that the coregui module of RHQ, which is the next generation web interface written in GWT (SmartGWT to be precise), started to have its compilation slow down…noticeably. Not only did the overall time to compile the module increase, but during the build my box seemed to be more or less locked up. Even mouse gestures were choppy. So…I decided to investigate.

I started by writing a script that would compile the coregui module for me. It was important to gauge how the different arguments to the maven gwt:compile goal (http://mojo.codehaus.org/gwt-maven-plugin/compile-mojo.html) would affect the system during the build. After reading through the documentation, I decided that ‘localWorkers’ and ‘extraJvmArgs’ were the two most important parameters to test different permutations for. Here is the script I came up with:


#!/bin/sh

# INTERVAL - the minimum amount of memory to use for gwt:compile
# STEPS    - the number of increments of interval
#            e.g. 4 steps of 256 interval = tests of 256, 512, 768, 1024
#            e.g. 5 steps of 200 interval = tests of 200, 400, 600, 800, 1000
# RUNS     - number of iterations at each (INTERVAL,STEP)

INTERVAL=256
STEPS=4
RUNS=10

declare -a AVERAGES
declare -a VARIANCES

function build
{
    workers=$1
    memory=$2

    dirName="workers-${workers}-memory-${memory}"
    vmArgs="-Xms${memory}M -Xmx${memory}M"

    mvnArgs="-Pdev -DskipTests -o"
    gwtWorkers="-Dgwt-plugin.localWorkers=\"$workers\"" 
    gwtJvmArgs="-Dgwt-plugin.extraJvmArgs=\"$vmArgs\""
    gwtArgs="$gwtWorkers $gwtJvmArgs"

    cmd="mvn $mvnArgs $gwtArgs gwt:clean install"
    echo "Executing $cmd"

    total_seconds=0
    rm -rf "target/runs/$dirName"
    mkdir -p "target/runs/$dirName"
    declare -a raw
    for run in `seq $RUNS`
    do
        outputFile="target/runs/$dirName/output.${run}.log"

        before=$(date +%s)
        eval "$cmd" > $outputFile
        after=$(date +%s)

        elapsed_seconds=$(($after - $before))
        raw[$run]=$elapsed_seconds
        total_seconds=$(($total_seconds + $elapsed_seconds))

        echo "Run $run Took $elapsed_seconds seconds"
        echo "Execution details written to $outputFile"
    done 

    average_seconds=$(expr $total_seconds / $RUNS)

    let "index = (($workers - 1) * 4) + ($memory / $INTERVAL) - 1"
    AVERAGES[$index]=$average_seconds

    sum_of_square_deltas=0
    for run in `seq $RUNS`
    do
       let "sum_of_square_deltas += (${raw[$run]} - $average_seconds)**2"
    done
    let "variance = $sum_of_square_deltas / ($RUNS - 1)"
    VARIANCES[$index]=$variance

    echo "Run Total: $total_seconds seconds"
    echo "Run Average: $average_seconds seconds"
    echo "Run Variance: $variance"
}

function run
{
    for workers in `seq 4`
    do
        for offset in `seq $STEPS`
        do
            memory=$(($INTERVAL * $offset))
            build $workers $memory
        done
    done
}

function print
{
    echo "Results"
    printf "          "
    for headerIndex in `seq 4`
    do 
        printf "% 12d" $headerIndex
    done
    echo ""

    for workers in `seq 4`
    do
        let "memory = $workers * $INTERVAL"
        printf "% 10d" $memory
        for offset in `seq $STEPS`
        do
            let "index = ($workers - 1) * 4 + ($offset - 1)"
            printf "% 6d" ${AVERAGES[index]}
            printf "% s" "("
            printf "% 4d" ${VARIANCES[index]}
            printf "% s" ")"
        done
        echo ""
    done
}

if [[ $# -ne 1 ]]
then
    echo "Usage: $0 "
    exit 127
fi

rhq_repository_path=$1
cd $rhq_repository_path/modules/enterprise/gui/coregui

echo "Performing timings..."
run
print
echo "Performance capture complete"


In short, this will build the coregui module over and over again, passing different parameters to it for memory (JVM arguments Xms/Xmx) and threads (GWT compiler argument ‘localWorkers’). And here are the results:

 Xms/Xmx (MB)    1 worker   2 workers   3 workers   4 workers
     256       229 (6008)    124 (4)    124 (27)    124 (12)
     512        141 (193)   111 (76)    113 (82)    114 (25)
     768       201 (5154)   115 (98)    123 (57)   195 (2317)
    1024       200 (2352)   125 (83)   199 (499)   270 (298)

The columns refer to the number of ‘localWorkers’, effectively the number of concurrent compilation jobs (one for each browser/locale). The rows refer to the amount of Xms and Xmx given to each job. Each cell represents statistics for ten executions of the build using the corresponding number of localWorkers and memory parameters. The primary number represents the average time in seconds it took to compile the entire maven module. The parenthetical element represents the variance (the square of the standard deviation).
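For reference, each cell summarizes ten runs of a command like the following, which is what the script assembles (shown here for 2 localWorkers with 512MB heaps):

mvn -Pdev -DskipTests -o -Dgwt-plugin.localWorkers="2" -Dgwt-plugin.extraJvmArgs="-Xms512M -Xmx512M" gwt:clean install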

So what does this grid tell us? Actually, a lot:

  • End-to-end compilation time suffers when using a single localWorker (i.e., only one permutation of browser/locale is compiled at a time). Not only does it suffer, but it has a high variance, meaning that sometimes the build is fast, and sometimes it isn’t. This makes sense because the compilation processes are relatively independent and generally don’t contend for shared resources. This implies the compilation is a natural candidate for a concurrent/threaded solution, and thus forcing serialized semantics can only slow it down.
  • The variance for 2 or more localWorkers is generally low, except for the lower right-hand portion of the grid, which starts to increase again. This also makes sense because, in the case of 4 threads with 1GB of memory each, these concurrent jobs are exhausting the available system memory. This box only had 4GB of RAM, so all of the physical memory was used, which starts to spill over to swap, which in turn makes the disk thrash (this was verified using ‘free’ and ‘iostat’). Regardless of what other activities were occurring on the box at the time, and of how the 4 threads were interleaved, it had an effect on the build because approximately ½ GB was being swapped to my 7200rpm disk during the compilation. (Granted, I could mitigate this variance by replacing my spindles with an SSD drive, which I may end up doing, but it is nonetheless important to be mindful of the amount of memory you're giving to the entire compilation process (all worker threads) relative to the amount of available physical RAM.)
  • The 512MB row has values which are relative minimums with respect to the other timings for the same number of localWorkers given different memory parameters. This indicates that 512 is the sweet spot in terms of how much memory is needed to compile the modules. I would surmise that the cost of allocating more memory (256MB or 512MB for each of 2, 3, or 4 worker threads) and/or the cost of GC largely accounts for the slightly decreased performance with other memory settings. And then, as mentioned above, with lots of threads and very high memory, swapping to disk starts to dominate the performance bottleneck.
  • Aside from end-to-end running time, it's also important on a developer system to be able to continue working while things are building in the background. It should be noted that each worker thread was pegging the core it ran on. For that reason, I'm going to avoid using 3 or 4 workers on my quad-core box, because my entire system crawls when all cores are pegged simultaneously…which is exacerbated when high total memory combined with 3 or 4 workers causes swapping and disk thrashing on top of all my cores being pegged.

Conclusions:

On my quad-core box with 4 gigs of RAM, the ideal combination is 2 localWorkers with 512MB. The end-to-end build will consistently (recall the low variance) complete just as quickly as any other combination (recall the low average running time), and it won't peg my box, because only half of my cores will be used for the GWT compilation, leaving the other half for the other processes I have running…Eclipse, Thunderbird, Chrome, etc.

So what happens now if I move this script over to a larger box? This new box is also a quad core, but has a faster processor and more than twice the amount of RAM (9 gigs). From everything deduced thus far, can you make an intelligent guess as to what might happen?

….

I anticipated that with more than twice the amount of RAM, the worst-case permutation (4 localWorkers with 1GB of Xms/Xmx each) would cause little to no swapping. And with a more powerful processor, the average running times would come down. And that is exactly what happened:

 Xms/Xmx (MB)    1 worker   2 workers   3 workers   4 workers
     256         153 (24)     98 (0)      96 (1)      95 (1)
     512         150 (23)     96 (1)      96 (2)      95 (1)
     768         149 (32)     97 (1)      96 (1)      95 (0)
    1024         149 (24)     96 (0)      96 (0)      95 (1)

With nearly non-existent variance, it’s easy to see the net effect of having plenty of physical resources. When the processes never have to go to swap, they can execute everything within physical RAM, and they have much more consistent performance profiles.

As you can see, adding more cores does not necessarily yield large decreases for the end-to-end compilation time. This can be explained because (at the time of this writing) RHQ is only compiling 6 different browser/locale permutations. As more localizations are added in the future, the number of permutations will naturally increase, which will make the effect of using more cores that much more dramatic (in terms of decreasing end-to-end compilation times).

Unfortunately, the faster processor on the larger box still got pegged when compiling coregui. So since 3 or 4 localWorkers doesn't result in dramatically improved end-to-end compilation times, it's still best to use 2 localWorkers on this larger box. I could, however, get away with dropping the memory requirements to 256MB for each worker, since the faster hardware seems to even out the performance profile of workers given different memory parameters, holding the number of localWorkers constant.

Lessons learned:

  • In any moderately sized GWT-based web application, the compilation may take a considerable portion of each core it executes on, perhaps even pegging them at 100% for a portion of the compile. Thus, if you want your development box to remain interactive and responsive during the compilation, make sure to set the localWorkers parameter to something less than the number of cores in your system.
  • Don’t throw oodles of memory at the GWT compilation process and expect to get a speedup. Too much Xms/Xmx will cause the worker threads to spill out of main memory and into swap, which can have a detrimental affect on the compilation process itself as well as other tasks on the box since I/O requests explode during that time. Modify the script presented here to work with your build, and obtain reasonable timings for your GWT-based web application at different memory levels. Then, only use as much RAM as is required to make the end-to-end time reasonable while avoiding swap.
  • Provide reasonably conservative default values for localWorkers and Xms/Xmx so as not to give a negative impression to your project’s contributors. Err on the side of a slow build rather than a build that crushes the machine it runs on.
  • Parameterize your builds so that each developer can easily override (in a local configuration file) his or her preferred values for localWorkers and Xms/Xmx. Provide a modified version of the script outlined here, to allow developers to determine the ideal parameter values on each of their development boxes.

Other tips / tricks to keep in mind:

  • Only compile your GWT application for the browser/locale permutation you’re currently developing and testing on. There’s no need to compile to other languages/locales if you’re only going to be testing one of them at a time. And there’s no need to compile the applications for all browser flavors if the majority of early testing is only going to be performed against one of them.
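For instance, the module descriptor used during development can pin the compile down to a single permutation; a sketch (the property values here are examples):

<!-- compile for a single browser flavor... -->
<set-property name="user.agent" value="gecko1_8"/>

<!-- ...and a single locale -->
<set-property name="locale" value="en"/>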

Recall that at the start of this article I was investigating why my box seemed to be non-responsive during the compilation of coregui. Well, it turns out that one developer inadvertently committed very high memory settings, which as you've now seen can cause large amounts of swapping when localWorkers is also high. The fix was to lower the default localWorkers to 2, and reduce the memory defaults to 512MB. As suggested above, both values are parameterized in the current RHQ build, and so developers can easily override either or both to their liking.
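A sketch of what that parameterization might look like in the gwt-maven-plugin configuration (the property names mirror the ones used by the timing script above; the exact RHQ pom structure may differ):

<properties>
    <!-- conservative defaults; override with -Dgwt-plugin.localWorkers=... etc. -->
    <gwt-plugin.localWorkers>2</gwt-plugin.localWorkers>
    <gwt-plugin.extraJvmArgs>-Xms512M -Xmx512M</gwt-plugin.extraJvmArgs>
</properties>

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>gwt-maven-plugin</artifactId>
    <configuration>
        <localWorkers>${gwt-plugin.localWorkers}</localWorkers>
        <extraJvmArgs>${gwt-plugin.extraJvmArgs}</extraJvmArgs>
    </configuration>
</plugin>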

Written by josephmarques

July 30, 2010 at 1:42 am

Posted in gwt, java


Performance Monitoring using Hibernate

with 13 comments

Performance monitoring is, by its very nature, a slippery slope. Once you start looking for inefficiencies in code, it’s easy to get carried away wanting to optimize every last line for supreme efficiency. It’s thus important to balance the value of an optimization versus the time spent investigating the proper fix, the effort and difficulty required to implement it, and its importance relative to other things on the roadmap.

Every environment, every team, and every piece of software is different – so I won’t dare try to formulate any hard and fast rules for what is an appropriate or inappropriate amount of optimization. Suffice it to say, before any decisions can be made as to whether an optimization should be implemented, the nature of the problem must be understood. Below I will show how to use functionality built into Hibernate to help understand where the performance issues in your application lie as well as the nature of each.

—–

The first thing to do is to turn on Hibernate Statistics.
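In a JPA/Hibernate deployment this usually amounts to a single configuration property; for example, in persistence.xml (the property name is the same in hibernate.cfg.xml):

<property name="hibernate.generate_statistics" value="true"/>

It can also be flipped at runtime through the Statistics API itself, as sketched a bit further below.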

Once enabled, this service will start recording metrics on a per session factory basis. However, we’re going to want to collect “windows” of information from the statistics object. These windows will help us answer questions such as how many entities were loaded during a particular session bean method call or how many database roundtrips were required to render an entire page of the UI.

So let’s implement a kind of “stop watch” that will take a snapshot of some of the more interesting metrics offered through the statistics API. We’ll record the values of the metrics at two different points in time, and then calculate the delta metrics for hand-off to a logger or some analyzer.

public class HibernateStatisticsStopWatch {

    private org.hibernate.stat.Statistics stats;

    // some interesting metrics
    long queryExecutions;
    long transactions;
    long entityLoads;
    long connects;
    long time;

    public HibernateStatisticsStopWatch() {
        this.stats = // get statistics service from the MBeanServer
    }

    public void start() {
        queryExecutions = -stats.getQueryExecutionCount();
        transactions = -stats.getTransactionCount();
        entityLoads = -stats.getEntityLoadCount();
        connects = -stats.getConnectCount();
        time = -System.currentTimeMillis();
    }

    public void stop() {
        queryExecutions += stats.getQueryExecutionCount();
        transactions += stats.getTransactionCount();
        entityLoads += stats.getEntityLoadCount();
        connects += stats.getConnectCount();
        time += System.currentTimeMillis();
    }

    // getter methods for various delta metrics

    public String toString() {
        return "Stats" 
            + "[ queries=" + queryExecutions 
            + ", xactions=" + transactions 
            + ", loads=" + entityLoads 
            + ", connects=" + connects 
            + ", time=" + time + " ]";
    }
}
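How the constructor gets hold of the Statistics object depends on your deployment; RHQ pulls it from the MBeanServer, but with direct access to the SessionFactory the simplest approach is something like this sketch:

// one way to obtain the Statistics object, assuming direct access to the
// Hibernate SessionFactory (e.g. injected, or looked up from JNDI)
org.hibernate.stat.Statistics stats = sessionFactory.getStatistics();
stats.setStatisticsEnabled(true); // in case it wasn't enabled via configuration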

Since the idea is to instrument many different parts of the codebase with these stopwatches, things could get rather hairy and difficult to manage over time. That's why it's prudent to create a central authority to manage them. The centralization will make it easy to turn the logging on/off across the system, or even enable/disable the stopwatches themselves.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class HibernatePerformanceMonitor {
    private static final Log log = LogFactory.getLog(HibernatePerformanceMonitor.class);

    private ConcurrentMap<Long, HibernateStatisticsStopWatch> watches;
    private AtomicLong idGenerator = new AtomicLong(0);
    private static HibernatePerformanceMonitor singleton = 
        new HibernatePerformanceMonitor();

    private HibernatePerformanceMonitor() {
        watches = new ConcurrentHashMap<Long, HibernateStatisticsStopWatch>();
    }

    public static HibernatePerformanceMonitor get() {
        return singleton;
    }

    public long start() {
        if (log.isDebugEnabled()) {
            HibernateStatisticsStopWatch watch = new HibernateStatisticsStopWatch();
            long id = idGenerator.incrementAndGet();
            watches.put(id, watch);
            watch.start();
            return id;
        }
        return 0;
    }

    public void stop(long id, String callingContext) {
        if (log.isDebugEnabled()) {
            HibernateStatisticsStopWatch watch = watches.remove(id);
            if (watch == null) {
                // could happen if debugging was enabled after start() was called
                return;
            }
            watch.stop();
            String caller = (callingContext == null ? "(unknown)" : callingContext);
            log.debug(watch.toString() + " for " + caller);
        }
    }
}

With these two items in place – the stopwatch class and the centralized, thread-safe manager for them – we’re now ready to instrument the application. If you want to monitor the performance of any particular block of code, all you have to do is:

long correlationId = HibernatePerformanceMonitor.get().start();
...
// some XYZ code block that appears not to perform well
...
HibernatePerformanceMonitor.get().stop(correlationId, "XYZ Code Block");

This, however, could get tedious if you had to instrument a few dozen places like this all over the code base. More problematic is the fact that you have to be nearly clairvoyant to know where issues might be without either seeing significant slowness in your application during normal QA, seeing performance degradation during scale testing, or having customer-filed support tickets.

Therefore, it would be nice to get an overall profile of the application first, before digging down into specific parts and wrapping smaller code blocks to see how they perform in isolation. Since RHQ's backend is written as a set of stateless session bean services, I can simply add an interceptor to get metrics for all method invocations across the entire system:

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class PerformanceMonitorInterceptor {
    @AroundInvoke
    public Object monitorHibernatePerformance(InvocationContext context) 
    throws Exception {
        String callingContext = 
                context.getMethod().getDeclaringClass().getSimpleName() + "." + 
                context.getMethod().getName();
        long monitorId = HibernatePerformanceMonitor.get().start();
        Object results = context.proceed();
        HibernatePerformanceMonitor.get().stop(monitorId, callingContext);
        return results;
    }
}

Make sure you add this interceptor declaration to your ejb-jar.xml file:

<ejb-jar>
   ...
   <assembly-descriptor>
      <interceptor-binding>
         <ejb-name>*</ejb-name>
         <interceptor-class>your.package.PerformanceMonitorInterceptor</interceptor-class>
      </interceptor-binding>
   </assembly-descriptor>
</ejb-jar>

Once this is all deployed, all you have to do is start clicking around your application and the performance information will start flowing to the server logs. Below is a snapshot of what is produced when visiting the inventory browser in RHQ:

01:04:03,165 DEBUG [HibernatePerformanceMonitor] 
    Stats[ queries=2, xactions=0, loads=0, connects=2, time=2 ]
    for ResourceGroupManagerBean.getResourceGroupCountByCategory 
01:04:03,166 DEBUG [HibernatePerformanceMonitor] 
    Stats[ queries=2, xactions=0, loads=0, connects=2, time=1 ]
    for GroupDefinitionManagerBean.getGroupDefinitionCount 
01:04:03,167 DEBUG [HibernatePerformanceMonitor] 
    Stats[ queries=13, xactions=0, loads=0, connects=13, time=12 ] 
    for ResourceBossBean.getInventorySummary 

Although these metrics will help you to determine if any individual SLSB method is a bottleneck, it is difficult to know how these methods are composed together to render larger functional chunks of your application. Since RHQ uses JSF/Facelets, we can capture the end-to-end cost of rendering a View with a ViewHandler:

import java.io.IOException;

import javax.faces.FacesException;
import javax.faces.component.UIViewRoot;
import javax.faces.context.FacesContext;

import com.sun.facelets.FaceletViewHandler;

public class PerformanceMonitorViewHandler extends FaceletViewHandler {
    ...
    @Override
    public void renderView(FacesContext context, UIViewRoot viewToRender) 
    throws IOException, FacesException {
        String callingContext = "URL " + viewToRender.getViewId();
        long monitorId = HibernatePerformanceMonitor.get().start();
        super.renderView(context, viewToRender);
        HibernatePerformanceMonitor.get().stop(monitorId, callingContext);
    }
}

Make sure you add this view handler declaration to your faces-config.xml file:

<faces-config ...>
   <application>
      <view-handler>your.package.PerformanceMonitorViewHandler</view-handler>
   </application>
</faces-config>

Now as you click around the application, the URL performance metrics appear at the end of each block of method-level metrics:

...
01:04:03,167 DEBUG [HibernatePerformanceMonitor] 
    Stats[ queries=13, xactions=0, loads=0, connects=13, time=12 ] 
    for ResourceBossBean.getInventorySummary 
...
2010-03-24 03:32:51,674 DEBUG [HibernatePerformanceMonitor] 
    Stats[ queries=19, xactions=27, loads=46, connects=64, time=622 ] 
    for URL /rhq/inventory/browseResources.xhtml 

OK, great. We have all of these performance metrics. Now what do they mean? How does one sift through the hundreds (if not thousands) of service calls to pinpoint application bottlenecks? Well…instead of paging through all of these raw metrics, it's a lot easier to come up with a set of performance “equations”. Once you formulate these application-specific patterns to look for potential issues, you can trim down the log file by recording only the instances where certain thresholds were violated.

For instance, you might want to look for “slowness” in your application, which could be defined as some URL that takes longer than 3 seconds to load end-to-end, or an individual method that takes more than 500 milliseconds to return. This is easily obtained because the stopwatch records the amount of time between the calls to start() and stop(), and we’ve instrumented our codebase to wrap all SLSB method calls as well as all JSF views.

Another thing to look out for is the classic “N+1 selects” problem. Luckily, instances of this can be found by comparing the ratio of connects to query executions and entity loads. If you have many more connects than queries / loads (say 5 connects or more for each query / load) it’s likely the method in question suffers from the N+1 problem and is making many more trips to the database than should be necessary.

Fortunately, the “N+1 selects” problem is not the only issue that can be detected using derivative metrics. It's also possible to find excessive transactions by comparing the ratio of transactions to query executions / entity loads. If you find many more transactions than queries / loads, chances are you have a method already inside a transaction that is calling into other methods that start their own transactions. This may in fact be the intended semantic, but not always. Regardless, if the inner methods are called in tight loops, the number of transactions required to carry out a single business workflow can explode. So obtaining this derivative metric can help determine whether your SLSB methods need to be refactored to carry out the business workflows in more appropriately sized transactional chunks. In general, though, it's a good idea to be on the lookout for any single SLSB method that uses excessive transactions, regardless of the number of queries / loads.

All of these rules can be captured in just a few lines of code. Instead of logging every single method / URL in the HibernatePerformanceMonitor, we can instead narrow our focus by logging only those invocations that violate our rules:

public class HibernatePerformanceMonitor {
    ...
    public void stop(long id, String logPrefix) {
        if (log.isDebugEnabled()) {
            HibernateStatisticsStopWatch watch = watches.remove(id);
            if (watch == null) {
                return; // could happen if debugging was turned on and the start() call was already skipped
            }
            watch.stop();
            if (watch.getTransactions() > 10) {
                debug(watch, "(too many xactions?)", logPrefix);
            }
            if (watch.getQueryExecutions() != 0) {
                if ((watch.getConnects() / (double) watch.getQueryExecutions()) >= 5.0) {
                    // might indicate need for LEFT JOIN FETCHes
                    debug(watch, "(N+1 issue?) ", logPrefix); 
                }
                if ((watch.getTransactions() / (double) watch.getQueryExecutions()) >= 5.0) {
                    // might indicate excessive @REQUIRES_NEW
                    debug(watch, "(xaction nesting?) ", logPrefix); 
                }
            }
            if (watch.getTime() > 3000) {
                // might indicate inefficient query or table contention
                debug(watch, "(slowness?) ", logPrefix); 
            }
        }
    }

    private void debug(HibernateStatisticsStopWatch watch, String cause, String logPrefix) {
        String callingContext = " for " + (logPrefix == null ? "(unknown)" : logPrefix);
        log.debug(watch.toString() + cause + callingContext);
    }
}

It should be noted that these equations are only a starting point. First, the thresholds they use must be tweaked to more appropriately match the performance targets for your application…but even then they will not find every bottleneck in the system. Also, just because some of your business methods or UI pages violate these thresholds does not mean they are necessarily candidates for improvement. It may be the case that a particular page is known to consistently take 20 seconds to render, but that might be OK if it represents an executive report which aggregates lots of data across many subsystems. Another page might seem to use excessive transactions, but that might be OK if it is a system dashboard whose individual portlets are isolated from one another and request data independently.

So I leave it up to the reader to build on this solution and come up with increasingly sophisticated rules that operate off of application-specific thresholds and more accurately track potential issues within their environment.

Written by josephmarques

March 24, 2010 at 6:40 pm

Posted in hibernate


Git – Local Tracking Branches

leave a comment »

Whenever I clone a new copy of the RHQ repository from git, I’m placed in a local tracking branch of master. However, I rarely (if ever) want to work directly in master. Instead, I want to work in one of the numerous features-branches we have for the project.

I could create local tracking branches for the lines of development I’m interested in following, but that would be tedious considering there are nearly 3 dozen to choose from these days. Not only that, but I would need to repeat those commands each time I clone the repository (which I sometimes do so I can work in two different branches simultaneously as opposed to using ‘git stash’ and flipping back and forth between them), not to mention that I’d need to repeat those commands on all machines I work with (my home desktop, my work desktop, and my work laptop).

So I figured I would automate this by writing a quick shell script that would interrogate git, gather the lists of remote branches, and create local tracking branches automatically. I wanted to write the script such that it would set up local tracking branches for any newly created remote branches since the last time the script was run. Also, to keep things simple, I wanted the name of my local tracking branches to mirror the names of the remote branches. With those things in mind, I came up with:

#!/bin/sh

if [[ $# -ne 1 ]]
then
   echo "Usage: $0 "
   exit 127
fi

cd $1

# get all remotes except HEAD
remotes=`git branch -r | grep -v HEAD`
locals=`git branch -l`

locals_tracked=0;
total_remotes=0;
for remote_branch in $remotes 
do
   let "total_remotes++"

   # strip 'origin/' off of the remote_branch URL form
   local_branch=${remote_branch##origin/}
   
   already_had_local_branch=0
   for existing_local_branch in $locals 
   do
      if  [[ $existing_local_branch == $local_branch ]] 
      then
         already_had_local_branch=1
         break
      fi
   done

   if [[ $already_had_local_branch == 0 ]] 
   then
      git branch --track $local_branch $remote_branch
      let "locals_tracked++"
   else
      echo "Already had local branch '$local_branch'"
   fi
done

echo "Found $total_remotes remote branches"
echo "Created $locals_tracked local tracking branches"

And here is how I use it. First, I clone the RHQ repository:

[joseph@marques-redhat] git clone ssh://git.fedorahosted.org/git/rhq/rhq.git rhq-repo

Next I run my script, passing as an argument the name of the directory I just created to store my local copy of the repository:

[joseph@marques-redhat] ./git-setup-locals.sh rhq-repo

The script will show you output along the lines of:

Branch agentPlugin set up to track remote branch agentPlugin from origin.
...
Already had local branch 'master'
...

And finally print some summary information at the bottom:

Found 34 remote branches
Created 33 local tracking branches

So what is this good for? Well, it enables me to quickly bounce around multiple lines of development with ease (using ‘git checkout <branch>’) and without having to worry about whether or not I have a local tracking branch setup for that feature-branch yet.

—–

In the grand scheme of things, this single issue isn't a huge win. However, if you're not mindful of the little things, enough of them can add up over time and start to have a real effect on productivity. So I tend to automate these things sooner rather than later, so that I can forget about them and more easily focus on the code itself or the business problem at hand.

Written by josephmarques

March 17, 2010 at 3:46 pm

Posted in scm


Community Interaction in Open Source

with 12 comments

I noticed this morning, through the built-in update mechanism, that Firefox 3.6 was available for consumption (this was on my Windows box). So I quickly stopped what I was doing, clicked on the button that was already staring me in the face, and about 15 seconds later (after restarting the browser) I was up and running with the latest and greatest.

I twisted my chair around to my F12 box, switched to my running Firefox instance, and opened the Help menu to force the update immediately…only to be reminded that the Fedora distribution has explicitly disabled the built-in update feature. Duh, of course — I’m supposed to be getting my updates from yum. OK, no biggie. I whip open a terminal and type ‘yum update firefox’ only to realize that my currently installed 3.5.8 is the latest and greatest available. After a few moments, my disappointment wore off and I realized this made perfect sense. If FF 3.6 was just recently made available through the built-in mechanism, I gotta give the channels a few days (at the very least) to catch up.

Out of curiosity, though, I wanted to see if someone from the community could give me a ballpark estimate on when FF 3.6 might arrive. I hopped on freenode and joined #fedora. Here’s what happened:

me: anyone know when the firefox 3.6 update will be available through yum?
guy1: when it is built

I paused for a few moments, wondering if he was being gruff (on purpose) or just trying to correct how I phrased my question to more closely match the proper vernacular I should have used. So I tried again…

me: ok, anyone know when the firefox 3.6 update will be built?
guy2: me: enough. No.

Well, I guess I knew how the first answer was supposed to be interpreted. Nonetheless, I felt inclined to finish on a somewhat positive note…

me: great, appreciate the pleasant community responses.  have a great day!

Looking back over that short interchange, I wonder whether it was really too hard just to say “sorry, we’re not sure when that will be prioritized into official channels.” Even something along the lines of “no one is working on that currently” would have been more descriptive and helpful. If they absolutely felt the need to save on keystrokes then “unsure” would have been sufficient.

I discussed what transpired with a friend of mine, and we agreed how strange it is that open source people (not all of them, but certainly too many of them IMO) feel they have a license to be jerks. The fact that I was tersely dismissed, however, was not the most frustrating part of this interaction. I wasn’t even all that disappointed to learn that Firefox 3.6 wasn’t available on F12 (yet). No – the most disheartening part of this interchange is that I…am…Red Hat.

Despite the fact that I'll soon be entering my 5th year here working alongside Shadowman, my community experiences have largely been constrained to the Red Hat-sponsored project I work on — RHQ. Despite the daily stresses that I or any of my teammates have had to deal with internally (in our quest to put out a great release each and every time), we always try to be pleasant, helpful, and encouraging to our community through the public forums / IRC / mailing lists. As a result, I blindly assumed that others under the same umbrella would be acting similarly.

If this kind of interchange had happened with some other company, I could have easily written it off as “<those guys> are jerks.” But it stung when I realized that this is how some individuals in a Red Hat-sponsored community were acting. Whether or not guy1 or guy2 are actually employed by Red Hat is irrelevant here. If someone on my community forums, or in my community IRC, or on my community mailing list gave a response like that to a community member…actions would be taken to correct that behavior. Open source is supposed to be about the free exchange of ideas…the theory that we can compete with the big corporations because our collaboration model is superior. But if people see simple questions like “is anyone working on <foo> yet?” being dismissed, then I can’t imagine they would feel all that comfortable about asking their own questions. And that’s just shameful.

Written by josephmarques

March 15, 2010 at 11:32 am

Posted in opensource

