Saturday, December 15, 2012

Android and locations - a sad story

So I am e.g. browsing Google Maps and have a GPS fix. Now I switch to Ingress or c:geo from the recently used apps list. And what happens?

The GPS-fix icon in the upper left corner turns off, the app changes, and I have to wait between 6 and 15 seconds to get a new fix, even if I did not move a centimeter.

Worse, the position during the switch-over is sometimes kilometers off, as the app uses the location from the network, which depends on the size of the network cell and the location of the nearest tower. In a location based app, that switchover can even cause additional network traffic, as e.g. Maps needs to load new tiles or c:geo runs the "nearby caches" search in an uninteresting area.

The technical background is that the application lifecycle in Android and the recommendations to save battery just don't mix.
When app one (e.g. Maps) is sent to the background, the GPS receiver is switched off; then the switchover happens, and only after onResume() has finished is it turned on again. Unfortunately the Android system itself does not seem to cache the GPS info for a little while and offer it to the next app as a potential current location, but just falls back to the network location.

A solution could be for the Android system to cache the GPS location and to add an uncertainty that grows linearly with the age of the fix. As long as the cached GPS location plus its uncertainty is more precise than the location obtained from the network, the GPS location would be used. And then, after the mentioned 6-15 seconds, real GPS data would take over again.
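If I were to sketch that selection logic, it could look like the following (purely hypothetical, the drift constant is made up):

```python
import time

# Assumed number: how fast the uncertainty of a stale GPS fix grows per second.
GPS_DRIFT_M_PER_S = 30.0

def best_location(gps_fix, network_fix, now=None):
    """Pick the location with the smaller effective uncertainty.

    Each fix is a dict with 'accuracy_m' (meters) and 'timestamp' (seconds);
    the cached GPS fix is penalized linearly with its age."""
    now = time.time() if now is None else now
    if gps_fix is None:
        return network_fix
    if network_fix is None:
        return gps_fix
    age = max(0.0, now - gps_fix['timestamp'])
    aged_accuracy = gps_fix['accuracy_m'] + GPS_DRIFT_M_PER_S * age
    return gps_fix if aged_accuracy <= network_fix['accuracy_m'] else network_fix
```

With a typical network-cell accuracy of a few hundred meters, a 10 m GPS fix would stay the preferred source for the first seconds after the app switch, which is exactly the gap described above.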

Friday, November 09, 2012

JAX-RS 2 client in RHQ (plugins)

When my fellow Red Hatter Bill Burke recently wrote about the new features in JAX-RS 2.0, I started to consider using the new Client api in tests and plugins. Luckily Bill provided a 3.0-beta-1 release of RESTEasy that I started with.

To get this going, I started with the plugin case and wrote my few lines of code in the editor. All red, as the classes were not known yet. After many iterations in the editor and the standalone plugin container, I ended up with this list of dependencies, which all need to be included in the final plugin jar:


Still I got errors that there was no provider for the media type application/json available. It turned out that the RHQ plugin container's classloader does not honor the META-INF/services/ entries (or any of the services listed there), so I had to explicitly enable them in code: client = ClientFactory.newClient();

Now the client works as intended. As there is quite a number of jars involved, I am thinking of providing an abstract REST plugin that provides the jars and the basic connection functionality, so that other plugins can build on top, just like we have done with the jmx-plugin.

I was also trying to get this to work with Jersey, but gave up somewhere in the Maven world after it had downloaded half of the Glassfish distribution.

One interesting question to me is the target group of this JAX-RS 2.0 client framework: the Jersey and RESTEasy implementations seem to grow rather large (which does not matter too much on servers or on the desktop), so Android apps are less likely to use them and will rather fall back to home-grown solutions. Given the huge number of mobile phones, this could in the end mean that nobody touches the Client framework and everybody falls back to those smaller home-grown solutions on mobile and desktop.

Thursday, October 25, 2012

REST/JAX-RS documentation generation

As I may have mentioned before :-) we have added a REST api to RHQ. And one thing we have clearly found out during development and from feedback by users is that the pure linking feature of REST may not be enough to work with a RESTful api, and that some additional documentation of the api is needed. Our Mark Little posted an article on InfoQ recently that touches the same subject.

Take this example:

Response updateSchedule(int scheduleId,
MetricSchedule in,
@Context HttpHeaders headers);

So this is a PUT request on some schedule with an id identifier where a certain MetricSchedule thing should be supplied. And then we get some Response back, which is just a generic
JAX-RS "placeholder".

Now there is a cool project called Swagger with a pretty amazing interactive UI to run REST requests in the browser against e.g. the pet store.

Still this has some limitations:
  • You need to deploy Swagger, which not all enterprises may want
  • The actual objects are not described, as I've mentioned above

Metadata via annotations

The metadata in Swagger to define the semantics of the operations and the simple parameters is defined on the Java source as annotations on the restful methods like this:

@ApiOperation(value = "Update the schedule (enabled, interval) ",
responseClass = "MetricSchedule")
@ApiError(code = 404, reason = NO_SCHEDULE_FOR_ID)
Response updateSchedule(
@ApiParam("Id of the schedule to query") @PathParam("id") int scheduleId,
@ApiParam(value = "New schedule data", required = true) MetricSchedule in,
@Context HttpHeaders headers);

which is already better, as we now know what the operation is supposed to do, that it will return an
object of type MetricSchedule and for the two parameters that are passed we also get
a description of their semantics.

REST docs for RHQ

I had been looking at how to document the stuff for some time, and after finding Swagger it became clear to me that I do not need to re-invent the annotations, but should (re)use what is there. Unfortunately the annotations were buried deep in the swagger-core module.

So I started contributing to Swagger - first by splitting off the annotations into
their own maven module so that they do not have any dependency onto other modules, which makes
it much easier to re-use them in other projects like RHQ.

Still, with the above approach the data objects like said MetricSchedule are not documented. In order to do that as well, I've now added an @ApiClass annotation to swagger-annotations that also allows documenting the data classes (a little bit like JavaDoc, but accessible from an annotation processor). So you can now do:

@ApiClass(value = "This is the Foo object",
description = "A Foo object is a place holder for this blog post")
public class Foo {
int id;
@ApiProperty("Return the ID of Foo")
public int getId() { return id; }
}

to describe the data classes.

The annotations themselves are defined in the following maven artifact:


which currently (as I write this) is only available from the snapshots repo:

<!-- TODO temporary for the swagger annotations -->
<name>Sonatype OSS Snapshot repository</name>

The generator

Based on the annotations I have written an annotation processor that analyzes the files and creates an XML document, which can then be transformed via XSLT into HTML or DocBook XML (which can in turn be transformed into HTML or PDF with the 'normal' DocBook tool chain).

Tool chain

You can find the source for the rest-docs-generator in the RHQ-helpers project on GitHub.

The pom.xml file also shows how to use the generator to create the intermediate XML file.


You can see some examples of the generated output (for a slightly older version of the generator) in the RHQ 4.5.1 release files on SourceForge, as well as in the documentation for JBoss ON, where our docs team just took the input and fed it into their DocBook tool chain almost without any change.


If you are interested to see how the toolchain is used in RHQ, you can look at the pom.xml file from the server/jar module ( search for 'REST-API' ).

Project independent

One thing I want to emphasize here is that the generator with its toolchain is completely independent from RHQ and can be reused for other projects.

Thursday, October 11, 2012

Another nice JBoss OneDayTalk is over (updated)

Yesterday I was at the OneDayTalk conference organized by the Munich JBoss User Group.

And as in the last two years this was a nice conference with an interested audience and it was good
to meet with colleagues again.

This year I was talking about "RHQ and its interfaces" - and unlike some other speakers, I enjoyed just giving my talk in German :)
My slides are available as PDF.

Update: A recording of my talk (which I held in German) is now available:

RHQ und seine Schnittstellen from Heiko W. Rupp on Vimeo.

Thanks to the Munich JBug for organizing this nice conference and for letting me present there.

Thursday, October 04, 2012

RHQ 4.5.1 released

I am pleased to announce the immediate availability of RHQ 4.5.1.
RHQ is a system for management and monitoring of resources like application servers
or databases and can be extended by writing plugins.

Actually I wanted to announce 4.5.0 a week ago, but a first user report showed an
error in the upgrade path from a previous version, so we have pulled that release
and fixed the bug along with another one and have now created a fresh 4.5.1 release.

Notable changes are:

  • Python support in the Command Line Interface (CLI)
  • Support for importing scripts in the CLI
  • Enhancements in the JBossAS7 plugin
  • Enhancements in the REST API
  • The Events tab now allows filtering by date range
  • Postgres 9.2 is now supported as backend database
  • The Sigar library has been updated.

Special thanks goes to Elias Ross and Richard Hensman for their contributions.

Maven artifacts have been uploaded to the JBoss Nexus repo and should show up on maven central soon.

You can find the full release notes, which also contain a download link, on the RHQ wiki.

This time we have included the full output from git shortlog for the commits of the release. Please tell us if this is useful for you.

Heiko on behalf of the RHQ team

Thursday, July 12, 2012

Introducing the No-op plugin for RHQ


“A no-op plugin, does that make sense?” you will ask - and indeed it makes sense when you look at recent developments inside RHQ with the REST-api and also the support for jar-less plugins.

The no-op plugin is meant to support jar-less plugins that are written to define ResourceTypes on the server along with their metrics in order to be usable via the REST api, but where no resources are supposed to be found via the classic java-agent.

A concrete use case is the work our GSoC student Krzysztof is doing by bridging monitoring data from CIMmons deployed e.g. in RHEL servers to RHQ resources. Here we have the (possible) need to define resource types that will only be fed via the REST api.

How do I use this?

Basically you write a plugin descriptor (either in the classical way, which you then put into a plugin jar, or as a file named *-rhq-plugin.xml) and then deploy it into the server. The contents of this descriptor then refer to the No-op plugin - first in the <plugin> element's package declaration:

<plugin name="bar-test"
displayName="Jar less plugin test"
description="Just testing"

The next part is then to declare a dependency on the No-op plugin to be able to use the classes:

   <depends plugin="No-op" useClasses="true"/>

And finally to use the NoopComponent for the discovery and component classes of <server>s and <service>s:

    <server name="test"


A plugin defined like this then defines metrics etc. just like every other plugin. This is the plugin info:

[Screenshot: plugin info]

And the metric definition templates:

[Screenshot: metric definition templates]

Jar-less plugins for RHQ


Mazz has written a few times about the Custom-JMX plugin (funnily enough I did not find a newer post to link to), which is basically a plugin descriptor that re-uses the classes from the base JMX plugin.

While this is very powerful, it has the small usability drawback of requiring you to wrap the descriptor into a jar file for it to be recognized by the server (and the agents).

I have now implemented BZ 741682, which allows deploying plain plugin descriptors. For this to work

  • the file needs to be called *-rhq-plugin.xml (e.g. foo-rhq-plugin.xml)
  • the classes of the plugin need to already be present on the agent

The latter can be achieved by putting a depends directive into the descriptor:

<depends plugin="JMX" useClasses="true"/>

In the following example you see the bar-rhq-plugin.xml file picked up from the $SERVER/plugins directory
(the "drop-box") and placed into the plugins directory inside the app:

09:18:55,160 INFO  [PluginDeploymentScanner] Found plugin descriptor at 
    [/im/dev-container/plugins/bar-rhq-plugin.xml] and placed it at 

Next is the transformation of this plugin descriptor into a jar file - if this is successful, the now obsolete plugin descriptor is removed.

09:18:55,403 INFO  [AgentPluginScanner] Found a plugin-descriptor at 
    creating a jar from it to be deployed at the next scan
09:18:55,411 INFO  [AgentPluginScanner] Deleted the now obsolete plugin descriptor: true

At the next scan of the deployment scanner, the scanner will pick up this generated jar and deploy it like any other plugin:

09:19:26,470 INFO  [ProductPluginDeployer] Discovered agent plugin [bar-test]
09:19:26,474 INFO  [ProductPluginDeployer] Deploying [1] new or updated agent plugins: [bar-test]
09:19:26,656 INFO  [ResourceMetadataManagerBean] Persisting new ResourceType [bar-test:test(id=0)]...
09:19:29,344 INFO  [ProductPluginDeployer] Plugin metadata updates are complete for [1] plugins: [bar-test]

The last step would then be to run plugins update on the agent to get this new plugin from the server and to deploy it into the agent.

Wednesday, June 20, 2012

RHQPocket updated

I have worked over the last days on improving RHQPocket, an Android application that serves as an example client for RHQ. Most of the features of RHQPocket will work (best) with a current version of the RHQ server (from current master).

The following is a recording I made that shows the current state.

Update on RHQPocket from Heiko W. Rupp on Vimeo.

The video has been recorded with QuickTime Player with the running emulator. I first tried this with the "classical" emulator, but it used 100% CPU (one core), and with recording turned on this was so slow that simple actions that are normally immediate would take several seconds.

My next try was to film my real handset, but too much ambient light and too many reflections made this a bad experience. After I heard that others had success with the hardware accelerated GPU support and the virtualized x86 image for the emulator, I went that route; the CPU usage of the emulator went down from 100% to around 15-20% (home screen), which made recording feasible.

You can find the full source code on GitHub - contributions and feedback are very welcome.
If you don't want to build from source, you can install the APK from here.

Thursday, June 14, 2012

RHQ REST api: Support for JSONP (updated)

I have just committed support for JSONP to the RHQ REST-api.

Update: In the first version a special media type was required. This is now removed, as jQuery seems to have issues sending this type. Also the default name for the callback parameter has been renamed to jsonp.

To use it you need to pass a parameter for the callback. Let's look at an example:

$ curl -i -u rhqadmin:rhqadmin \
http://localhost:7080/rest/1/alert?jsonp=foo

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Pragma: No-cache
Cache-Control: no-cache
Expires: Thu, 01 Jan 1970 01:00:00 CET
X-Powered-By: Servlet 2.4; JBoss-4.2.0.CR2
Content-Type: application/javascript
Transfer-Encoding: chunked
Date: Thu, 14 Jun 2012 09:25:09 GMT

foo([{"name":"test","resource":{"typeName":null, ……..])

The name of the callback parameter is jsonp and the name of the callback function to return is
foo. In the output of the server you see how the json-data is then wrapped inside foo().
The wrapping will only happen when both the right content-type is requested and the callback parameter is present.

The content type returned is then application/javascript.
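For illustration, the wrapping such a server-side filter performs boils down to something like this sketch (not RHQ's actual implementation):

```python
import json

def maybe_wrap_jsonp(payload, query_params, callback_param='jsonp'):
    """Return (body, content_type): if the callback query parameter is
    present, wrap the JSON payload in a function call and switch the
    content type to application/javascript."""
    body = json.dumps(payload)
    callback = query_params.get(callback_param)
    if callback:
        return '%s(%s)' % (callback, body), 'application/javascript'
    return body, 'application/json'
```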

Setting the name of the callback parameter

The name of the callback parameter (jsonp in the above example) can be set in web.xml:

<description>Name of the callback to use for JsonP</description>

Monday, May 28, 2012

RHQ REST api: added support for Group Definitions

I've just added some support for GroupDefinitions (aka "DynaGroups") to RHQ.

The following shows some examples:

List all definitions:

$ curl -i --user rhqadmin:rhqadmin http://localhost:7080/rest/1/group/definitions
"description":"just some random test",
"expression":["groupby resource.type.plugin","groupby"],
"expression":["resource.type.category = PLATFORM","groupby"],

Get a single definition by id:

$ curl -i --user rhqadmin:rhqadmin http://localhost:7080/rest/1/group/definition/10002
"expression":["resource.type.category = PLATFORM","groupby"],

You see in the above examples that the actual expression is encoded as a list with each line being an item in the list. The recalculation interval needs to be given in milliseconds.

Delete a definition (by id):

$ curl -i --user rhqadmin:rhqadmin http://localhost:7080/rest/1/group/definition/10031 -X DELETE

Create a new definition:

$ curl -i --user rhqadmin:rhqadmin http://localhost:7080/rest/1/group/definitions \
-HContent-Type:application/json -HAccept:application/json -X POST \
-d '{"name":"test1","description":"Hello","expression":["groupBy"]}'
HTTP/1.1 201 Created
Location: http://localhost:7080/rest/1/group/definition/10041

For creation a name is required. The location of the created group definition is returned in the header of the response.

And finally to update a definition:

curl -i --user rhqadmin:rhqadmin http://localhost:7080/rest/1/group/definition/10041?recalculate=true \
-HContent-Type:application/json -HAccept:application/json -X PUT \
-d '{"name":"test4","description":"Hello","expression":["groupBy"]}'

By passing the query-param recalculate=true we can trigger a re-calculation of the groups defined by this group definition.
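If you script against this api, building the JSON body is trivial; here is a small Python sketch (the helper is mine, it only mirrors the fields used in the curl examples of this post):

```python
import json

def group_definition_payload(name, description='', expressions=None):
    """Build the JSON body for creating or updating a group definition.
    A name is required; the expression is a list with one line per item."""
    if not name:
        raise ValueError('a name is required for creation')
    return json.dumps({'name': name,
                       'description': description,
                       'expression': expressions or []})
```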

Wednesday, May 09, 2012

RHQ 4.4 released

I am proud to announce the immediate availability of RHQ 4.4

As before a lot of work has gone into this release:
  • Availability now knows a type of "disabled". This allows you to mark resources that are in maintenance, or non-connected network interfaces, so that they do not show up as down and do not create false alerts. This also includes the possibility for plugins to request that a resource be marked as enabled or disabled
  • Faster availability reporting
  • Faster availability checking in the agent, that is also less bursty than in the past
  • Plugins can now request an availability check for a resource
  • Alerting has been improved: it is now possible to react on availability being in a certain state for some period of time (see the release notes for a longer explanation)
  • The JBoss AS 7 plugin has been massively improved
  • Denis Krusko has provided some initial Russian translations of the UI
  • Reports like Suspect Metrics, Recent Operations or Recent Drift can now be exported in CSV format to e.g. post-process them in Open/LibreOffice.

Of course we have fixed many small bugs too and made tweaks to the UI etc.

One very cool feature of this release is that the maven artifacts are already uploaded to the JBoss Nexus repository ;-)

Please check out the release notes, where you also find the download link. Also check out the RHQ 4.3 release notes, as 4.3 was more of a silent release, which nevertheless brought enhancements in the UI (update of GWT/SmartGWT) and many bug fixes.

If you want to get a quick overview on the ways to interact with RHQ, have a look at this whitepaper (PDF)

As always: give us feedback, join us on IRC in #rhq and spread the word.

And last but not least I want to thank all our contributors for their valuable contributions.

Heiko on behalf of the RHQ team

Wednesday, April 25, 2012

Russian translations coming to RHQ

Denis Krusko, one of the two GSoC students working on RHQ, has just committed a first batch of Russian translations to RHQ. Here are two screenshots to put you in the right mood.

If you speak Russian and want to help Denis, I am sure he will appreciate it. Check out the translation project on github for details.

Russian translation on the login screen


Russian translations in user administration

Thank you Denis!

Monday, April 23, 2012

RHQ participates in this year's Google Summer of Code™

RHQ is happy to be able to participate in this year's Google Summer of Code™.

As part of the organization we have seen many very good proposals, and the program as a whole many more, so that even with the generosity of Google we have obtained far fewer slots than the number of good proposals received.

So I am extremely happy that for RHQ those two proposals have been accepted:

  • Replace old graphs by GWT ones - Denis Krusko: The main graphs in RHQ are still from the pre-RHQ era, implemented as servlets embedded in JSP and Struts pages. The current UI is mostly written in GWT. While we were able to embed the old graphs, they still don't feel 'right'.

    Denis will investigate options for replacement and then implement new GWT-compatible graphing. Denis will also look at how the graphs can become more interactive, e.g. by applying formulas to the data.

  • Implement an RHQ agent in Python - Krzysztof Kwaśniewski: The classical RHQ agent is written in Java and probably not best suited for every purpose. With the addition of the REST api, it now became easier to implement agents in other languages.

    Krzysiek will implement an agent in Python, that on one side talks REST with the RHQ server and on the other side interfaces with Matahari to take metrics from Fedora and RHEL hosts.

Together with the other RHQ contributors, I am very much looking forward to seeing Krzysztof and Denis in action.


*) The GSoC logo is taken from and has a CC-3.0-attribution-non-commercial-no-derivative license

Monday, February 13, 2012

IDEA: find by XPath

I have liked the idea of XPath since I first looked at it over 10 years ago. Today I came across a usage that is über cool. I had the need to find some pattern of data across a bunch of XML files, where there is a list of maps, and the list and maps are marked as read-only like in this example:

<c:list-property name="*" displayName="Installed extensions" readOnly="true" required="false">
<c:map-property name="*" displayName="Name" readOnly="true">
<c:simple-property name="module" displayName="Module name" readOnly="true"/>

Luckily IDEA offers "Find by XPath"

So I could type in my expression and have it search through the whole project.

[Screenshot: the Find by XPath dialog]

This looked good, but did not work. Then I remembered the little 'c:' and clicked the "Edit context..." button, which allowed me to set the correct namespace for 'c:':

[Screenshot: setting the namespace for 'c:' via Edit context]

Now with the namespace correctly set, the search ran for a few seconds and found the occurrences all across my files. /me is a happy user.
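The same namespace-aware search can also be scripted outside the IDE; here is a little Python sketch using the standard library (the namespace URI for 'c:' is assumed for illustration; adjust it to what your files declare):

```python
import xml.etree.ElementTree as ET

# Assumed mapping for the 'c:' prefix; take the URI from your own xmlns:c declaration.
NS = {'c': 'urn:xmlns:rhq-configuration'}

def find_readonly_simple_properties(xml_text):
    """Return the displayName of every c:simple-property marked readOnly."""
    root = ET.fromstring(xml_text)
    matches = root.findall('.//c:simple-property[@readOnly="true"]', NS)
    return [el.get('displayName') for el in matches]
```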

Sunday, February 12, 2012

Small Fosdem 2012 review

On the weekend of Feb 4th, 2012 the European (and not only) Free and Open source community met in Brussels, Belgium for FOSDEM (Free and Open Source Developers Meeting).

The following video gives an impression of what FOSDEM is like - and I was only able to capture a tiny part of it.

(original size is 960x540)

I only arrived on Saturday at around noon via train. Stuttgart to Cologne was via ICE, and from Cologne on I took the Thalys, where I had the luck to get a first class ticket for the price of a regular 2nd class one; the ticket included free WiFi and a free breakfast with croissants, sandwiches and beverages.

I gave a talk on RHQ "Recent and future developments". In the presentation I talked about what we have achieved recently and where we may perhaps go in the future.

(original size is 960x540, here are slides as pdf)

As I had the camera with me, I also recorded a few other talks

I was lucky to meet with some of the JBoss colleagues, but was not able to make it to the virt sessions and to meet more of the various Red Hat people that were there. Fortunately some of the talks were recorded by the FOSDEM crew and will hopefully be online soon.

FOSDEM is for sure a conference to go to -- and not only because entrance is free and there is beer sold all over the place :)

Monday, January 23, 2012

JBoss, Fedora and more from Red Hat at FOSDEM 2012

A lot of folks from Red Hat are visiting FOSDEM 2012 this year.

I've listed the speakers here - I will myself be talking about RHQ on Saturday.

In addition to the talks there are also stands:

You will be able to get a printed version of this list at the Fedora stand.







  • Welcome to the CrossDesktop Devroom (CrossDesktop, H.1308) - Christophe Fergeau
  • Welcome to the Legal Issues DevRoom (Legal Issues, AW1.125) - Richard Fontana
  • BoxGrinder: Grind your appliances easily (K.3.201) - Marek Goldmann
  • Welcome to the Free Java DevRoom (Free Java, K.4.401) - Mark Wielaard, Andrew Haley, Andrew Hughes
  • Drools Planner: Planning optimization by example (K.3.201) - Geoffrey De Smet
  • libguestfs - tools for modifying virtual machine disk images (Virtualization & Cloud, Chavanne) - Richard Jones
  • Cloud high availability with pacemaker-cloud (Virtualization & Cloud, Chavanne) - Pádraig Brady
  • Openshift (K.3.201) - Grant Shipley
  • The Aeolus Project (Virtualization & Cloud, Chavanne) - Francesco Vollero
  • JBoss AS7: Building JBoss AS 7 for Fedora (K.3.201) - Carlo De Wolf
  • Virtualization with KVM: bottom to top, past to future (Hypervisors, Janson) - Paolo Bonzini
  • JBoss Forge / Arquillian: Two Missing Links in Enterprise Java Development (K.3.201) - Koen Aers
  • Open Clouds with Deltacloud API (Virtualization & Cloud, Chavanne) - Michal Fojtik
  • DMTF CIMI and Apache Deltacloud (Virtualization & Cloud, Chavanne) - Marios Andreou
  • Infinispan: where open source, Java and in-memory data grids converge (K.3.201) - Manik Surtani
  • Crossdesktop group picture (CrossDesktop, H.1308) - Christophe Fergeau
  • The (possible) decline of the GPL, and what to do about it (Legal Issues, AW1.125) - Richard Fontana
  • RHQ: Recent and future developments in the RHQ systems monitoring and management framework (K.3.201) - Heiko Rupp
  • Panel on Application Stores (Legal Issues, AW1.125) - Richard Fontana
  • Guvernor/JBPM: Managing workflows and business rules with Guvnor and the jBPM designer (K.3.201) - Geoffrey De Smet, Marco Rietveld
  • Thermostat: Taking over the Java tooling world with Open Source Software (Free Java, K.4.401) - Jon VanAlten, Omair Majid
  • PMD5: What can it do for you? (Lightning Talks, Ferrer)
  • Tracing, Debugging and Testing With Byteman (Free Java, K.4.401) - Andrew Dinn
  • Spice "Open remote computing" introduction (Virtualization & Cloud, Chavanne) - Hans de Goede
  • USB redirection over the network (Virtualization & Cloud, Chavanne) - Hans de Goede
  • Systems Management with Matahari (Configuration & Systems Management, K.3.601) - Zane Bitter
  • Boxes, use other systems with ease (CrossDesktop, H.1308) - Zeeshan Ali (Khattak), Marc-André Lureau
  • Powerful tools for Linux C/C++ developers based on (Lightning Talks, Ferrer) - Andrew Overholt
  • Virtualization Management the oVirt way (Virtualization & Cloud, Chavanne) - Itamar Heim
  • oVirt Engine Core: Internals and Infrastructure (Virtualization & Cloud, Chavanne) - Omer Frenkel
  • Can I legally do that? (Free Java, K.4.401) - Mark Wielaard
  • VDSM - The oVirt Node Management Agent (Virtualization & Cloud, Chavanne) - Federico Simoncelli
  • OpenJDK on ARM: Quo vadis? (Free Java, K.4.401) - Andrew Haley
  • Building app sandboxes on top of LXC and KVM with (Virtualization & Cloud, Chavanne) - Daniel Berrange
  • IcedTea and IcedTea-Web (Free Java, K.4.401) - Deepak Bhole
  • Discussion on the Future of Free Java (Free Java, K.4.401) - Andrew Haley

Sunday, January 15, 2012

Pushing metrics / baselines via REST interface in RHQ

A few days ago I wrote about pulling raw metrics from RHQ via the REST interface.

It is also possible to push metrics as well as read and write baselines.

Pushing metrics

There are two ways to push numeric metric values into the server:

Single metric

curl -i -u rhqadmin:rhqadmin \
http://localhost:7080/rest/1/metric/data/10022/raw/1324656971 \
-X PUT \
-d @/tmp/foo

With /tmp/foo containing:
{"timeStamp": 132465716178, "value": 123, "scheduleId":10022}

Note that you need to give the schedule id (10022) and the timestamp in the URL. The samples-project also contains an example in Python.
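If you prefer scripting this over curl, the single-metric PUT can be sketched with nothing but the Python standard library (the helper name is mine; authentication is left out for brevity, attach e.g. an HTTPBasicAuthHandler before actually sending):

```python
import json
import urllib.request

def put_raw_metric(base_url, schedule_id, timestamp_ms, value):
    """Build the PUT request for a single raw metric value; the schedule id
    and timestamp go into the URL, the value is repeated in the JSON body."""
    url = '%s/rest/1/metric/data/%d/raw/%d' % (base_url, schedule_id, timestamp_ms)
    body = json.dumps({'scheduleId': schedule_id,
                       'timeStamp': timestamp_ms,
                       'value': value}).encode('utf-8')
    # Send with urllib.request.urlopen(req) once credentials are attached.
    return urllib.request.Request(url, data=body, method='PUT',
                                  headers={'Content-Type': 'application/json'})
```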

Multiple metrics

If you want to push multiple metrics at once you can use this call (of course it will also work for a single one):

curl -u rhqadmin:rhqadmin \
http://localhost:7080/rest/1/metric/data/raw \
-d @/tmp/foo

with /tmp/foo containing a list of metric objects:

[
  {"timeStamp": 132465716078, "value": 123, "scheduleId":10022},
  {"timeStamp": 132465716079, "value": 223, "scheduleId":10022}
]


Baselines

Baselines are an interesting feature in the sense that they mark a band in which a dynamic metric usually oscillates. When the metric goes out of those bounds, you can get an alert. The system usually computes those baselines by taking the minimum and maximum values from the existing dataset for the last n days (n is configurable). Once a baseline is computed, it is also displayed on the large metric graphs. Baselines can also be set manually.

It is now (master/RHQ 4.3) possible to set the computation frequency to 0 to disable the automatic calculation by the server, and to push baseline data via the REST interface into the server instead. This allows you to e.g. read metrics into a system like R, compute projections of future bands (e.g. via Holt-Winters) or x% quantiles of the existing data, and write the results back as baseline data so that the normal alerting workflow can pick them up.
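As a tiny sketch of such an external computation, here is the trivial min/max/mean variant (deliberately not Holt-Winters; the three fields mirror the baseline values used in the curl examples of this post):

```python
def compute_baseline(values):
    """Compute a baseline band from raw metric values: min and max bound
    the band, mean describes its center."""
    if not values:
        raise ValueError('need at least one data point')
    return {'min': min(values),
            'max': max(values),
            'mean': sum(values) / float(len(values))}
```

A real replacement for the server-side calculation would swap min/max for quantiles or a projection before PUTting the result back to /rest/1/metric/data/{scheduleId}/baseline.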

The first call is to obtain the baseline for schedule with id 10013:

curl -u rhqadmin:rhqadmin \
http://localhost:7080/rest/1/metric/data/10013/baseline

This call updates the schedule 10013 with new values:

curl -u rhqadmin:rhqadmin \
http://localhost:7080/rest/1/metric/data/10013/baseline \
-HContent-Type:application/json \
-HAccept:application/json \
-X PUT \
-d '{"max":2.58304512E9, "min":0.119968768E9, "mean":1.285011894659459E9}'

Saturday, January 14, 2012

Polyglot management of a secured AS7

JBossAS7 comes with a nice management interface that tools like the built-in admin-console or the console app are using. Next to the "more binary" DMR protocol, there is also a JSON interface available that can be accessed via http. Using this interface allows managing AS7 from any programming language.
Luckily :-) this interface is secured by default and only accessible for a valid user via http digest authentication.

Set up admin user

The first step is to enable a user on the server to use for this management interface:

$ cd /jboss-as-7.1.0
$ bin/

Enter the details of the new user to add.
Realm (ManagementRealm) : <press enter>
Username : heiko
Password : <provide password>
Re-enter Password : <provide password again>
About to add user 'heiko' for realm 'ManagementRealm'
Is this correct yes/no? yes
Added user 'heiko' to file '/jboss-as-7.1.0/standalone/configuration/'

Now we have created a user 'heiko' with password 'okieh'.

Shell with curl

The following command will shut down the server via the management interface:

$ curl --digest -u heiko http://localhost:9990/management/ -d '{"operation":"shutdown" }' -HContent-Type:application/json

Note that for option '-u' only the username is given; curl will ask for the password. One important part here is to mark the content-type of the data sent as "application/json". If this header is not provided, curl will send the request as 'application/x-www-form-urlencoded', which is disallowed by AS7.

If you run curl with option '-v' you can nicely see the re-negotiation to acquire the nonce from the server in order to compute the digest:

$ curl -v --digest -u heiko http://localhost:9990/management/ -d '{"operation":"shutdown" }' -HContent-Type:application/json
Enter host password for user 'heiko': <okieh>
* About to connect() to localhost port 9990 (#0)
* Trying connected
* Connected to localhost ( port 9990 (#0)
* Server auth using Digest with user 'heiko'
> POST /management/ HTTP/1.1
> User-Agent: curl/7.21.7 (x86_64-apple-darwin10.8.0) libcurl/7.21.7 OpenSSL/1.0.0e zlib/1.2.5 libidn/1.22
> Host: localhost:9990
> Accept: */*
> Content-Type:application/json
> Content-Length: 0
< HTTP/1.1 401 Unauthorized
< Content-length: 0
< Www-authenticate: Digest realm="ManagementRealm",nonce="6089edca29aa27b064aa1db42d9651eb"
< Date: Fri, 13 Jan 2012 09:54:39 GMT

The first request has been sent and the server has replied with a 401 Unauthorized and the nonce to use. Now curl continues:

* Connection #0 to host localhost left intact
* Issue another request to this URL: 'http://localhost:9990/management/'
* Re-using existing connection! (#0) with host localhost
* Connected to localhost ( port 9990 (#0)
* Server auth using Digest with user 'heiko'
> POST /management/ HTTP/1.1
> Authorization: Digest username="heiko", realm="ManagementRealm", nonce="6089edca29aa27b064aa1db42d9651eb", uri="/management/", response="78b9546e7485b661121e34a72d2979f1"
> User-Agent: curl/7.21.7 (x86_64-apple-darwin10.8.0) libcurl/7.21.7 OpenSSL/1.0.0e zlib/1.2.5 libidn/1.22
> Host: localhost:9990
> Accept: */*
> Content-Type:application/json
> Content-Length: 25
< HTTP/1.1 200 OK
< Transfer-encoding: chunked
< Content-type: application/json
< Date: Fri, 13 Jan 2012 09:54:39 GMT
* Connection #0 to host localhost left intact
* Closing connection #0
{"outcome" : "success"}

So we've issued the request and the server has shut down. Using the same technique you can also e.g. query the port the http connector is listening on (it has the symbolic name 'http'):

curl --digest -u heiko http://localhost:9990/management/ -HContent-Type:application/json  --data @-<< -EOF-

In this example you also see how to pass the address of the node to inspect and the name of the attribute to the server.
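The JSON payload for such a port query would look roughly as follows. Note that the address path (socket-binding-group 'standard-sockets') is an assumption based on a default AS 7.1 standalone configuration, so check your standalone.xml:

```python
import json

# hypothetical payload for reading the port of the 'http' socket binding;
# the 'standard-sockets' group name is an assumption
payload = {
    "operation": "read-attribute",
    "address": [
        {"socket-binding-group": "standard-sockets"},
        {"socket-binding": "http"},
    ],
    "name": "port",
}
print(json.dumps(payload))
```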

Beware that if you make a typo in the JSON encoding (e.g. separating key and value by a comma instead of a colon), the server may just respond with a 401 without telling you what went wrong.
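One way to catch such typos before AS7 swallows them is to run the payload through any JSON parser first; a quick Python check could look like this:

```python
import json

good = '{"operation":"read-attribute", "address":[], "name":"name"}'
json.loads(good)  # parses fine

# comma instead of colon, the kind of typo that earns a bare 401 from AS7
bad = '{"operation","read-attribute"}'
try:
    json.loads(bad)
except ValueError as e:
    print("invalid JSON:", e)
```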


It's been a long time since I did serious Perl coding, so the next example may not be the most elegant. The example shows again how to retrieve the http port via a 'read-attribute' operation. As I don't want to obfuscate the code even more, I just provide the password directly in the script.


use JSON qw(objToJson jsonToObj from_json to_json decode_json);
use LWP;

$host = "localhost";
$port = "9990";

$realm = "ManagementRealm";
$user = "heiko";
$password = "okieh";

# Construct url of management api
$url = "http://$host:$port/management";

# the command to send to the server in JSON encoding
$json_data = '{"operation":"read-attribute","address":[{"socket-binding-group":"standard-sockets"},{"socket-binding":"http"}],"name":"port"}';

# set up a User agent
my $browser = LWP::UserAgent->new();

# pass the credentials for the digest authentication
$browser->credentials("$host:$port", $realm, $user, $password);

# Create the request
my $req = HTTP::Request->new(POST => $url);
$req->content_type('application/json');
$req->content($json_data);

# send the request to the server
$res = $browser->request($req);

# If we don't get a 200 back, we finish here
die "No success ", $res->status_line unless $res->is_success;

# Get the content from the response
my $seite_code = $res->content;
print "Received : $seite_code \n";

# decode the json retrieved
my $json = JSON->new->utf8;
$obj = $json->decode($seite_code);
%pairs = %{$obj}; # json->decode returns a hash ref
# get the result
$httpPort = $pairs{"result"};
print "Http port is $httpPort \n";

The basic part to handle the digest authentication is $browser->credentials("$host:$port", $realm, $user, $password);, which makes LWP transparently handle the creation of the digest and the re-sending of the request.


Unlike Perl, which I used a lot in the past, I am not yet familiar with Ruby, so there may be a much better solution -- please provide some feedback. Especially, I have not found a good way to handle the digest authentication automatically, so this is done explicitly.


require 'json'
require 'net/http'
require 'net/http/digest_auth'

url = URI.parse('http://localhost:9990/management/')
url.user = 'heiko'
url.password = 'okieh'

# data to send to retrieve the server name
data = { "operation" => "read-attribute",
"address" => [],
"name" => "name"}

h = Net::HTTP.new(url.host, url.port)

# send first request to get nonce
req = Net::HTTP::Get.new(url.request_uri)
res = h.request req

So far we have sent a first request to obtain the 'nonce' from the server, so that we can compute the digest in the 2nd step.

# compute the digest
digest_auth = Net::HTTP::DigestAuth.new
auth = digest_auth.auth_header url, res['www-authenticate'], 'POST'

# Now send the real request with the nonce
body = JSON.generate(data)

puts "Sending " + body
req = Net::HTTP::Post.new(url.request_uri)
req.add_field 'Content-Type', 'application/json'
req.add_field 'Authorization', auth
req.body = body

res = h.request req

print "Result " + res.body

# parse the JSON and obtain the 'result' object
data = JSON.parse(res.body)
server_name = data["result"]
print "Server name is " + server_name


I want to thank Darran Lofthouse for helping me figure out why apparently correct requests were failing with a 403 (because of the wrong content type).

Monday, January 09, 2012

Some graphing fun with D3.js

Now that the RHQ REST api can expose raw numerical metrics for the last 7 days, it is possible to create additional graphs for those numbers. As before, I used D3.js to create the following graphs. The source is now in RHQ git master and will also be in RHQ 4.3.

The first graph shows the last 7 days of metrics in one go:
Last 7 days of metrics

This one shows the last 7 days where each span of 24h is represented by a colored line, which allows comparing the values directly - the darker the color, the older the data:

This needs some more work, as the x-axis labeling does not yet take the exact start time into account.
Also, time spans that have no values (e.g. because the agent is down) should show no line at all for that period instead of a straight line. There should perhaps be a legend for the colors as well.

I am still very much learning JavaScript and D3.js, and the above is far from the beautiful examples Mike Bostock creates, but I think one can already see the potential power here.

If anyone is interested in adding tooltips for the values or fixing the label for the second graph, please ping me.

Monday, January 02, 2012

Analyze your RHQ metrics with R

R plot of aggregate metrics


You probably have seen that with RHQ 4.2 you can export the aggregate metrics for the last n (default 8) hours via the REST api by calling:


http://localhost:7080/rest/1/metric/data/<scheduleId>

where <scheduleId> must be a valid numerical schedule id.

The statistical tool R can, via the "RCurl" package, load data from remote URLs, e.g. like this:

library(RCurl)
json_file <- getURL("http://localhost:7080/rest/1/metric/data/10013", httpheader=c(Accept = "application/json"), userpwd="rhqadmin:rhqadmin")

RCurl allows you to specify a username and password, unlike the simple json_file <- "http://..." calls. The next step is then to transform the received data with the help of the "rjson" library into R data structures:

library(rjson)

## convert json to list of vectors
json_data <- fromJSON(paste(json_file, collapse=""))

## convert the embedded data points into a data frame
df <- data.frame(do.call("rbind", json_data$dataPoints))

which you can then access and e.g. plot:

## plot the data
plot(df$timeStamp,df$value,xlab="time",ylab="Free memory (bytes)",xaxt='n',type='l')

You can find the full example as plot_metrics.r in the RHQ samples project on GitHub.
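The same conversion works in any language with a JSON parser. Assuming a response shape like the one the R code above consumes (a 'dataPoints' list with timeStamp/value entries, which is an assumption about the payload, and made-up sample numbers), a Python version could look like this:

```python
import json

# made-up two-point sample in the shape the R code above consumes
sample = ('{"dataPoints": ['
          '{"timeStamp": 1325500000000, "value": 1.2e9},'
          '{"timeStamp": 1325500060000, "value": 1.3e9}]}')

data = json.loads(sample)
timestamps = [p["timeStamp"] for p in data["dataPoints"]]
values = [p["value"] for p in data["dataPoints"]]
print(timestamps, values)
```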

Some like it raw...

RHQ master (this will make it into RHQ 4.3) is now also able to export raw metrics via the REST api.


As above, you provide the schedule id, and optionally a startTime and endTime, or an endTime and a duration (in seconds). If nothing is provided, the data for the last 8h is exported. The following screenshot shows an example plot:

R plot of RHQ metrics

The metrics are plotted in black, the average in blue, the 5% and 95% quantiles in orange and green, and, with the help of the TTR library, the 50-sample moving average is plotted in red. The sample code (with slightly different parameters) is also available online as plot_raw.r.
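The moving average and quantiles do not need R either. A rough Python equivalent of those two ingredients, run on synthetic stand-in data rather than real RHQ metrics, could be:

```python
import statistics

def moving_average(values, window=50):
    # simple moving average over a sliding window, like TTR's SMA
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

values = list(range(100))  # stand-in for raw metric values
sma = moving_average(values, window=50)

# 5% and 95% quantiles, as drawn in orange and green in the R plot
qs = statistics.quantiles(values, n=20)
p05, p95 = qs[0], qs[-1]
print(sma[0], sma[-1], p05, p95)
```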

If you find other cool usages, e.g. Holt-Winters prediction on the data, please consider submitting them as examples to the samples project.