Saturday, December 21, 2013

Money for nothing and memory for free: Java 7u40

I recently started playing with tools like MAT and reading up on memory usage, performance tuning, and so on.

One of the interesting blog posts I came across was a comparison of JVM versions, which mentions that in 7u40 the way the JVM allocates the backing memory for ArrayList and HashMap has changed.

When you do

List<Foo> bla = new ArrayList<Foo>();

the VM allocates memory for the base object, and in VM versions prior to 7u40 it also eagerly allocates 10 * 4 bytes for the references to Foo objects, which (with alignment and so on) adds up to 80 bytes per empty ArrayList.

In 7u40 the array for the references is no longer eagerly allocated, so an empty ArrayList now only occupies the base 24 bytes, a saving of 56 bytes.

You may say "so what, that's only 56 bytes", but remember that memory that is not allocated does not fill the heap, does not need to be garbage collected, and does not consume memory bandwidth for the initial nulling out.

And so often an empty list/map does not come alone as in a case like this (output from MAT):

Screenshot 2013 11 15 17 51 25

With 225k empty ArrayLists, 56 bytes matter: 225k * 56 bytes is roughly 12 MB that you would save just by switching from a pre-7u40 JVM to 7u40 or later, without any code change (of course, not allocating those lists at all would save even more).

The situation for HashMap is similar: before 7u40 an empty one uses 136 bytes, while in 7u40+ it uses 48, a saving of 88 bytes per empty HashMap; with the 235k empty HashMaps of the above example, that is a saving of 20 MB.
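To put numbers on the aggregate effect, here is a small back-of-the-envelope sketch (class and method names are mine; the per-instance sizes are the ones quoted above):

```java
public class EmptyCollectionSavings {

    // Per-instance sizes quoted above (pre-7u40 minus 7u40+), in bytes
    static final int ARRAY_LIST_SAVING = 80 - 24;  // 56
    static final int HASH_MAP_SAVING = 136 - 48;   // 88

    // Total bytes saved for a given number of empty lists and maps
    static long totalSavingBytes(long emptyLists, long emptyMaps) {
        return emptyLists * ARRAY_LIST_SAVING + emptyMaps * HASH_MAP_SAVING;
    }

    public static void main(String[] args) {
        long saving = totalSavingBytes(225_000, 235_000);
        System.out.printf("Total saving: ~%.1f MB%n", saving / (1024.0 * 1024.0));
    }
}
```

Of course this is only the direct footprint of the backing arrays; the real win also includes less GC work, as noted above.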

Another (older) change is that in 7u06 the minimum size of String objects was also reduced by 8 bytes, which matters a lot if you have millions of Strings in your VM.

Of course (as I've mentioned already), if you have access to the source code and can prevent the allocation of those empty objects altogether, you will save even more memory.

Saturday, November 02, 2013

Back from OneDayTalk

[ I should have already written this a bit earlier, but I had some trouble with my left knee and had to go through some surgery (which went well). ]

As in previous years I was invited to give a talk at the OneDayTalk conference in Munich. This year it was in a new location in a suburb of Munich called Germering. Getting there was easy for me, as there is an S-Bahn stop almost in front of the conference venue.

The new location featured more and larger rooms and, especially, an area to sit down between talks or during lunch time. As in previous years, the conference offered three parallel tracks.

As I said before, I like this conference: everything feels like a big family event with the organizers and the presenters, who included many JBoss colleagues (while I wrote earlier that Ray would be there, Andrew Rubinger replaced him). The only talk that I actually attended was the Wildfly one from Harald Pehl, which had a full house. For the remaining part of the conference I talked to various attendees and colleagues, from Jan Wildeboer to Gavin King and Heiko Braun. Heiko gave me an introduction to his (and Harald's) work on generating UIs from descriptors (which they use in the Wildfly console); it looks very interesting, and I think we could use some of that inside RHQ to create "wizards" for several types of target resources.

In my talk, which was in the last slot, I had around 30 attendees (around a third of the attendees still present). To my surprise I found out that the large majority did not yet know or use RHQ, so I had to deviate from my original agenda and give a brief introduction to RHQ first. Next I talked about the recent changes in RHQ and tried to gather feedback for future use cases, but that was of course harder with attendees who did not know much about RHQ. So much for "know your audience".

How do others find out about their audience when the only thing they know is "this conference is all about JBoss projects"?

You can find my slides in AsciiDoc format on GitHub; you can render them into an HTML presentation via AsciiDoctor.

Friday, November 01, 2013

Beware of the empty tag

I started playing with AngularJS lately and made some progress with RHQ as backend data source (via the REST api). Now I wanted to write a custom directive and use it like this:


<rhq-resource-picker/>
<div id="graph" style="padding: 10px"></div>

Now the very odd thing (for me) was that in the DOM this looked like this:


<rhq-resource-picker>
<div id="graph" style="padding: 10px"></div>
</rhq-resource-picker>

My custom directive wrapped around the <div> tag.

To fix this I had to turn my tag into a pair of opening and closing tags instead of the empty (self-closing) one:


<rhq-resource-picker></rhq-resource-picker>
<div id="graph" style="padding: 10px"></div>

And in fact it turns out that other tags, like the popular <div>, behave the same way: they do not allow the empty-element syntax and require an explicit tag pair.

Thursday, October 10, 2013

OneDayTalk conference in Munich

As in previous years, I will be at this year's JBoss OneDayTalk conference in Munich and talk about recent and future developments in RHQ.

OneDayTalk is a pretty nice little conference organized by the JBoss User Group Munich that offers three tracks with six sessions each; I usually have the problem that I can't split myself in three to visit them all. And for 99 Euro you also get some good food and many opportunities to meet me and other JBossians like Eric Schabell, Gavin King, Emanuel Muckenhuber, Heiko Braun or Ray Ploski. The conference web site has the full listing of speakers as well as the scheduled program.

Friday, September 13, 2013

More clever builds and tests ?

Hey,

when you have tried to build RHQ and run all the tests, you have probably seen that this can take a huge amount of time. That is fine for the first build, but later, when you e.g. fix a typo in the as4-plugin, you don't want GWT to be compiled again.
As a human developer it is relatively easy to just go into the right folder and only run the build there.

On a build automation system like Jenkins this is less easy, which is why I am writing this post.

What I want is something along these lines:

  • if a class in GWT has changed, only re-build GWT

  • if a class in a plugin has changed, only rebuild and test that plugin
    (and perhaps dependent plugins like hibernate on top of jmx)

  • if a class in enterprise changes, only build and test enterprise

  • if a class in server/jar changes, only rebuild that and run server-itests

  • if a class in core changes, rebuild everything (there may be more fine grained rules as e.g. a change in core/plugin-container does not require compiling GWT again)

This is probably a bit simplified, but you get the idea.

What I can imagine is that we compile the whole project (actually, we may even do incremental compiles to get build times further down, and may also only go into a specific module (and its dependencies) and just build those).
Then, instead of running mvn install, we run mvn install -DskipTests, afterwards analyze what has changed, throw the above rules at it, and only run the tests in the respective module(s).

We could perhaps have a little DSL like the following (which would live in the root of the project tree and be kept in git):

    rules: {
      rule1: {
        if: modules/enterprise/,
        then: {
          compile: modules/enterprise,
          skip: modules/enterprise/gui/coregui,
          test: [ modules/enterprise ]
        }
      },
      rule2: {
        if: modules/plugins/as7-plugin,
        then: {
          compile: [ modules/plugins/as7-plugin,
                     modules/integration-tests/as7-plugin ],
          test: [ modules/plugins/as7-plugin,
                  modules/integration-tests/as7-plugin ]
        }
      }
    }

We could then have these rules evaluated in a clever way by some Maven build helper that parses the file, together with the list of changes since the last build, to figure out what needs testing.
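To make the idea concrete, here is a purely hypothetical sketch of what such a helper could do: match changed file paths against a hard-coded version of the rules above to decide which modules need their tests run (module paths and rule contents are made up to mirror the DSL; the real helper would parse the DSL file and the SCM change list instead):

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical rule evaluator: maps changed source paths to the
// modules whose tests need to run. "ALL" stands for a full build.
public class TestSelector {

    static final Map<String, List<String>> RULES = new LinkedHashMap<>();
    static {
        RULES.put("modules/enterprise", List.of("modules/enterprise"));
        RULES.put("modules/plugins/as7-plugin",
                  List.of("modules/plugins/as7-plugin",
                          "modules/integration-tests/as7-plugin"));
        // A change in core triggers everything
        RULES.put("modules/core", List.of("ALL"));
    }

    static Set<String> modulesToTest(List<String> changedFiles) {
        Set<String> result = new LinkedHashSet<>();
        for (String file : changedFiles) {
            for (Map.Entry<String, List<String>> rule : RULES.entrySet()) {
                if (file.startsWith(rule.getKey())) {
                    result.addAll(rule.getValue());
                }
            }
        }
        return result;
    }
}
```

A CI job would feed this the output of e.g. `git diff --name-only` and then invoke Maven only in the resulting modules.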

We can still run a full build with everything to make sure that we don't lose coverage through those shortcuts.

There may be build systems like Gradle that have this capability built in; for Maven I think this requires some additional tooling.

QUESTIONS:

  • Are there any "canned" solutions available?

  • Has anyone already done something like "partial tests" (and how)?

  • Anyone knows of maven plugins that can help here?

  • Anyone interested in getting this going? This is surely not only interesting for RHQ, but for a lot of projects

Having such an infrastructure will also help us in the future to better integrate external patches, as faster builds and tests would allow us to automatically test contributions and "pre-approve" them.




Wednesday, September 11, 2013

RHQ 4.9 released

It is a pleasure for me to announce, on behalf of the RHQ development team, the immediate availability of RHQ 4.9.

Features

Some of the new features since RHQ 4.8 are:

Be sure to read the release notes and the installation documents.

Note

For security reasons we have changed the installation: there is no longer a default password for the rhqadmin superuser. Also, the default bind address of 0.0.0.0 has been removed. You need to set both before starting the installation.

If you are upgrading from RHQ 4.8, you need to run a script to remove the native components from Cassandra before the upgrade. Otherwise RHQ 4.9 will fail to start.

Thanks

Special thanks go to

  • Elias Ross

  • Jérémie Lagarde

  • Michael Burman

for their code contributions for this release and to Stian Lund for his repeated testing of the new graphs implementation.

Downloads

As usual, you can download the release from the RHQ project downloads on SourceForge.




Friday, July 26, 2013

Custom Deserializer in Jackson and validation

tl;dr: it is important to add input validation to custom JSON deserializers in Jackson.

In RHQ we make use of JSON parsing in a few places: directly in the as7/Wildfly plugin, and indirectly in the REST-api via RESTEasy 2.3.5, which already does the heavy lifting.

Now we have a bean Link that looks like

public class Link {
    String rel;
    String href;
}

The standard way for serializing this is

{ "rel":"edit", "href":"http://acme.org" }

As we need a different format, I have written a custom serializer and attached it to the class.

@JsonSerialize(using = LinkSerializer.class)
@JsonDeserialize(using = LinkDeserializer.class)
@Produces({"application/json","application/xml"})
public class Link {

    private String rel;
    private String href;

This custom format looks like:

{
    "edit": {
        "href": "http://acme.org"
    }
}

As a client can also send links, some custom deserialization needs to happen. A first cut of the deserializer looked like this and worked well:

public class LinkDeserializer extends JsonDeserializer<Link> {

    @Override
    public Link deserialize(JsonParser jp,
                            DeserializationContext ctxt) throws IOException
    {
        String tmp = jp.getText(); // {
        jp.nextToken(); // skip over { to the rel
        String rel = jp.getText();
        jp.nextToken(); // skip over {
        […]

        Link link = new Link(rel, href);

        return link;
    }

Now what happened the other day was that in some tests I was sending data and our server blew up horribly: memory usage grew, the garbage collector took huge amounts of CPU time, and the call eventually terminated with an OutOfMemoryError.

After some investigation I found that the client did not send the Link object in our special format, but in the original format that I showed first. Further investigation showed that the LinkDeserializer was consuming the tokens from the stream as seen above, and then also swallowing subsequent tokens from the input. So when it returned, the whole parser was in a bad state and ended up copying large arrays around until we saw the OOME.

After I understood this, I changed the implementation to add validation and to bail out early on invalid input, so that the parser does not get into a bad state:

    public Link deserialize(JsonParser jp,
                            DeserializationContext ctxt) throws IOException
    {
        String tmp = jp.getText(); // {
        validate(jp, tmp, "{");
        jp.nextToken(); // skip over { to the rel
        String rel = jp.getText();
        validateText(jp, rel);
        jp.nextToken(); // skip over {
        tmp = jp.getText();
        validate(jp, tmp, "{");
        […]

Those validate*() methods then simply compare the token with the passed expected value and throw an exception on unexpected input:

    private void validate(JsonParser jsonParser, String input,
                          String expected) throws JsonProcessingException {
        if (!input.equals(expected)) {
            throw new JsonParseException("Unexpected token: " + input,
                jsonParser.getTokenLocation());
        }
    }

The validation can perhaps be improved further, but you get the idea.
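The fail-fast idea can also be shown without Jackson's API: here is a tiny, self-contained sketch (all names are made up) of a reader that expects a fixed token sequence for the custom link format and throws on the first mismatch, instead of silently consuming input and leaving the stream in a bad state:

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.List;

// Simplified, Jackson-free illustration of fail-fast validation:
// consume tokens in the expected order and bail out on the first mismatch.
public class StrictTokenReader {

    private final Iterator<String> tokens;

    public StrictTokenReader(List<String> tokens) {
        this.tokens = tokens.iterator();
    }

    // Return the next token if it matches the expectation, otherwise throw.
    private String expect(String expected) throws IOException {
        String actual = next();
        if (!actual.equals(expected)) {
            throw new IOException("Unexpected token: " + actual
                    + ", expected: " + expected);
        }
        return actual;
    }

    private String next() throws IOException {
        if (!tokens.hasNext()) {
            throw new IOException("Unexpected end of input");
        }
        return tokens.next();
    }

    // Parse the custom { "rel": { "href": "..." } } shape into [rel, href]
    public String[] parseLink() throws IOException {
        expect("{");
        String rel = next();
        expect("{");
        expect("href");
        String href = next();
        expect("}");
        expect("}");
        return new String[] { rel, href };
    }
}
```

Feeding it the "standard" flat format fails on the second token instead of corrupting later reads, which is exactly the behavior the validating deserializer above restores.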




Monday, July 15, 2013

JavaFX UI for the RHQ plugin generator (updated)

In RHQ we have a tool to generate initial plugin skeletons: the plugin generator. It is a standalone tool that you can use to get going with plugin development.
This is, for example, described in the "How to write a plugin" article.

Now, my colleague Simeon always wanted a graphical frontend (he is very much in favor of graphical frontends, while I am one of those old Unix neckbeards :). Inspired by this year's Java Forum Stuttgart, I sat down on the weekend and played a bit with JavaFX. The result is a first cut of a UI frontend for the generator:

UI for the plugin generator

On top you see a message bar that shows field validation issues, the middle is the part where you enter the settings, and at the bottom you get a short description of each property.

I've pushed a first cut that is not yet complete, but you will get the idea. Help is always appreciated.
The code is in RHQ git under modules/helpers/pluginGen.

[update]

I continued working a bit on the generator and added the possibility to specify a directory containing classes that have been annotated with the annotations from the org.rhq.helpers:rhq-pluginAnnotations module.

Screenshot scan-for-annotations

A class could look like this


public class FooBean {

    @Metric(description = "How often was this bean invoked",
            displayType = DisplayType.SUMMARY,
            measurementType = MeasurementType.DYNAMIC,
            units = Units.SECONDS)
    int invocationCount;

    @Metric(description = "Just a foo", dataType = DataType.TRAIT)
    String lastCommand;

    @Operation(description = "Increase the invocation count")
    public int increaseCounter() {
        invocationCount++;
        return invocationCount;
    }

    @Operation(description = "Decrease the counter")
    public void decreaseCounter(@Parameter(description
            = "How much to decrease?", name = "by") int by) {
        invocationCount -= by;
    }
}

The generator would then create the respective <metric> and <operation> elements in the plugin descriptor; in this case you don't need to select the two "Has Metrics/Operations" flags above.
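To illustrate the mechanism (this is not RHQ's actual implementation; the annotation and output format are trimmed-down stand-ins), a scanner can find the annotated fields via reflection and emit descriptor elements:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Minimal, made-up stand-in for the rhq-pluginAnnotations @Metric annotation,
// just to show how a generator can derive descriptor elements via reflection.
public class MetricScanner {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface Metric {
        String description();
    }

    public static class SampleBean {
        @Metric(description = "How often was this bean invoked")
        int invocationCount;

        String notAMetric; // ignored: no annotation
    }

    // Produce one <metric .../> line per annotated field
    public static List<String> descriptorElements(Class<?> clazz) {
        List<String> elements = new ArrayList<>();
        for (Field f : clazz.getDeclaredFields()) {
            Metric m = f.getAnnotation(Metric.class);
            if (m != null) {
                elements.add("<metric property=\"" + f.getName()
                        + "\" description=\"" + m.description() + "\"/>");
            }
        }
        return elements;
    }
}
```

The real generator works on a whole directory of classes and also handles operations and parameters, but the reflection-driven core is the same idea.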

Now, this work is still not finished, and in fact it would be good to find a common set of annotations for a broader range of projects.

Wednesday, July 03, 2013

Using Asciidoc with MarsEdit - first cut

With Asciidoc becoming more popular due to Asciidoctor, I started authoring documents in Asciidoc and thought it would be a good idea to use it for my blogging as well (in the long run I may set up my blog with Awestruct, but for now Blogger has to do).

Usually I am using MarsEdit to write my posts, as it is just convenient for me. Since the recently released version 3.6, MarsEdit allows you to write custom text filters. A few such filters are provided, e.g. for Markdown or Textile, so that you can write blog posts inside MarsEdit in those markup languages and still be able to post to e.g. Blogger, which requires HTML.

When I saw the filters announced, I thought that this should be possible with Asciidoc as well.

So basically to achieve this, you need to create a directory

~/Library/Application Support/MarsEdit/TextFilters/Asciidoc_0.0.1

and then create a file Asciidoc.rb with the following content:

#!/usr/bin/ruby

require 'rubygems'
require 'asciidoctor'

input = $stdin.read

puts Asciidoctor.render(input)

Make that file executable and install the AsciiDoctor gem.

Then (re)start MarsEdit and select Asciidoc as the preview text filter in the connection settings of the Blogger account (see MarsEdit's per-blog settings). Then click on the "Posting" tab and check "Apply preview filter before posting".

Unfortunately I have not yet found out how to include (remote) images.

NOTE

Not all AsciiDoc markup makes sense, as AsciiDoctor usually relies on some CSS that may not be present on the target blog system. It might not be too hard, though, to create a special "blogger" backend that uses Blogger's CSS, or to tell Blogger to accept the AsciiDoctor CSS files.

This screenshot shows MarsEdit with source and preview.

I hope this little post makes sense and encourages people to experiment with it in order to make AsciiDoc a real alternative to




Saturday, June 29, 2013

New Android tooling with gradle: Order matters

I am trying to convert RHQPocket over to use the new Android build system with Gradle.
The documentation is comprehensive, but as always does not exactly match what I have in front of me.

After moving the sources around, I started writing a build.gradle file. I added my external libs to the dependencies, but the build did not succeed despite a lot of trial and error.

In the end, with the help of googling and StackOverflow, I found out that order matters a lot in the build file.

First comes the section about plugins for the build system itself:

buildscript {
    repositories {
        mavenCentral()
    }

    dependencies {
        classpath 'com.android.tools.build:gradle:0.4.2'
    }
}

Settings herein only apply to the build system itself, not to the project being built.

After this we can apply those plugins (here, the android one):

apply plugin: 'android'
apply plugin: 'idea'


And only when those plugins are loaded can we start talking about the project itself and add dependencies:

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.codehaus.jackson:jackson-core-asl:1.9.12'
    compile 'org.codehaus.jackson:jackson-mapper-asl:1.9.12'
}

android {
    compileSdkVersion 17
    buildToolsVersion "17.0"
}


I guess for someone experienced with Gradle this is no news, but it took me quite some time to figure out, especially as the documentation mentions all the parts, but not in a larger example.

The file layout now looks like this:

Bildschirmfoto 2013 06 29 um 22 07 02


One caveat is that the external libraries are not yet automagically loaded into the IDE; one still has to add them manually in the project structure wizard.

The last thing is to mark build/source/r/debug as a source directory to also get completion for the R.* entries.

Tuesday, June 25, 2013

RHQ 4.8 released



RHQ 4.8 has been released with another batch of new features and bug fixes.
Major changes since RHQ 4.7 are:
  • New storage backend for metrics based on Cassandra:

    Numerical metric data is now stored in a Cassandra cluster to improve scalability for larger deployments with many metrics.
    Architecture overview cassandra color

    The intention is to move more mass data into Cassandra in the future (events, call-time data, etc.); we just had to start somewhere. If you are upgrading from an earlier version of RHQ, a utility to migrate the gathered metrics is provided.
  • More improvements to the new charts
    Bildschirmfoto 2013 06 25 um 10 57 07

    The selection of the time period has been changed:
    • There is now a button bar at the top to quickly select the timeframe to display
    • If you hover with the mouse between two "bars" you will see a little crosshair that starts
      selection mode; drag it over a series of bars to zoom in. Mike Thompson has created a YouTube video that explains this.
  • New installation process:

    With the above change of the backend storage, we have changed the installation process again to reduce the complexity to install RHQ. The new process basically goes:
    • unzip rhq-server-4.8.0.zip
    • cd rhq-server-4.8.0
    • edit bin/rhq-server.properties
    • bin/rhqctl install (--storage-data-root-dir=<some directory>)

    For upgrades the process is very similar: you unzip the new binaries and then run bin/rhqctl upgrade --from-server-dir=<old server>.
    Please consult the installation and upgrade guides before starting.
  • REST-api is now (mostly) stable and supports paging on (some) resources. The documentation is also up on SourceForge and the rhq-samples project has a special section on examples for the REST-api.


As always there have been many more smaller improvements and bug fixes - many thanks to everyone who has contributed via bug report, comments, discussions on #rhq and/or via code contributions.

Please check the full release notes for details. They also contain a list of commits.

RHQ is an extensible tool to monitor your infrastructure of machines and applications, alert operators on user-defined conditions, configure resources and run operations on them from a central web-based UI. Other ways of communicating with RHQ include a command line interface and a REST-api.

You can download the release from source forge.


As mentioned above, the old installer is gone, so make sure to read
the wiki document describing how to use the new installer.

Maven artifacts are available from the JBoss Nexus repository and should also be available from Central.


Please give us feedback, be it in Bugzilla, Mailing lists or the forum. Or just join us on IRC at irc.freenode.net/#rhq.

And as I have been asked recently: yes we are happy to accept code contributions -- be it for the RHQ core, as well as plugins or for the samples project. Also if you e.g. have written a plugin, share a pointer to it, which we can then share on the wiki etc.

Thursday, May 23, 2013

Support for paging in the RHQ REST-api (updated)

I have just pushed some changes to RHQ master that implement paging for most of the REST-endpoints that return collections of data.



If you remember, I was asking the other day whether there is a "standard" way to do this. Basically there are two "big options":
  1. Put paging info in the Http-Header
  2. Put paging info in the body of the message


While I think paging information is metadata that should not "pollute" the body, I understand the argument from the JavaScript side that it is not easy to access the HTTP headers within AJAX requests. So what I have now done is to implement both:
  1. Paging in the http header: this is the standard way that you get if you just request the "normal" media types of application/xml or application/json (output from running RestAssured):

    [update]
    My colleague Libor pointed out that the links do not match with format from RFC 5988 Web Linking, which is now fixed.
    [/update]

    Request method: GET
    Request path: http://localhost:7080/rest/resource
    Query params: page=1
    ps=2
    category=service
    Headers: Content-Type=*/*
    Accept=application/json
    HTTP/1.1 200 OK
    Server=Apache-Coyote/1.1
    Pragma=No-cache
    Cache-Control=no-cache
    Expires=Thu, 01 Jan 1970 01:00:00 CET
    Link=<http://localhost:7080/rest/resource?ps=2&category=service&page=2>; rel="next"
    Link=<http://localhost:7080/rest/resource?ps=2&category=service&page=0>; rel="prev"
    Link=<http://localhost:7080/rest/resource?ps=2&category=service&page=152>; rel="last"
    Link=<http://localhost:7080/rest/resource?page=1&ps=2&category=service>; rel="current"
    Content-Encoding=gzip
    X-collection-size=305
    Content-Type=application/json
    Transfer-Encoding=chunked
    Date=Thu, 23 May 2013 07:57:38 GMT

    [
    {
    "resourceName": "/",
    "resourceId": 10041,
    "typeName": "File System",
    "typeId": 10013,
    …..
  2. Paging as part of the body - there the "real collection" is wrapped inside an object that also contains paging meta data as well as the paging links. To request this representation, a media type of application/vnd.rhq.wrapped+json needs to be used (and this is only available with JSON at the moment):

    Request method: GET
    Request path: http://localhost:7080/rest/resource
    Query params: page=1
    ps=2
    category=service
    Headers: Content-Type=*/*
    Accept=application/vnd.rhq.wrapped+json
    Cookies:
    Body:
    HTTP/1.1 200 OK
    Server=Apache-Coyote/1.1
    Pragma=No-cache
    Cache-Control=no-cache
    Expires=Thu, 01 Jan 1970 01:00:00 CET
    Content-Encoding=gzip
    Content-Type=application/vnd.rhq.wrapped+json
    Transfer-Encoding=chunked
    Date=Thu, 23 May 2013 07:57:40 GMT

    {
    "data": [
    {
    "resourceName": "/",
    "resourceId": 10041,
    "typeName": "File System",
    "typeId": 10013,

    ],
    "pageSize": 2,
    "currentPage": 1,
    "lastPage": 152,
    "totalSize": 305,
    "links": [
    {
    "next": {
    "href": "http://localhost:7080/rest/resource?ps=2&category=service&page=2"
    }
    },

    }
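On the client side, the Link headers of variant 1 can be consumed with a small helper; this is a hypothetical, stdlib-only sketch (not part of RHQ) that turns headers like `<http://…?page=2>; rel="next"` into a rel-to-href map:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical client helper: parse RFC 5988 style Link header values
// (e.g. <http://host/rest/resource?page=2>; rel="next") into rel -> href.
public class LinkHeaderParser {

    public static Map<String, String> parse(Iterable<String> linkHeaders) {
        Map<String, String> links = new LinkedHashMap<>();
        for (String header : linkHeaders) {
            String[] parts = header.split(";");
            String href = parts[0].trim();
            href = href.substring(1, href.length() - 1); // strip the < > brackets
            for (int i = 1; i < parts.length; i++) {
                String param = parts[i].trim();
                if (param.startsWith("rel=")) {
                    String rel = param.substring(4).replace("\"", "");
                    links.put(rel, href);
                }
            }
        }
        return links;
    }
}
```

A client would then simply follow `links.get("next")` until it is absent to walk all pages.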

Please try this if you can. We want to get this into a "finished" state (as the whole REST-api) for RHQ 4.8.

Thursday, May 16, 2013

Creating a delegating login module (for JBoss EAP 6.1 )



[ If you only want to see code, just scroll down ]

Motivation


In RHQ we had a need for a security domain that can be used to secure the REST-api and its web app via container-managed security. In the past I had just used the classical DatabaseServerLoginModule to authenticate against the database.

Now, RHQ also allows users to live in LDAP directories, which were not covered by the above module. I had two options to start with:
  • Copy the LDAP login modules into the security domain for REST
  • Use the security domain for the REST-api that is already used for UI and CLI


The latter option was of course preferable to prevent code duplication, so I went that route. And failed.

I failed because RHQ was dropping and re-creating the security domain on startup, and the server was detecting this and complaining that the security domain referenced from rhq-rest.war was all of a sudden gone.

So, next try: don't re-create the domain on startup and only add/remove the LDAP login modules (I am talking about modules in the plural because we actually have two that we need).

This also did not work as expected:
  • The underlying AS sometimes went into reload-needed mode and did not apply the changes
  • When the ldap modules were removed, the principals from them were still cached
  • Flushing the cache did not work and the server went into reload-needed mode


So what I did now was to implement a login module for the rest-security-domain that just delegates to another domain for authentication and then adds roles on success.

This way the rhq-rest.war has a fixed reference to that rest-security-domain and the other security domain could just be handled as before.

Implementation



Let's start with the snippet from the standalone.xml file describing the security domain and parametrizing the module:


<security-domain name="RHQRESTSecurityDomain" cache-type="default">
    <authentication>
        <login-module code="org.rhq.enterprise.server.core.jaas.DelegatingLoginModule" flag="sufficient">
            <module-option name="delegateTo" value="RHQUserSecurityDomain"/>
            <module-option name="roles" value="rest-user"/>
        </login-module>
    </authentication>
</security-domain>


So this definition sets up a security domain RHQRESTSecurityDomain, which uses the DelegatingLoginModule that I will describe in a moment. There are two parameters passed:
  • delegateTo: the name of the other domain to authenticate the user against
  • roles: a comma-separated list of roles to add to the principal (these are the roles needed in the security-constraint section of web.xml)


For the code I don't show the full listing; you can find it in git.

To make our lives easier we don't implement all the functionality on our own, but extend the already existing UsernamePasswordLoginModule and only override certain methods.


public class DelegatingLoginModule extends UsernamePasswordLoginModule {


First we initialize the module with the passed options and create a new LoginContext with
the domain we delegate to:

@Override
public void initialize(Subject subject, CallbackHandler callbackHandler,
                       Map<String, ?> sharedState,
                       Map<String, ?> options)
{
    super.initialize(subject, callbackHandler, sharedState, options);

    /* This is the login context (=security domain) we want to delegate to */
    String delegateTo = (String) options.get("delegateTo");

    /* Now create the context for later use */
    try {
        loginContext = new LoginContext(delegateTo, new DelegateCallbackHandler());
    } catch (LoginException e) {
        log.warn("Initialize failed : " + e.getMessage());
    }
}



The interesting part is the login() method, where we get the username/password and store it for later. Then we try to log into the delegate domain, and if that succeeds we tell super that we had success, so that it can do its magic:


@Override
public boolean login() throws LoginException {
    try {
        // Get the username / password the user entered and save it for later use
        usernamePassword = super.getUsernameAndPassword();

        // Try to log in via the delegate
        loginContext.login();

        // Login was a success, so we can continue
        identity = createIdentity(usernamePassword[0]);
        useFirstPass = true;

        // This next flag is important. It makes the superclass call
        // LoginModule.commit(), which picks up the principal along with the
        // roles. Without it the principal will not be propagated: login()
        // succeeds, but no principal is attached.
        loginOk = true;

        if (debugEnabled) {
            log.debug("Login ok for " + usernamePassword[0]);
        }

        return true;
    } catch (Exception e) {
        if (debugEnabled) {
            log.debug("Login failed for : " + usernamePassword[0] + ": " + e.getMessage());
        }
        loginOk = false;
        return false;
    }
}


After success, super will call into the next two methods to obtain the principal and its roles:

@Override
protected Principal getIdentity() {
    return identity;
}

@Override
protected Group[] getRoleSets() throws LoginException {

    SimpleGroup roles = new SimpleGroup("Roles");

    for (String role : rolesList) {
        roles.addMember(new SimplePrincipal(role));
    }
    Group[] roleSets = { roles };
    return roleSets;
}


And now the last part: the callback handler that the domain we delegate to will use to obtain the credentials from us. It is the classical JAAS login callback handler. One thing that at first totally confused me was that this handler was called several times during login, and I thought it was buggy. But in fact the number of times it is called corresponds to the number of login modules configured in the RHQUserSecurityDomain.


private class DelegateCallbackHandler implements CallbackHandler {
    @Override
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {

        for (Callback cb : callbacks) {
            if (cb instanceof NameCallback) {
                NameCallback nc = (NameCallback) cb;
                nc.setName(usernamePassword[0]);
            }
            else if (cb instanceof PasswordCallback) {
                PasswordCallback pc = (PasswordCallback) cb;
                pc.setPassword(usernamePassword[1].toCharArray());
            }
            else {
                throw new UnsupportedCallbackException(cb, "Callback " + cb + " not supported");
            }
        }
    }
}



Again, the full code is available in the RHQ git repository.

Debugging (in EAP 6.1 alpha or later )



If you write such a login module and it does not work, you want to debug it. I started with the usual means and found that my login() method was working as expected, but the login still failed. I added print statements etc. to find out that the getRoleSets() method was never called. Still, everything looked okay. I did some googling and found this good wiki page: it is possible to tell a web app to do audit logging.


<jboss-web>
    <context-root>rest</context-root>
    <security-domain>RHQRESTSecurityDomain</security-domain>
    <disable-audit>false</disable-audit>
</jboss-web>


This flag alone is not enough, as you also need an appropriate logger set up, which is explained on
the wiki page. After enabling this, I saw entries like


16:33:33,918 TRACE [org.jboss.security.audit] (http-/0.0.0.0:7080-1) [Failure]Source=org.jboss.as.web.security.JBossWebRealm;
principal=null;request=[/rest:….

So it became obvious that the login module did not set the principal. Looking at the code in the superclasses then led me to the loginOk flag mentioned above.

Now with everything correctly set up the audit log looks like


22:48:16,889 TRACE [org.jboss.security.audit] (http-/0.0.0.0:7080-1)
[Success]Source=org.jboss.as.web.security.JBossWebRealm;Step=hasRole;
principal=GenericPrincipal[rhqadmin(rest-user,)];
request=[/rest:cookies=null:headers=authorization=user-agent=curl/7.29.0,host=localhost:7080,accept=*/*,][parameters=][attributes=];

So here you see that the principal rhqadmin has logged in and got the role rest-user assigned, which is the one matching in the security-constraint element in web.xml.

Further viewing



I've presented the above as a Hangout on Air. Unfortunately G+ muted me from time to time when I was typing while explaining :-(

After the video was done I got a few more questions that in the end made me rethink the startup phase for the case that the user has a previous version of RHQ installed with LDAP enabled. In this case, the installer will still install the initial DB-only RHQUserSecurityDomain, and then in the startup bean
we check a) if LDAP is enabled in the system settings and b) if the login modules are actually present.
If a) matches and they are not present, we install them.

This Bugzilla entry also contains more information about this whole story.

Tuesday, May 07, 2013

RHQ 4.7 released



RHQ 4.7 has been released and one of the two major features in this release is
the new charts that finally replace the years-old charts we have had since the
start of the RHQ project:


Screenshot with new charts


The new charts are implemented with the awesome D3.js toolkit, as I've written before.

The other big change is an upgrade of the underlying app server to JBoss EAP 6.1 alpha 1.

As always there have been many more smaller improvements and bug fixes.

Please check the full release notes for details. They also contain a list of commits.

RHQ is an extensible tool to monitor your infrastructure of machines and applications, alert operators on user defined conditions, configure resources and run operations on them from a central web-based UI. Other ways of communicating with RHQ include a command line interface and a REST-api.

You can download the release from source forge.


As mentioned above, the old installer is gone, so make sure to read
the wiki document describing how to use the new installer.

Maven artifacts are available from the JBoss Nexus repository and should also be available from Central.



Please give us feedback, be it in Bugzilla, Mailing lists or the forum. Or just join us on IRC at irc.freenode.net/#rhq.

Thursday, April 25, 2013

Building WildFly

After Red Hat renamed TASFKAJB to WildFly, the git repository has also been moved to https://github.com/wildfly/wildfly.

Building WildFly is pretty similar to what it has been before
  • Fork the WildFly repository to your own on GitHub
  • Clone your repository to the local disk (which in my case is)
    git clone git@github.com:pilhuhn/wildfly.git
  • Change into the created wildfly directory
    cd wildfly
  • Run maven to build the server
    mvn install

The server is then created inside the build/target directory

$ pwd
/devel/wildfly/build/target
$ ls
antrun generated-configs wildfly-8.0.0.Alpha1-SNAPSHOT

change into this directory and start the server via bin/standalone.sh

Done :-)

Wednesday, March 13, 2013

Awesome new graphs in RHQ - based on d3.js



Mike Thompson yesterday presented the latest and greatest version of the new graphs for RHQ in a video on YouTube. Shortly after, he committed the results of his huge work to the RHQ master branch.


While this work is not yet finished, it is the result of the work started by Denis Krusko in last year's Google Summer of Code. At the moment both the old and the new graphs can be viewed in the UI, so that you can compare them and report any mismatches.


Here are some screenshots to foster your appetite:


Popup chart for a single metric
Popup chart for a single metric

Individual metric
Individual metric on the monitoring tab


As the subject already says, those graphs are made with the help of the awesome D3.js framework. I'll let Mike chime in to describe in more detail what he and Denis had to do to get this to work inside GWT+SmartGWT.

I've uploaded a snapshot build of master (as of this morning, my time) from our CI server to SourceForge for you to try. THIS IS NOT FOR PRODUCTION.


There is a known issue where the red bar shows "..global exception..": this is harmless and we will fix it anyway. Also, the graph portlets in the dashboard don't honor the column width yet.


Please do not forget to report bugs (if there are any :-)

Thursday, March 07, 2013

Sorry Miss Jackson -- or how I loved to do custom Json serialization in AS7 with RestEasy

Who doesn't remember the awesome "Sorry Miss Jackson" video ?

Actually that doesn't have anything to do with what I am talking about here -- except that RestEasy inside JBossAS7 internally uses the Jackson json processing library. But let me start from the beginning.

Inside the RHQ REST api we have exposed links (in json representation) like this:

{
    "rel": "self",
    "href": "http://...
}

which is the natural format when a Pojo

class Link {
    String rel;
    String href;
}

is serialized (with the Jackson provider in RestEasy).

Now while this is pretty cool, there is the disadvantage that if you need the link for a rel of 'edit', you have to load the list of links, iterate over them, check for each link whether its rel is 'edit', and then take the value of its href.
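For illustration, resolving the href for a given rel from such a list looks roughly like this (Link mirrors the Pojo above; findHref is a hypothetical helper, not RHQ API):

```java
import java.util.Arrays;
import java.util.List;

public class LinkLookup {

    // Mirrors the Pojo from the post
    static class Link {
        final String rel;
        final String href;
        Link(String rel, String href) { this.rel = rel; this.href = href; }
    }

    // Hypothetical helper: scan the list until the wanted rel is found
    static String findHref(List<Link> links, String rel) {
        for (Link link : links) {
            if (rel.equals(link.rel)) {
                return link.href;
            }
        }
        return null; // no link with that rel
    }

    public static void main(String[] args) {
        List<Link> links = Arrays.asList(
                new Link("self", "http://example.org/res/1"),
                new Link("edit", "http://example.org/res/1/edit"));
        System.out.println(findHref(links, "edit")); // http://example.org/res/1/edit
    }
}
```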

And there is this Bugzilla entry.

I started investigating this and thought I would use a MessageBodyWriter and all would be well. Unfortunately MessageBodyWriters do not work recursively and thus cannot format Link objects in a special way when they are part of another object.

Luckily I had done custom serialization with Jackson in the as7-plugin before, so I tried this, but the Serializer was not even instantiated. More trying and fiddling and a question on StackOverflow led me to a solution: copying the Jackson libraries (core, mapper and jaxrs) into the lib directory of the RHQ ear, after which the new serialization suddenly worked. The output is now

{
    "self": {
        "href": "http://..."
    }
}
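Conceptually, the custom serializer just reshapes the flat {rel, href} pair into an object keyed by the rel. A sketch of that reshaping with plain maps (not the actual Jackson serializer code):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkReshape {

    // {"rel": r, "href": h}  becomes  {r: {"href": h}}
    static Map<String, Map<String, String>> reshape(String rel, String href) {
        Map<String, Map<String, String>> outer =
                new LinkedHashMap<String, Map<String, String>>();
        outer.put(rel, Collections.singletonMap("href", href));
        return outer;
    }

    public static void main(String[] args) {
        // The 'edit' href can now be looked up directly by key
        Map<String, Map<String, String>> link =
                reshape("edit", "http://example.org/res/1/edit");
        System.out.println(link.get("edit").get("href"));
    }
}
```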


So far so nice.

Now the final riddle was how to use the Jackson libraries that are already in use by the resteasy-jackson-provider. This was solved by adding respective entries to jboss-deployment-structure.xml, which ends up in the META-INF directory of the rhq.ear directory.

The content in this jboss-deployment-structure.xml then looks like this (abbreviated):


<sub-deployment name="rhq-enterprise-server-ejb3.jar">
    <dependencies>
        ....
        <module name="org.codehaus.jackson.jackson-core-asl" export="true"/>
        <module name="org.codehaus.jackson.jackson-jaxrs" export="true"/>
        <module name="org.codehaus.jackson.jackson-mapper-asl" export="true"/>
    </dependencies>
</sub-deployment>


AS7 is still complaining a bit at startup:


16:48:58,375 WARN [org.jboss.as.dependency.private] (MSC service thread 1-3) JBAS018567: Deployment "deployment.rhq.ear.rhq-enterprise-server-ejb3.jar" is using a private module ("org.codehaus.jackson.jackson-core-asl:main") which may be changed or removed in future versions without notice.


but for the moment the result is good enough - and as we do not regularly change the underlying container for RHQ, this is tolerable.

And of course I would be interested in an "official" solution other than just doubling those jars into my application (actually I think doubling may even complicate the situation, as (re-)using the Jackson jars that RestEasy inside AS7 uses guarantees that the versions are compatible).

Tuesday, February 26, 2013

RHQ 4.6 released


As I've mentioned before, the RHQ team has been very busy since RHQ 4.5.1 (and actually already before that) and has switched the application server it uses to JBoss AS 7.1. Directly after the switchover we have posted a first alpha version.


Now after more work and fixes, we are happy to provide the release 4.6 of RHQ, that has all the issues resolved that arose from the switch. Features of this release are:

  • The internal app server is now JBossAS 7.1.1
  • GWT has been upgraded to version 2.5
  • There is a new installer (this has also changed since the 4.6 alpha release)
  • The REST-Api has been enhanced
  • Korean translations have been added (contributed by SungUk Jeon)
  • Webservices have been removed
  • Building RHQ now requires Java7, but it will still run on Java6

See the full release notes for details. They also contain a list of commits.

You can download the release from source forge.


As mentioned above, the old installer is gone, so make sure to read
the wiki document describing how to use the new installer.

Maven artifacts are available from the JBoss Nexus repository and should soon also be available from Central.

We also like to say thank you to our contributors for this release:

  • Jürgen Hoffmann
  • Richard Hensman
  • SungUk Jeon


Please give us feedback, be it in Bugzilla, Mailing lists or the forum. Or just join us on IRC at irc.freenode.net/#rhq.

Tuesday, February 19, 2013

Best practice for paging in RESTful APIs (updated)?

In the RHQ-REST api, we return individual objects and also collections of objects. Some of those collections are rather small (number of platforms), while others can grow a lot (number of resources in total or number of alerts fired). For the latter it is advised to do some paging and not return the full result set in one go. Some of the reasons are:
  • Memory consumption in server and client
  • Bandwidth needed to transfer the data
  • Latency to transfer huge amounts of data over slow networks


Inside RHQ we have the concept of a PageList<?> where an internal PageControl object defines the page size and other criteria like sorting. The PageList then only contains the objects from a certain page. I think this is a pretty common setup.
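A minimal sketch of that idea (the real PageList/PageControl classes in RHQ carry more state, such as sort order and total collection size):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Paging {

    // Return the slice described by a zero-based page number and a page size
    static <T> List<T> page(List<T> all, int pageNumber, int pageSize) {
        int from = pageNumber * pageSize;
        if (from >= all.size()) {
            return new ArrayList<T>(); // past the end: empty page
        }
        int to = Math.min(from + pageSize, all.size());
        return new ArrayList<T>(all.subList(from, to));
    }

    public static void main(String[] args) {
        List<Integer> all = Arrays.asList(0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
        System.out.println(page(all, 1, 4)); // [4, 5, 6, 7]
        System.out.println(page(all, 2, 4)); // [8, 9]
    }
}
```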

And here is where my question comes:

What is the "best-practice" to represent such a PageList in a RESTful api? So far I have seen two major ways:
  1. Add a Link: header that contains the prev and next relations. This is what RFC 5988 suggests and what projects like AeroGear use. The advantage here is that the body still contains the "raw" data and not meta data. And for both cases of 'single' object and 'collection' the data is at the 'root' of the body. Also paging is available for HEAD requests.

    On the other hand, it may get a bit harder for some client code (JavaScript, jQuery) to access the header and make use of the paging links
  2. Put the prev and next relations in the body of the response next to the collection. This has the advantage that there is no need to parse the http header. The disadvantage is that the real payload is now shifted "one level down" for collections.


    I sort of see the paging links as meta-data and think that this should not be mixed with the payload. Now a colleague of mine said: "Isn't that a state change link for the collection like the 'rel=edit' for a single object?". This sounds odd, but can't be denied.

Actually I have also seen the use of cookies for sending the paging information mentioned, but that looks very non-transparent to me, so I am not considering it at all.

Just to be clear: I am explicitly talking about paging of collections and not about affordances of individual objects.

So are there established best practices? How do others do it?

If going for the Link: header: would people rather like to see multiple Link headers (see RFC 2616), one for each relation:

Link: <http://foo/?page=3>; rel='next'
Link: <http://foo/?page=1>; rel='prev'

or rather the combined way:

Link: <http://foo/?page=3>; rel='next', <http://foo/?page=1>; rel='prev'

that is listed in RFC 5988?

[update]

I just saw that URLConnection.getHeaderField(name) does not support the multiple Link: headers as it only returns the last occurrence:

If called on a connection that sets the same header multiple times with possibly different values, only the last value is returned.


While there may be other ways to access all the Link: headers, this is too obvious a pitfall, and it can be prevented by not using that style.
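One way around the pitfall is to request the combined header value and split it yourself. A rough sketch (real parsing needs to be more careful about quoting and commas inside URLs):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkHeaderParser {

    // Parse "<url1>; rel='next', <url2>; rel='prev'" into a rel -> url map
    static Map<String, String> parse(String headerValue) {
        Map<String, String> rels = new LinkedHashMap<String, String>();
        for (String part : headerValue.split(",")) {
            String[] pieces = part.split(";");
            if (pieces.length < 2) {
                continue; // not a <url>; rel=... pair
            }
            String url = pieces[0].trim().replaceAll("^<|>$", "");
            String rel = pieces[1].trim()
                    .replaceAll("^rel=", "")
                    .replaceAll("^['\"]|['\"]$", "");
            rels.put(rel, url);
        }
        return rels;
    }

    public static void main(String[] args) {
        Map<String, String> m =
                parse("<http://foo/?page=3>; rel='next', <http://foo/?page=1>; rel='prev'");
        System.out.println(m.get("next")); // http://foo/?page=3
        System.out.println(m.get("prev")); // http://foo/?page=1
    }
}
```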

Tuesday, January 29, 2013

When an annotation processor does not process...

In RHQ we have the rest-docs-generator that takes annotations from the code and turns them into xml. As you may have guessed from the last sentence this is implemented as a Java annotation processor and used to work quite well.

Yesterday I wanted to run it on the latest version of the code (we don't run it on every build, as it takes some time with all the backend processing) and it failed. Looking at the processor itself and its test runs did not reveal anything, as they continued to work.

As we use the maven processor plugin, I thought this might fail because we now use Java7 to build (but then, I had used that before too) and upgraded the plugin, but that did not help. In the end I switched to the maven compiler plugin, which spat out a ton of errors and stack traces. It turned out that one of the classes on the classpath had an unsatisfied dependency and the annotation processor had just silently "died" before.

After adding the dependency, the errors were gone, but the Processor.init() method was still not called and no processing happened. Looking through tons of output I found this:


Processor <hidden to protect the innocent> matches
[javax.persistence.PersistenceContext,
[……]
javax.ws.rs.Consumes,
javax.interceptor.Interceptors, com.wordnik.swagger.annotations.Api,
javax.ejb.Startup] and returns true.

This "returns true", together with the list of annotations I am interested in, means that this other processor that is now on the classpath (probably pulled in when we switched from AS4 to AS7) swallows all those annotations, so they are not passed to our processor.
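For illustration, a well-behaved processor returns false from process() so that later processors still see the same annotations (hypothetical example, not our actual generator):

```java
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.element.TypeElement;

// Hypothetical processor; the real one is org.rhq.helpers.rest_docs_generator.ClassLevelProcessor
@SupportedAnnotationTypes("javax.ws.rs.Path")
public class PoliteProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        // ... inspect the annotated elements and generate output here ...

        // false = "not claimed": the compiler keeps offering these annotations
        // to other processors. The rogue processor described above returned
        // true instead, claiming every annotation it matched.
        return false;
    }
}
```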

The solution in my case was to explicitly name the rest-docs-generator in the compiler plugin configuration and not to rely on the auto-discovery (I did that in the processor plugin already, but it looks like this had no effect in my case) :


<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.0</version>
    <configuration>
        <annotationProcessors>
            <processor>org.rhq.helpers.rest_docs_generator.ClassLevelProcessor</processor>
        </annotationProcessors>
        <proc>only</proc>
        <compilerArguments>
            <AtargetDirectory>${project.build.directory}/docs/xml</AtargetDirectory>
            <AmodelPkg>org.rhq.enterprise.server.rest.domain</AmodelPkg>
            <!-- enable the next line to have the output of the processor shown on console -->
            <!--<Averbose>true</Averbose>-->
        </compilerArguments>
        <!-- set the next to true to enable verbose output of the compiler plugin -->
        <!--<verbose>false</verbose>-->
    </configuration>
[…]


Note that to find the above "Processor .. matches .." output, the compiler plugin must be set to verbose.

I meanwhile also heard that a newer version of that "rogue" processor now only handles its own annotations.

Monday, January 28, 2013

Recap of RHQ @ LJCs first Meet a Project event

RHQ-logoLJC logo

Last Thursday I was in London at the London Java Community's (LJC) first "Meet a project" event.

Getting there started with an aborted take-off in Stuttgart. The plane accelerated on the runway and then all of a sudden braked hard. We exited onto the movement area, parked there for some minutes and then circled back onto the runway to finally take off. But anyway, I arrived safely and early enough in London.

The "Meet a Project" (MaP) event was held at the University College London campus. When I arrived a few attendees were already there and soon after Barry, the organizer joined too. We started in one room to explain "the rules" and then split into three rooms.

Me in the very last session of 6

Myself talking at the back table in the last of the six sessions.


To explain how MaP works, one can think of "speed dating for projects". There were six projects present and thus six groups have been formed, each sitting around a table. The project ambassadors (like myself) then spend 15 mins per table to present the project, explain about open source in general and give hints where and how the attendee may get involved and help the project.

As I did not know what to expect (and as this was the first incarnation of the MaP, no one really did), I created a small slide show to explain about RHQ, and had a handout prepared to give to interested attendees. For the individual sessions I always took the full 15mins. Almost all attendees were very interested and I distributed over 20 handouts.

After the sessions were over at around 8:30pm, we went to a hotel bar for some socializing, and then Manik, Davide, Sanne, Richard Warburton from jClarity and myself went to an Indian restaurant to finally have dinner.

I cannot yet tell if the event was a success in the sense that the RHQ project really gained new contributors. What I can tell is that the format of "speed dating for projects" felt really good to me, as the small groups allowed intensive sessions with direct feedback on whether the concepts were clear. With around 50 attendees I am happy to have given away 20 handouts. While socializing, a few attendees told me that they had never heard of RHQ before, so it was good to reach them. And one lady even switched tables to be able to listen to me before she had to leave early :)

Wednesday, January 23, 2013

Monitoring the monster

RHQ-logoJDF logo

The classical RHQ setup assumes an agent and agent plugins being present on a machine ("platform" in RHQ-speak). Plugins then communicate with the managed resource (e.g. an AS7 server) and ask it for metric values or run operations (e.g. "reboot").

This article shows an alternative way of monitoring applications, using the example of the Ticket Monster application from the JBoss Developer Framework.


The communication protocol between the plugin and the managed resource is dependent on the capabilities of that resource. If the resource is a java process, JMX is often used. In the case of JBoss AS 7, we use the DMR over http protocol. For other kinds of resources this could also be file access or jdbc in case of databases. The next picture shows this setup.

RHQ classic setup


The agent plugin talks to the managed resource and pulls the metrics from it. The agent collects the metrics from multiple resources and then sends them as batches to the RHQ server, where they are stored, processed for alerting, and can be viewed in the UI or retrieved via the CLI and REST-api.

Extending

The above scenario is of course not limited to infrastructure and can also be used to monitor applications that sit inside e.g. AS7. You can write a plugin that uses the underlying connection to talk to the resource and gather statistics from there (if you build on top of the jmx or as7 plugin, you don't necessarily need to write java-code).

This also means that you need to add hooks to your application that export metrics and make them available in the management model (the MBean-Server in classical jre; the DMR model in AS7), so that the plugin can retrieve them.

Pushing from the application

Another way to monitor application data is to have the application push data to the RHQ server directly. You still need a plugin descriptor in order to define the metadata in the RHQ server (what kinds of resources and metrics exist, what units the metrics have, etc.), but you only define the descriptor and don't write Java code for the plugin. This works by inheriting from the No-op plugin. In addition, you can deploy that descriptor as a jar-less plugin.

The next graphic shows the setup:

RHQ with push from TicketMonster


In this scenario you can still have an agent with plugins on the platform, but this is not required (but recommended for basic infrastructure monitoring). On the server side we deploy the ticket-monster plugin descriptor.

The TicketMonster application has been augmented to push each booking to the RHQ server as two metrics for total number of tickets sold and the total price of the booking (BookingService.createBooking()).


@Stateless
public class BookingService extends BaseEntityService<Booking> {

    @Inject
    private RhqClient rhqClient;

    public Response createBooking(BookingRequest bookingRequest) {
        […]
        // Persist the booking, including cascaded relationships
        booking.setPerformance(performance);
        booking.setCancellationCode("abc");
        getEntityManager().persist(booking);
        newBookingEvent.fire(booking);
        rhqClient.reportBooking(booking);
        return Response.ok().entity(booking)
                .type(MediaType.APPLICATION_JSON_TYPE).build();
    }
}

This push happens over a http connection to the REST-api of the RHQ server, which is defined inside the RhqClient singleton bean.

In this RhqClient bean we read the rhq.properties file on startup to determine whether there should be any reporting at all and how to reach the server. If reporting is enabled, we try to find the platform we are running on and, if the RHQ server does not know about it, create it. On top of the platform we create the TicketMonster instance. This is safe to do multiple times, as is the platform creation; I look for an existing platform where an agent might perhaps already monitor basic data like cpu usage or disk utilization.
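The configuration part of that startup logic boils down to something like this sketch (the property keys shown are made up, not the actual rhq.properties contents):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class RhqClientConfig {

    // Should we report at all? The key name is hypothetical.
    static boolean reportingEnabled(Properties props) {
        return Boolean.parseBoolean(props.getProperty("rhq.report-to", "false"));
    }

    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        // In the real bean this would be loaded from rhq.properties on the classpath
        props.load(new StringReader("rhq.report-to=true\nrhq.server=http://localhost:7080"));
        System.out.println(reportingEnabled(props)); // true
        System.out.println(props.getProperty("rhq.server"));
    }
}
```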

The reporting of the metrics then looks like this:


@Asynchronous
public void reportBooking(Booking booking) {

    if (reportTo && initialized) {
        List<Metric> metrics = new ArrayList<Metric>(2);

        Metric m = new Metric("tickets",
                System.currentTimeMillis(),
                (double) booking.getTickets().size());
        metrics.add(m);

        m = new Metric("price",
                System.currentTimeMillis(),
                (double) booking.getTotalTicketPrice());
        metrics.add(m);

        sendMetrics(metrics, ticketMonsterServerId);
    }
}


Basically we construct two Metric objects and then send them to the RHQ-Server. The second parameter is the resource id of the TicketMonster server resource in the RHQ-server, which we have obtained from the create-request I've mentioned above.

A difference from the classical setup, where the MeasurementData objects inside RHQ always have a so-called schedule id associated, is that in the above case we pass the metric name as it appears in the deployment descriptor and let the RHQ server sort out the schedule id.


<metric property="tickets" dataType="measurement"
displayType="summary" description="Total number tickets sold"/>
<metric property="price"
displayType="summary" description="Total selling price"/>


And voilà, this is what the sales created by the Bot inside TicketMonster look like:

Screenshot 2013 01 23 11 16 23
Bookings in the RHQ-UI


The display interval has been set to "last 12 minutes". If you see a bar, that means that within one time slot of 12 min / 60 slots = 12 sec there were multiple bookings. In this case the bar shows the max and min values, while the little dot inside shows the average (via the REST-api it is still possible to see the individual values for the last 7 days).

Why would I want to do that?

The question here is of course why would I want to send my business metrics to the RHQ server, that is normally used for my infrastructure monitoring?

Because we can! :-)

Seriously, such business metrics can also indicate issues. If e.g. the number of ticket bookings is unusually high or low, this can be a source of concern and warrants an alert. Take the example of e-shops that sell electronics, where someone made a typo and laptops normally sold at €1300 were suddenly offered at €130. Such news spreads quickly via social networks and sales triple over their normal numbers. Here, monitoring the number of laptops sold can be helpful.

The other reason is that RHQ with its user concept allows setting up special users that only have (read) access to the TicketMonster resources, but not to other resources inside RHQ. This way it is possible to give business people access to the metrics from monitoring the ticket sales.

Resource tree all resourcesResourcetree ticketmonster only user


On the left you see the resource tree below the platform "snert" with all the resources as e.g. the 'rhqadmin' user sees it. On the right side, you see the tree as a user that only has the right to see the TicketMonster server (™).

TODOs

The above is a proof of concept to get this started. There are still some things left to do:

  • Create sub-resources for performances and report their bookings separately
  • Regularly report availability of the TicketMonster server
  • Better separate out the internal code that still lives in the RhqClient class
  • Get the above incorporated into TicketMonster proper - currently it lives in my private github repo
  • Decide how to better handle an RHQ server that is temporarily unavailable
  • Get Forge to create the "send metric / … " code automatically when a class or field has some special annotation for this purpose. Same for the creation of new items like Performances in the ™ case.


If you want to try it, you need a current copy of RHQ master -- the upcoming RHQ 4.6 release will have some of the changes on RHQ side that are needed. The RHQ 4.6 beta does not yet have them.

Wednesday, January 16, 2013

RHQ 4.6 beta released


The RHQ team has been very busy since RHQ 4.5.1 (and actually already before that) and has switched the application server it uses to JBoss AS 7.1. Directly after the switchover we have posted a first alpha version.


Now after more work and fixes, we are happy to provide a beta version of RHQ 4.6, that has all the issues resolved that arose from the switch. Features of this release are

  • The internal app server is now JBossAS 7.1
  • GWT has been upgraded to version 2.5
  • There is a new installer (this has also changed since the 4.6 alpha release)
  • The REST-Api has been enhanced
  • Korean translations have been added (contributed by SungUk Jeon)


You can download the release from source forge.


This wiki document describes how to use the new installer.

The first version of the download unfortunately did not contain the Korean locale -- that is now fixed. If you have already downloaded the zip and do not need the Korean locale, you don't need to re-download.

Please try the release and give us feedback, be it in Bugzilla, Mailing lists or the forum.

AlertDefinitions in the RHQ REST-Api


I have in the last few days added support for alert definitions to the REST-Api in RHQ master.
Although this will make it into RHQ 4.6, it is not the final state of affairs.

On top of the API implementation I have also written 27 tests (for the alert part; at the time of writing this posting) that use Rest Assured to test the api.

Please try the API, give feedback and report errors; if possible as Rest Assured tests, to increase the
test coverage and as an easy way to reproduce your issues.

I think it would also be very cool if someone could write a script in whatever language that exports definitions and applies them to a resource hierarchy on a different server (e.g. from test to production).

Monday, January 14, 2013

Korean translations contributed to RHQ

Login screen


Thanks to SungUk Jeon we now have Korean translations of the RHQ UI. Those will first appear in the upcoming RHQ 4.6 release.

Dashboard
Individual resource


If Korean is not your default locale, you can switch to it by appending ?locale=ko to the url of the RHQ-ui, like http://localhost:7080/coregui?locale=ko.

Thanks a lot, SungUk

Thursday, January 10, 2013

Testing REST-apis with Rest Assured

The REST-Api in RHQ is evolving and I had long ago started writing some integration tests against it.
I did not want to do that with pure http calls, so I was looking for a testing framework and found one that I used for some time. I tried to enhance it a bit to better suit my needs, but didn't really get it to work.

I started searching again and this time found Rest Assured, which is almost perfect. Let's have a look at a very simple example:


expect()
    .statusCode(200)
    .log().ifError()
.when()
    .get("/status/server");


As you can see, this is a fluent API that is very expressive, so I don't really need to explain what the above is supposed to do.

In the next example I'll add some authentication


given()
    .auth().basic("user", "name23")
.expect()
    .statusCode(401)
.when()
    .get("/rest/status");


Here we add "input parameters" to the call, which are in our case the information for basic authentication, and expect the call to fail with a "bad auth" response.

Now it is tedious to always provide the authentication bits throughout all tests, so it is possible to tell Rest Assured to always deliver a default set of credentials, which can still be overwritten as just shown:


@Before
public void setUp() {
    RestAssured.authentication = basic("rhqadmin", "rhqadmin");
}


There are a lot more options to set as default, like the base URI, port, basePath and so on.

Now let's have a look on how we can supply other parameters


AlertDefinition alertDefinition = new AlertDefinition(….);

AlertDefinition result =
    given()
        .contentType(ContentType.JSON)
        .header(acceptJson)
        .body(alertDefinition)
        .log().everything()
        .queryParam("resourceId", 10001)
    .expect()
        .statusCode(201)
        .log().ifError()
    .when()
        .post("/alert/definitions")
        .as(AlertDefinition.class);


We start by creating a Java object AlertDefinition that we use for the body of the POST request. We define that it should be sent as JSON and that we expect JSON back. For the URL, a
query parameter with the name 'resourceId' and value '10001' should be appended.
We also expect that the call returns a 201 (created) and would like to see the details if this is not the case.
Last but not least we tell Rest Assured that it should convert the answer back into an object of type AlertDefinition, which we can then use to check constraints or to work with further.

Rest Assured offers another interesting, built-in way to check constraints with the help of XPath or its JSON counterpart JsonPath:


expect()
    .statusCode(200)
    .body("name", is("discovery"))
.when()
    .get("/operation/definition");


In this (shortened) example we expect that the GET-call returns OK and an object that has a body field with the name 'name' and the value 'discovery'.

Conclusion

Rest Assured is a very powerful framework for writing tests against a REST/hypermedia api. With its fluent approach and expressive method names it makes it easy to understand what a certain call is supposed to do and to return.

The Rest Assured web site has more examples and documentation. The RHQ code base now also has >70 tests using that framework.

Wednesday, January 09, 2013

A small blurb of what I am currently working on

I have not yet committed and pushed this to the repo and it is still fragile and most likely to change - and still I want to share it with you:


$ curl -i -u rhqadmin:rhqadmin -X POST \
http://localhost:7080/rest/alert/definitions?resourceId=10001 \
-d @/tmp/foo -HContent-type:application/json
HTTP/1.1 201 Created
Server: Apache-Coyote/1.1
Location: http://localhost:7080/rest/alert/definition/10682
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 09 Jan 2013 21:41:10 GMT

{
    "id": 10682,
    "name": "-x-test-full-definition",
    "enabled": false,
    "priority": "HIGH",
    "recoveryId": 0,
    "conditionMode": "ANY",
    "conditions": [
        {
            "name": "AVAIL_GOES_DOWN",
            "category": "AVAILABILITY",
            "id": 10242,
            "threshold": null,
            "option": null,
            "triggerId": null
        }
    ],
    "notifications": [
        {
            "id": 10432,
            "senderName": "Direct Emails",
            "config": {
                "emailAddress": "enoch@root.org"
            }
        }
    ]
}


In the UI this looks like:

General view
Conditions
Notifications


Other features like dampening or recovery are not yet implemented.

To be complete, the content of /tmp/foo looks like this:


{
"id": 0,
"name": "-x-test-full-definition",
"enabled": false,
"priority": "HIGH",
"recoveryId": 0,
"conditionMode": "ANY",
"conditions": [
{
"id": 0,
"name": "AVAIL_GOES_DOWN",
"category": "AVAILABILITY",
"threshold": null,
"option": null,
"triggerId": null
}
],
"notifications": [
{
"id": 0,
"senderName": "Direct Emails",
"config": {
"emailAddress": "enoch@root.org"
}
}
]
}