Tuesday, August 24, 2010

Ambiguous documentation and my BMT pain

For the project I'm working on I have an EJB that operates in a Bean Managed Transaction (BMT). One of the beauties of a BMT is that you can extend your transaction timeout beyond the server's default JTA timeout for long-running DB operations. Sometimes you just have to shift a ton of data around with a minimal amount of domain logic, and a BMT suits this nicely.

However I feel there is some ambiguity in the docs for UserTransaction's setTransactionTimeout method that has caused me a lot of grief.

Modify the timeout value that is associated with transactions started by the current thread with the begin method.

This leaves two possibilities in my mind:

  1. Start a transaction, then set its timeout.

  2. Set the timeout for all transactions started by the UserTransaction, then begin transactions when you want to.


The gotcha is that while both are reasonable interpretations of the API docs, only the second works for me (caveat: I've only tested this on Oracle's OC4J as part of Fusion 10G; other app servers may behave differently).

Needless to say, I took the first option, then wondered why I was getting timeouts consistent with the transaction timeout being the server's JTA timeout value, not the value I set on the transaction (after starting it). After a lot of digging, checking that the container was behaving itself and not starting a transaction when it shouldn't, and making sure I was getting access to the correct resources, option two hit me like a brick. Flipping two lines of code, and it all worked perfectly.
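To make the two interpretations concrete, here's a minimal, hypothetical sketch. The stub below is not a real JTA implementation; it just mimics the behaviour I observed on OC4J, where the timeout in force is whatever was configured at the moment begin() is called (the 30-second "server default" is illustrative):

```java
// Stub simulating the observed container behaviour: the timeout is
// captured when begin() is called, so setting it afterwards has no
// effect on the already-running transaction.
class StubUserTransaction {
    private int configuredTimeout = 30;  // illustrative server JTA default (seconds)
    private int activeTimeout = -1;      // timeout bound to the running transaction

    void setTransactionTimeout(int seconds) {
        configuredTimeout = seconds;     // affects future begin() calls only
    }

    void begin() {
        activeTimeout = configuredTimeout; // timeout is fixed here
    }

    int activeTimeout() {
        return activeTimeout;
    }
}

public class BmtTimeoutDemo {
    public static void main(String[] args) {
        // Interpretation 1: begin, then set the timeout -- ignored.
        StubUserTransaction ut1 = new StubUserTransaction();
        ut1.begin();
        ut1.setTransactionTimeout(600);
        System.out.println(ut1.activeTimeout()); // prints 30 (server default)

        // Interpretation 2: set the timeout, then begin -- honoured.
        StubUserTransaction ut2 = new StubUserTransaction();
        ut2.setTransactionTimeout(600);
        ut2.begin();
        System.out.println(ut2.activeTimeout()); // prints 600
    }
}
```

Flipping those two calls is exactly the two-line change that fixed it for me.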

When dealing with API doc ambiguities, the thought that "hey I'm reading this wrong" doesn't often enter your head until the very end, when there's blood on the desk from the crack in your skull.

Very annoying!

Wednesday, August 11, 2010

Which injection framework to use?

A recent debate has arisen among my colleagues about which dependency injection framework to use: Spring vs. Guice. I like the idea of annotations, and of making the compiler do all the hard work of type checking. But I don't think annotations are the solution to all our problems, as I briefly discussed earlier. I don't see how a purely annotation-based approach gives us the testing and environment flexibility we need. Anyone who's built more than a web app hanging off the back of Google's App Engine knows how important that is.

Granted, I don't know Guice very well, so there may be a way around my reservations about a purely annotation-based approach.

Either way the needle lies.

Wednesday, August 4, 2010

Love that injection

The EJB 3 specification greatly simplified the world of EJBs by borrowing ideas from the Spring camp, the most powerful of which is the idea of POJOs coupled with annotations if you are living in a >= Java 5 world.

I've been playing around with Spring 3 lately, and I'm starting to see the value again in having an XML configuration/deployment descriptor in my EJB work. In all the annotation hype around EJB 3, I wonder if we haven't lost sight of the flexibility that text-based configuration can bring us when injecting resources into our beans. Sure, XML can be tedious, but with the amount of great XML tooling available, is that a good reason not to use the XML options? We get autocomplete, validation and syntax highlighting out of the box. I personally use Eclipse, but other IDEs have the same capability.

Unfortunately the annotation vs XML debate mostly spirals into personal preference. Having the configuration for an external resource in the code is great - if it rarely changes. If you have constant change (say, on a per-environment basis), perhaps a text-based approach is better. Text-based configuration also makes it easier to set up scenarios for testing.

I recently had a problem where I had to change the JDBC driver for a DataSource due to environment issues. The ORM work is done by JPA. The annotation on the EntityManager points to a particular persistence unit. Great, because that's hardly ever going to change; and if we do change the persistence unit, it may have an impact on the rest of the code's behaviour, so going into the source is worth it. However, there are two different persistence unit configurations - one for production (in the container) and the other for unit testing (out of the container). These are configured in the persistence.xml, and the code is oblivious. Just as it should be.
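A hedged sketch of what such a two-unit persistence.xml can look like - the unit names, JNDI location and JDBC properties here are illustrative, not from my actual project, and the exact property names vary by JPA version and provider:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">

  <!-- In-container unit: the container supplies the DataSource and JTA. -->
  <persistence-unit name="appPU" transaction-type="JTA">
    <jta-data-source>jdbc/SomeDS</jta-data-source>
  </persistence-unit>

  <!-- Out-of-container unit for unit tests: a local JDBC connection,
       configured via provider properties. -->
  <persistence-unit name="appTestPU" transaction-type="RESOURCE_LOCAL">
    <properties>
      <property name="javax.persistence.jdbc.driver" value="org.hsqldb.jdbcDriver"/>
      <property name="javax.persistence.jdbc.url" value="jdbc:hsqldb:mem:test"/>
    </properties>
  </persistence-unit>

</persistence>
```

The bean code references a persistence unit by name and never knows which environment it's running in.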

Unfortunately the wheel is reinvented too many times. Are we really doing anything new just because we need configurable code? I admire an engineer's ability to solve the problem - that's what we get paid for. Maybe bundle a properties file in the JAR, find it on the classpath and load it; maybe there's a property we can then use to do a JNDI lookup to get a handle on a DataSource. But why would you do that when you have the <resource-ref> tag available to you in the deployment descriptor?

<session>
  <ejb-name>SomeBean</ejb-name>
  <resource-ref>
    <res-ref-name>${propertyName}</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <injection-target>
      <injection-target-class>org.foo.SomeBean</injection-target-class>
      <injection-target-name>dataSource</injection-target-name>
    </injection-target>
  </resource-ref>
</session>
The beauty of course is that ${propertyName} can be substituted by your build framework; the Ant <expandproperties> filter is great for this. You're also within the spirit of the framework, and hopefully your code is more maintainable, with no custom loading and no lookups. With the hand-rolled approach, on the other hand, you may have the same requirement across multiple beans and end up replicating your custom loading code across all of them. Ugly!! If you're worried about people not being able to trace what's going on, place a comment over the class attribute: "This is configured in the deployment descriptor". That of course is a no-brainer, because good developers always document their code for future readers ;).
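For the substitution itself, a hedged sketch of an Ant target - the target name, file paths and properties file are illustrative - that copies the descriptor and expands ${propertyName} on the way through:

```xml
<!-- Load per-environment properties, e.g. propertyName=jdbc/SomeDS -->
<property file="env/${env}.properties"/>

<target name="filter-descriptors">
  <copy todir="build/META-INF" overwrite="true">
    <fileset dir="src/META-INF" includes="ejb-jar.xml"/>
    <filterchain>
      <!-- Replaces ${...} tokens in the copied files with Ant property values -->
      <expandproperties/>
    </filterchain>
  </copy>
</target>
```

Run with something like `ant -Denv=prod filter-descriptors` and the packaged descriptor carries the right JNDI name for that environment.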

On a technical note, there is an unfortunate pitfall to using deployment descriptor features like <resource-ref> that can frustrate developers and make them reach for their own custom solution. Say within a bean you have:

@Resource(name="jdbc/SomeDS")
private javax.sql.DataSource dataSource;
What you're doing here is essentially two things:

  1. Requesting a DataSource object at the JNDI location jdbc/SomeDS

  2. Requesting that the reference be assigned to the class attribute dataSource

I've seen plenty of examples recently where the <injection-target> element has been forgotten. Step 1 is executed, but not step 2, so dataSource is never injected, leading to a NullPointerException. Developers love those.
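One way to soften this pitfall (a hedged sketch, not from the original post): fail fast at deploy time instead of getting a NullPointerException on first use. In the real bean the check method would be annotated @PostConstruct so the container calls it right after injection; the annotations are shown as comments here so the sketch compiles without the Java EE APIs on the classpath.

```java
public class SomeBean {
    // Would carry @Resource(name = "jdbc/SomeDS") in the real bean.
    private javax.sql.DataSource dataSource;

    // Would carry @PostConstruct in the real bean, so the container
    // runs it immediately after injection completes.
    void checkInjection() {
        if (dataSource == null) {
            throw new IllegalStateException(
                "dataSource was not injected; check the <injection-target> "
                + "element in the deployment descriptor");
        }
    }
}
```

A clear IllegalStateException naming the missing descriptor element beats a bare NPE several stack frames away from the cause.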

My current thinking is about how we can leverage technology like EJB 3 and Spring to cleanly and efficiently solve clients' needs. The first onus falls on guys like me, the developers, to know our material. The second is that if we don't know the answer to a question (for example, how can I change the JNDI location of my injected DataSource at build time?) then we need to do our homework. I'm just as guilty as the next guy of failing to do this. Doesn't mean I shouldn't get a rap over the knuckles for it.

Wednesday, July 28, 2010

Upgrading to Helios

I use Eclipse as my primary IDE (cue the IntelliJ fan bois). Over the years my personal install of Eclipse has become bloated and cluttered, so with Helios I thought I'd start afresh.

A pleasant experience. Going for simplicity, I chose Eclipse Classic and added just the features I need, rather than ending up with a bloated monstrosity. I have enjoyed the Eclipse Marketplace being a "one stop shop" for those needed plugins.

The result: a clean, quick-to-start IDE that's light on memory.

Friday, July 23, 2010

100% Nonsense

Meanwhile in totalitarian land the government has gone completely bonkers.

If Labor wants to boost its chances of getting re-elected it should dump the filter. If they can orchestrate a coup in a night, why can't they dump the stupid thing?!

Sunday, July 4, 2010

g-Cpan is my shepherd

The electronic TV guide for MythTV sucks! At least for Australia. I got put onto a little Perl program called Shepherd. Basically, "Shepherd provides reliable, high-quality Australian TV guide data by employing a flock of independent data sources." This is the good stuff! I read on about how Shepherd works and was impressed enough to try it. The install instructions were pretty easy to follow. I installed version 1.4.0; your mileage may vary with another version. If you're a Gentoo user you'll need the following mandatory packages installed for Shepherd:

  • dev-lang/perl

  • dev-perl/libwww-perl

  • media-tv/xmltv

  • perl-core/IO-Compress

  • dev-perl/DateManip

  • dev-perl/Algorithm-Diff

  • dev-perl/Digest-SHA1

  • dev-perl/File-Find-Rule


The optional dependencies are listed on the Installation page, under Non-Distribution Specific. They correspond to the following Gentoo packages.

  • dev-perl/Archive-Zip

  • dev-perl/DateTime-Format-Strptime

  • dev-perl/Crypt-SSLeay

  • dev-perl/GD

  • dev-perl/HTTP-Cache-Transparent

  • dev-perl/HTML-Parser

  • dev-perl/HTML-Tree

  • dev-perl/IO-String

  • dev-perl/XML-DOM

  • dev-perl/XML-Simple

  • perl-core/Digest-MD5

  • perl-core/Storable


However a mandatory dependency, List::Compare, doesn't have a Portage ebuild. The Shepherd page linked to a tool I'd never heard of before: g-cpan. g-cpan sits on top of CPAN, but builds ebuilds in your overlay and installs the Perl module in a Gentoo-esque way. If no overlay is set in your /etc/make.conf, the ebuilds go into /var/tmp/g-cpan. My overlay is set to /usr/local/portage because I don't like the thought of ebuilds ending up in a temp directory. Running
$ g-cpan -i List::Compare
installed the needed Perl module. Very nice. I couldn't find an ebuild for the optional JavaScript.pm module, so I used g-cpan for that as well.

Note: for the JavaScript.pm install, make sure that you set up spidermonkey in the right way, otherwise the module won't install properly.

$ emerge spidermonkey
$ mkdir /usr/lib/MozillaFirefox/
$ ln -s /usr/include/ /usr/lib/MozillaFirefox/

While running the Shepherd install, I encountered a few errors.

  1. No mysql.txt. I'm not sure how this file is created; I think it's a mythfrontend config file specifying how to connect to the backend. Its contents look like:

    DBHostName=$HOSTNAME
    DBUserName=$USERNAME
    DBPassword=$PASSWORD
    DBName=mythconverg
    DBPort=0
    My suspicion is that since I'm using XBMC, this never got created. I created one in ~/.mythtv for the user I run XBMC (and Shepherd) as, and reran Shepherd.

  2. Creation of the tv_grab_au symlink. If your user doesn't have sudo rights - which is a valid security setup - this will fail. It's not hard to do yourself, but I wonder whether Shepherd should assume sudo rights.

  3. Addition of the Shepherd cron job to the crontab. This failed due to lack of sudo rights. To do it yourself (as root):
    $ crontab -e
    [Add crontab entry and save file]
    If you didn't get the crontab output from Shepherd, I put
    56 * * * * nice /usr/bin/mythfilldatabase 
    --graboptions '--daily'
    into my file (all as one line).


If you ran shepherd without installing the optional modules, you can rerun the install process using:
$ ~/.shepherd/shepherd --configure
Shepherd hasn't run yet (the --history flag tells me so), so I'll wait an hour or so. Overall I'm pretty impressed with Shepherd so far. It's well documented, the installation process is easy, and it provides good information for making decisions. Keep up the good work, Shepherd dev(s).

Friday, July 2, 2010

MythBox is the key - not Asia

Apologies to any readers who don't play Risk for the post title.

I'm up to the stage where I need to get the recording functionality working. That was the major selling point for my wife. So I bought an AVerMedia DVB-T 777 second hand from eBay on the recommendation of a mate. The reason he has it, and suggested it to me, is that it plays nicely with Linux, as the Philips SAA7134 chipset is well supported in the Linux kernel. I'm currently using version 2.6.31-gentoo-r6, with this config:

Device Drivers
--> Multimedia Support
<*> Video For Linux
<*> DVB for Linux
[*] Video capture adapters -->
<*> Philips SAA7134 support
<*> Philips SAA7134 DMA audio support
<*> DVB/ATSC Support for saa7134 based TV cards

Compile, reboot, and done. TV card supported.

I emerged media-tv/mythtv-0.22_p23069 and followed the instructions on the MythTV wiki for configuring the backend.

When running mythtv-setup, if all you have is a keyboard: the left and right arrows select options within combo boxes, the up and down arrows move between combo boxes, buttons, text fields, etc., and ENTER/RETURN selects whatever is highlighted. You'll get the hang of it.

When selecting your capture card, make sure you select DVB as the card type. In the case of my card, because it has multiple inputs (there's an S-Video adapter), I didn't change the card type, so I spent a lot of time figuring out why I couldn't scan for channels. The DVB "section" of your card is the antenna port.

Once you quit mythtv-setup and run mythfilldatabase, if you get an error about not being able to create a QObject/widget, start your backend again. It's a weird error with not much information to go on, and it took me a lot of googling to figure out. The cause is that the backend can't be connected to, so starting it solves the problem.

Once I got MythTV configured, it was time to check out how to integrate it with XBMC. The inbuilt support for connecting to a MythTV backend is buggy, causes XBMC to lock up, and isn't very feature-rich.

However, MythBox is the saviour of this piece. It has everything the wife would want. So I'm testing the recording abilities now, and can already watch live TV. Once I do some more tweaking I should have a decent PVR in the HTPC.

I really do need to get a new HD though.