Thought Deprived Musings
Random musings of topics by some bloke that does things.

To new or not to new? (2014-02-17)

I've been getting into <a href="http://nodejs.org/">NodeJS</a> a lot lately, as a lot of the problems I've been solving are IO bound: grab some data from different sources, do a little bit of processing, return it to the user. It's also been allowing me to work on my Javascript skills and slowly move towards some Functional Programming, which is on my TODO list.
<p>
Call me old, boring, whatever, but I like patterns. When one has solved the same problem a million times over, why reinvent the wheel? So as I've been working with NodeJS, I've been thinking about how to perform some patterns in Javascript. I've been getting into <a href="http://www.typescriptlang.org">Typescript</a> a bit too for Domain Driven Design, as I find working with a classical OOP paradigm is cleaner for modelling one's domain, and Typescript has some nice syntactic sugar.
<p>
As part of my adventures into Javascript I've read Crockford's <a href="http://shop.oreilly.com/product/9780596517748.do">Javascript: The Good Parts</a>, I'm working through <a href="http://www.manning.com/cantelon/">NodeJS In Action</a>, and I'm finding the blog over at <a href="http://howtonode.org">HowToNode</a> helpful with its practical "how tos".
<p>
One big area of confusion for me is the use of the <b>new</b> keyword when doing OOP in Javascript.
<p>
Crockford writes: "Functions that are intended to be used with the new prefix are called constructors. By convention, they are kept in variables with a capitalized name. If a constructor is called without the new prefix, very bad things can happen without a compile-time or runtime warning, so the capitalization convention is really important. Use of this style of constructor functions is not recommended. We will see better alternatives in the next chapter." He then writes about <a href="http://javascript.crockford.com/prototypal.html">Prototypal Inheritance</a>. A good explanation of the same concept, and how it relates to NodeJS <a href="http://howtonode.org/prototypical-inheritance">is over at the HowToNode site by Tim Caswell</a>.
<p>
Caswell echoes Crockford's concerns: "I don't like the <b>new</b> keyword, it overloads the meaning of functions and is dangerous. If we were to say <i>frank = Person("Frank")</i>, then <i>this</i> inside the function would now be the global <i>this</i> object, not the new instance! The constructor would be overriding all sorts of global variables inadvertently. Also bad things happen if you return from a constructor function."
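A minimal sketch of the failure mode Caswell describes (non-strict code; the <i>Person</i> constructor is just an illustration):
<pre>
// Calling a constructor without 'new' binds 'this' to the global object.
function Person(name) {
  this.name = name; // without 'new', this writes to the global object
}

var frank = Person("Frank"); // oops, forgot 'new'
console.log(frank);          // undefined - the function returned nothing
console.log(name);           // "Frank" - leaked into the global scope

var jane = new Person("Jane");
console.log(jane.name);      // "Jane" - a proper instance
</pre>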
<p>
So one might think that <b>new</b> is a bad idea, and we should all use less dangerous (and possibly less confusing) syntax. Most of the alternatives use property copying, as seen in either Crockford's examples or Caswell's.
<p>
Then I read the following code snippet in "NodeJS In Action":
<pre>
var events = require('events');
var channel = new events.EventEmitter();
</pre>
<p>
If one also looks at the output of the <a href="http://www.typescriptlang.org/Playground/">Typescript Playground</a> for "simple inheritance", we see the usage of the <b>new</b> keyword on the JS side (I can understand its usage on the TS side, as TS is mimicking classical inheritance).
<p>
Looking more into what the <b>new</b> keyword <a href="http://trephine.org/t/index.php?title=Understanding_the_JavaScript_new_keyword">actually does</a>, we find that it's really just setting some properties and setting up the prototype chain.
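A rough sketch of that behaviour (an approximation for illustration, not the exact spec algorithm):
<pre>
// Roughly what 'new Ctor(args...)' does under the hood:
function fakeNew(Ctor /*, ...args */) {
  var args = Array.prototype.slice.call(arguments, 1);
  var instance = Object.create(Ctor.prototype); // hook up the prototype chain
  var result = Ctor.apply(instance, args);      // run the constructor with 'this' bound
  // if the constructor returns an object, that wins; otherwise use the instance
  return (typeof result === 'object' && result !== null) ? result : instance;
}

var frank = fakeNew(Person, "Frank"); // roughly equivalent to new Person("Frank")
</pre>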
<p>
So is the <b>new</b> keyword that bad, and should we be using it? I seem to be getting mixed messages, and I personally don't see the problem with it. If you're TDDing your code then you're going to pick up bugs related to forgetting to use the keyword. The alternatives seem to duplicate the behaviour of the keyword's implementation, and you have to include the source for your particular alternative (for example, Crockford and Caswell provide similar alternatives; which one do you include in your app?). NodeJS makes this a bit easier with its <a href="http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor">util.inherits</a> method.
<p>
Does the <b>new</b> keyword make the code harder to understand because it obscures Javascript's prototypal nature and lures unsuspecting readers into troubles and snares? Crockford does write: "JavaScript is conflicted about its prototypal nature. Its prototype mechanism is obscured by some complicated syntactic business that looks vaguely classical. Instead of having objects inherit directly from other objects, an unnecessary level of indirection is inserted such that objects are produced by constructor functions." Do the alternative property copying functions read better, and help make the intent of the program clearer? If you look at the Typescript Playground's <i>__extends</i> method, I think not.
<p>
To new or not to new? Currently I'm leaning towards using the <b>new</b> keyword. I'm going to have to think on this some more, to develop a pattern for my own apps, but I'm open to suggestions/comments.

Using DeltaWalker with Mercurial for diffs (2013-11-15)

I bought a bundle of apps in a deal, and got <a href="http://www.deltopia.com/compare-merge-sync/macosx/">DeltaWalker</a> as part of the bundle. It's a reasonably decent diff tool, with the ability to do folder merging as well (rsync anybody ;) ). In terms of interface it's not bad: nice layout, colours, etc. I haven't used it much because most of my diff work is handled by my JetBrains tools as I'm developing.
<p>
For my current project I'm doing a reasonably complicated merge between two feature branches to bring them into alignment. I'm not a fan of <a href="http://martinfowler.com/bliki/FeatureBranch.html">Feature Branches</a> as they impede Continuous Integration (<a href="http://martinfowler.com/bliki/FeatureToggle.html">FeatureToggles</a> are a better approach), and the fact that my merge is complicated is living proof as to why they should be avoided. However due to particular circumstances a Feature Branch was unavoidable.
<p>
With part of the codebase there's not a 1-1 mapping of files. For example, a class that was an inner class in Branch A is now in its own Java file in Branch B. So it turned out to be easier to see what changed in a file in Branch A and port the concepts/semantics of the changes across to Branch B, instead of just relying on a syntactical/textual merge. This worked really well because even though the implementation had diverged quite significantly, the interfaces were reasonably unchanged, so my Branch A test changes merged to Branch B very easily, which would help inform me if I stuffed up the merge of a file. Another win for TDD.
<p>
Mercurial's diff commands were powerful enough to show me the changes for a file from the <a href="http://thoughtdeprivedmusings.blogspot.com.au/2013/11/finding-last-merge-point-between.html">last merge point</a> between Branch A and Branch B to the head of Branch A. However since I like my graphical diff tools, I wanted to try out DeltaWalker to see how good it was.
<p>
DELTAWALKER DOCUMENTATION SUCKS - It doesn't actually tell you HOW to integrate with your SCM tools. This is the WORST form of marketing and almost made me toss the tool. Fortunately I was able to figure it out.
<p>
When you select Hg in DeltaWalker and point it to your Hgrc file, it updates the file for you. It would be nice if it explained that it does that! I found out about it because I version control my Hg config (with Mercurial of course), and running a diff on the file for another update I made showed me the changes.
<pre>
[extdiff]
cmd.dw = /Applications/DeltaWalker.app/Contents/MacOS/hg
[merge-tools]
dw.executable = /Applications/DeltaWalker.app/Contents/MacOS/hg
dw.args = $local $other $base -merged=$output
</pre>
So one can then use Hg's <a href="http://mercurial.selenic.com/wiki/ExtdiffExtension">extdiff</a> functionality to load up the diffs.
<pre>
$ hg dw -r $lastmerge:$branch_head file
</pre>
It actually works well, but the guys behind the product need to update their docs!!

Finding the last merge point between branches in Mercurial (2013-11-15)

When merging between my development and release branches, I like to know what the last merge point between the two was, so that I can do tasks like compiling Release Notes or seeing if some code needs to be merged (based on tags).
<p>
I developed a little Hg <a href="http://mercurial.selenic.com/wiki/AliasExtension">alias</a> that finds the last merge node between two branches.
<pre>
lastmerge = !hg log -r "children(ancestor($1, $2)) and merge() and ::$2"
$ hg lastmerge BRANCH_1 BRANCH_2
</pre>
In the example above, BRANCH_1 and BRANCH_2 are <a href="http://mercurial.selenic.com/wiki/Bookmarks/">bookmarks</a>. The alias makes use of <a href="http://www.selenic.com/hg/help/revsets">Hg Revsets</a> to find the latest merge node from BRANCH_1 into BRANCH_2.
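Reading the revset from the inside out (just a breakdown of the expression above, not new functionality):
<pre>
ancestor($1, $2)    # the greatest common ancestor of the two branch heads
children(...)       # the changesets immediately following that point
and merge()         # keep only merge changesets
and ::$2            # ... that are also ancestors of BRANCH_2's head
</pre>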
Getting Symfony2 form names working with AngularJS expressions (2013-09-04)

The default form field names generated by <a href="http://symfony.com/doc/2.2/book/forms.html">Symfony2's FormBuilder</a> appear not to work with <a href="http://docs.angularjs.org/guide/forms">AngularJS' form validation</a> and directives like ng-show, unless one is proficient enough in Javascript to deduce other syntactical options. The reason is that all the documentation examples for AngularJS expressions use Javascript's dot object notation, which can be confusing about what's really going on. A skilled JS reader might guess the answer to this issue before the end of this post. Judging by the amount of trouble people have had with this issue on various forums, though, the answer might not be so obvious. It had me stumped for a while.
<p>
So the conventional way to generate an HTML form via Symfony2 is:
<p>
In a Symfony Controller, generate the form:
<pre>
class MyController extends Controller {
    /**
     * @Route("/")
     * @Method("GET")
     * @Template
     */
    public function indexAction() {
        $myEntity = new MyEntity();
        $form = $this->createForm(new MyEntityType(), $myEntity);
        return array("form" => $form->createView());
    }
}
</pre>
Using the following entity (Doctrine) class:
<pre>
class MyEntity {
    ...
    /**
     * @var string
     *
     * @Assert\NotBlank(message="Please provide a contact name")
     * @ORM\Column(name="contactName", type="string", length=50)
     */
    private $contactName;
}
</pre>
Using the following type for form building:
<pre>
class MyEntityType extends AbstractType {
    ...
    public function getName() {
        return 'myEntityType';
    }
}
</pre>
Using the form.name variable in a template (ie: <form name="{{ form.name }}">) we get something like this (yes I'm using <a href="http://twitter.github.com/bootstrap/">Twitter Bootstrap</a> in my project):
<pre>
<div class="controls controls-row">
<input id="myEntityType_contactName" class="ng-pristine ng-invalid ng-invalid-required" type="text"
ng-model="myEntity.contactName" maxlength="50" required="required" name="myEntityType[contactName]">
<span ng-show="myEntityType.myEntityType[contactName].$error.required" style="display: none;">
</div>
</pre>
I wrote some <a href="http://symfony.com/doc/2.2/book/templating.html">Twig</a> code to add the ng directives to the generated HTML. Those blocks use the form values generated by the builder. For example, Symfony renders the <b>name</b> attribute on the <input> tag and my extension code uses the same name string as part of the <b>ng-show</b> generation.
<p>
If one enters/deletes text the <span> is not shown/hidden.
<p>
Doing some research, I found some StackOverflow posts <a href="http://stackoverflow.com/questions/8416783/symfony2-form-component-creating-fields-without-the-forms-name-in-the-name-att?lq=1">Symfony2 Form Component - creating fields without the forms name in the name attribute</a> and <a href="http://stackoverflow.com/questions/13384056/symfony2-1-using-form-with-method-get/13474522">Symfony2.1 using form with method GET</a>. If we follow their advice and change MyEntityType to have getName() return an empty string, the resulting HTML is:
<pre>
<div class="controls controls-row">
<input id="contactName" class="ng-pristine ng-invalid ng-invalid-required" type="text" ng-model="myEntity.contactName"
maxlength="50" required="required" name="contactName">
<span ng-show="formName.contactName.$error.required" style="">
</div>
</pre>
Of course I had to manually give the form a name in the template ie: <form name="formName">
<p>
As pointed out in the first StackOverflow post above; it's the formatting of the name in <i><a href="https://github.com/symfony/symfony/blob/2.2/src/Symfony/Component/Form/Extension/Core/Type/FormType.php">Symfony/Component/Form/Extension/Core/Type/FieldType.php</a></i> (or <i><a href="https://github.com/symfony/symfony/blob/2.3/src/Symfony/Component/Form/Extension/Core/Type/BaseType.php">Symfony/Component/Form/Extension/Core/Type/BaseType.php</a></i> if you're on Symfony 2.3) that's causing the problem.
<p>
I did not like those suggestions as a solution, but they did help me narrow down what was going on. I did not like the idea of clobbering how Symfony generates forms, because we would be adapting our server code (by altering the <i>getName()</i> method) to deal with a client side technology issue. To me that seems wrong.
<p>
The reason the field name was causing problems is that AngularJS evaluates the string contents of the ng-show attribute as an <a href="http://docs.angularjs.org/guide/expression">expression</a>, and in ng expressions <a href="http://blog.tomaka17.com/2012/12/random-tricks-when-using-angularjs/">you can use square brackets to create arrays/objects</a>. Consequently the square brackets that Symfony uses in the field name cause expression evaluation issues in AngularJS.
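A minimal sketch of the parsing problem, using the names from the examples above and evaluated as plain Javascript rather than by Angular:
<pre>
var scope = {
  myEntityType: {
    'myEntityType[contactName]': { $error: { required: true } }
  }
};

// The unquoted form is parsed as scope.myEntityType.myEntityType indexed
// by a variable named contactName, which doesn't exist:
// scope.myEntityType.myEntityType[contactName]   -> blows up / undefined

// With bracket notation and a quoted key, the square brackets are just
// characters inside a string:
scope.myEntityType['myEntityType[contactName]'].$error.required; // true
</pre>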
<p>
The solution is to use <a href="http://stackoverflow.com/questions/4968406/javascript-property-access-dot-notation-vs-brackets">Javascript's property style accessors for objects (square brackets) over the dot object notation</a>. Thus we can have the square brackets in a string and there is no evaluation issue.
<pre>
<div class="controls controls-row">
<input id="myEntityType_contactName" class="ng-pristine ng-invalid ng-invalid-required" type="text"
ng-model="myEntity.contactName" maxlength="50" required="required" name="myEntityType[contactName]">
<span ng-show="myEntityType['myEntityType[contactName]'].$error.required" style="display: none;">
</div>
</pre>
It all works. Just had to update my Twig code to generate the correct JS notation.
<p>
Alas, I can't take credit for the correct solution, as someone else reported it as an <a href="https://github.com/angular/angular.js/issues/1201">issue against AngularJS</a>. I guess we think about, and write, our code in one particular style for so long that we forget about alternatives. However this post helps pull together a variety of threads about this topic, so hopefully the next person who has the problem just has to read this blog post.
Remapping RaspBMC Remote Keybindings (2013-02-03)

I've updated my HTPC to be a <a href="http://www.raspberrypi.org/">Raspberry Pi</a> that uses <a href="http://www.raspbmc.com/">RaspBMC</a> as the OS. The really hard part was integrating my existing RAID with the device (over USB) to serve up my media; that's a subject for another post.
<p>
Something that didn't quite work out of the RaspBMC box (and to be fair, A LOT did work) was the Context Menu for XBMC. I like to call it the "right click" option, as if I were using a mouse. This was because my remote didn't have the default button that XBMC was expecting. As mentioned <a href="http://thoughtdeprivedmusings.blogspot.com.au/2010/03/cat-devlirc0-brain-getting-lirc-working.html">in a previous post</a>, I have a generic RC6 IR receiver that I had used with the mceusb2 kernel driver. Looking at the remote config file for XBMC (/opt/xbmc-bcm/xbmc-bin/share/xbmc/system/keymaps/remote.xml), the ContextMenu is mapped to the <title> key. Looking in the LIRC config file for XBMC (/opt/xbmc-bcm/xbmc-bin/share/xbmc/system/Lircmap.xml), for an mceusb remote the "title" is mapped to the "Guide" key. Looking at my <a href="http://thoughtdeprivedmusings.blogspot.com.au/2010/04/writing-config-for-lirc.html">trusty remote diagram</a> there was a Guide button, but why didn't it bring up the context menu?
<p>
Turns out that RaspBMC was using a different remote config in <i>Lircmap.xml</i>. Using <b>irw</b> to see the remote codes, I get:
<pre>
16d 0 KEY_EPG devinput
16d 0 KEY_EPG_UP devinput
</pre>
Searching <i>Lircmap.xml</i> for <i>devinput</i> I found a different <remote> profile, with no mapping for <title>. So I added:
<pre>
<title>KEY_EPG</title>
</pre>
and the Context Menu is displayed.
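For context, the fragment sits inside the devinput remote profile; a sketch of the surrounding structure (abbreviated, only the <title> line is the actual addition):
<pre>
<lircmap>
  <remote device="devinput">
    ...
    <title>KEY_EPG</title>
  </remote>
</lircmap>
</pre>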
<p>
Handy little util, that <b>irw</b>, for figuring out remote config.

Reflections on writing an Eclipse plugin (2012-12-05)

For a personal project I'm working on, the language choice is C++. Now that's probably going to make a lot of readers cringe, because C++ has some bad rep, especially with the black magic voodoo that goes by the name of Templates. However due to the requirements of the app, C++ was a good fit. It's not too bad if you apply <a href="https://en.wikipedia.org/wiki/SOLID">SOLID</a> principles, design patterns, and above all <b>good naming conventions</b>. IMO a major contributor to C++'s bad reputation is the extremely poor names people use (the STL doesn't help either). We can see good design concepts being ported back to C++, most notably in the tools. Libraries like <a href="https://qt-project.org/">Qt</a> make it almost like having a JDK at one's fingertips (but with a decent UI system). Did you know you can do <a href="https://en.wikipedia.org/wiki/Test-driven_development">TDD</a> in C++?
<p>
I use the <a href="http://www.eclipse.org/cdt/">Eclipse CDT</a> for my IDE, which, when coupled with the Autotools plugin, makes it really easy to edit and build my code with most of the features I get out of the Java tools in Eclipse. You don't get everything, but it does a good job.
<p>
One thing I was really missing was a way to generate test classes quickly. I use the <a href="https://code.google.com/p/googletest/">Google C++ testing framework</a> as it's the best I've found that matches JUnit or TestNG. It's a bit clunky given the state of Java test tooling (its style is very much on par with JUnit 3), but given C++ doesn't have annotations, it does really well with some macro magic. One can still use <a href="https://en.wikipedia.org/wiki/Behavior-driven_development">BDD style</a> testing with a few compromises to get your code to compile. What I was missing was a "Create test class" option to keep my productivity from getting bogged down in making sure my #includes were correct.
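For the unfamiliar, a minimal sketch of the kind of test class I wanted to generate (MyClass and the header names are placeholders, not output from the plugin):
<pre>
#include "gtest/gtest.h"
#include "MyClass.h"

// A test fixture, ala a JUnit 3 TestCase subclass.
class MyClassTest : public ::testing::Test {
protected:
    virtual void SetUp() {
        // per-test setup, like JUnit 3's setUp()
    }

    MyClass instance;
};

// Each TEST_F runs against a fresh fixture instance.
TEST_F(MyClassTest, doesSomethingUseful) {
    EXPECT_TRUE(instance.doSomething());
}
</pre>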
<p>
I've been wanting to write an Eclipse plugin for a while, so I thought I'd use this itch as the chance. It turned out to be a lot <b>easier</b> than I had thought.
<p>
I downloaded the RCP version of Eclipse as it comes with all the development tools for making plugins. I personally like to keep my Eclipse installs separate for different purposes. About the only common plugin I use across all of them is my Mercurial plugin. I installed the CDT on top of the RCP so that I had access to all the CDT JARs since I was developing a CDT focused plugin.
<p>
Starting on writing the plugin was really simple. Of course I googled around first and came across a few helpful links:
<ul>
<li><a href="http://meri-stuff.blogspot.com.au/2012/04/writing-eclipse-plugins-tutorial-part-1.html">http://meri-stuff.blogspot.com.au/2012/04/writing-eclipse-plugins-tutorial-part-1.html</a></li>
<li><a href="https://cvalcarcel.wordpress.com/category/software-development/eclipse-development/">https://cvalcarcel.wordpress.com/category/software-development/eclipse-development/</a></li>
</ul>
The thing that made the plugin development easy IMHO was the <a href="http://www.java-forums.org/blogs/eclipse/attachments/2908d1328498308-start-writing-plug-eclipse-plugin-editor.png">plugin configuration editor</a>. I was dreading writing XML and getting it wrong (try debugging that), or writing properties to configure the build (and finding reference documentation to do what I want). Thankfully I didn't get any obscure errors; the editor did it all, and took the fear out of the project. Coupled with some extra helpful documentation on <a href="http://www.vogella.com/articles/EclipseExtensionPoint/article.html">plugin terminology</a> and what <a href="http://www.eclipse-tips.com/tips/3-top-10-mistakes-in-eclipse-plug-in-development">not to do</a>, I got my <a href="http://www.vogella.com/articles/EclipseWizards/article.html">first wizard up</a> and running very quickly.
<p>
Since I was making a CDT based plugin, I needed to set my <a href="http://eclipse-tips.com/tutorials/26-api-tooling-tutorial">API baseline</a>. This helps the compiler figure out if you've been doing naughty things. This is a real strength, because by letting you know when you've violated API boundaries, you're able to future proof your plugin against changes to internals; or at least tell (via your plugin conf) that you're willing to risk the consequences.
<p>
Since I was wanting to create a special instance of the "New Class" wizard option, I subclassed the relevant wizard and wizard page classes. The one big frustration was that protected methods in these classes used private members, or private inner classes, which meant that to override the behaviour (but still keep the parent behaviour) some nasty reflection hacks were needed. I think the design moral there is that if you are going to allow your methods to be overridden, make the data accessible (most likely through protected accessors). Either that or mark the methods as final so that the compiler lets you know that "you can't do that Dave". Unfortunately these CDT classes violate the whole <a href="https://en.wikipedia.org/wiki/Open/closed_principle">Open Closed</a> principle. Other than that it was actually pretty easy to debug my way through the "New Class" wizard to get an idea of how I could create a specialised class geared towards being a test class, and to write the behaviour.
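For the curious, the hack looks something like this (the field name is a hypothetical example, not necessarily what CDT calls its member):
<pre>
import java.lang.reflect.Field;

// Reach a private member declared on a parent class via reflection.
private static Object readPrivateField(Object target, Class<?> declaringClass,
        String fieldName) throws Exception {
    Field field = declaringClass.getDeclaredField(fieldName);
    field.setAccessible(true); // bypass the private modifier
    return field.get(target);
}

// usage inside the subclassed wizard page, e.g.:
// Object folder = readPrivateField(this, NewClassCreationWizardPage.class, "fSourceFolder");
</pre>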
<p>
The bulk of the code was written on the train going to/from the <a href="http://www.yowconference.com.au/">YOW 2012 conference</a>, so hopefully that conveys just how easy it was once I got going. It's a credit to the PDE guys that banging out a plugin is that easy in terms of the infrastructure code required to hook it in; I mainly had to work on application logic, which is how it should be.
<p>
The final result is the <a href="https://bitbucket.org/quasarprogramming/googletestingframeworkgenerator/wiki/README">Google Testing Framework Generator</a>, so if you can make use of it, please download it and make suggestions/patches. I have a few other ideas for code this plugin could generate, but for now I'm just going to TDD my way faster through some new features for my C++ project.

Spring 3 MVC naming gotcha (2012-12-03)

As the title suggests, I got bitten by a naming issue in Spring 3 (3.0.5) MVC. I wasted a fair amount of time on it, so I don't want to do it again. The relevant material pertaining to this issue may be in the <a href="http://static.springsource.org/spring/docs/3.0.5.RELEASE/reference/">Spring reference documentation</a>; however, hopefully this blog post will be more concise for anybody else who has had this problem.
<p>
In my MVC app I have a <i>ProductsSearcher</i> class that has two dependencies that are injected via the constructor. In my Spring config XML I have the following snippets:
<pre>
<context:component-scan base-package="org.myproject" />
<bean class="org.myproject.ProductsSearcher"> ... </bean>
</pre>
The <i>component-scan</i> tag is needed to get Spring to <a href="http://static.springsource.org/spring/docs/3.0.5.RELEASE/reference/beans.html#beans-classpath-scanning">process the @Controller annotation to get the bean registered as an MVC controller</a>. However, when bootstrapping the application context, the deployment failed:
<pre>
ERROR [DispatcherServlet] Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name
'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0':
Initialization of bean failed; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean with name
'productsSearcher' ... Instantiation of bean failed; nested exception is
org.springframework.beans.BeanInstantiationException: Could not instantiate bean class
[org.myproject.ProductsSearcher]: No default constructor found; nested exception is
java.lang.NoSuchMethodException: org.myproject.ProductsSearcher.<init>()
</pre>
So Spring was trying to instantiate an instance of my class via reflection and borking because there was no default constructor on the class. Why would it do that when I have a bean definition in the config?
<p>
I wasn't willing to change the constructor because good design dictates that if the class can't function without a dependency, then the dependency <b>must</b> be injected at construction time; otherwise the object will be in an illegal state (not "fully formed").
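For reference, the shape of the class in question (the dependency types here are placeholders; the point is only that both are constructor-injected):
<pre>
@Controller
public class ProductsSearcher {

    private final ProductCatalogue catalogue;   // hypothetical dependency
    private final SearchService searchService;  // hypothetical dependency

    public ProductsSearcher(ProductCatalogue catalogue, SearchService searchService) {
        this.catalogue = catalogue;
        this.searchService = searchService;
    }
}
</pre>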
<p>
I thought it might be something to do with the component scanning as component scanning also involves processing configuration annotations (a good discussion of the XML tags that deal with annotation processing <a href="http://stackoverflow.com/questions/7414794/difference-between-contextannotation-config-vs-contextcomponent-scan">can be found on Stack Overflow.</a>).
<p>
The problem lies in the name/id (or lack thereof) of the bean instance. Component scanning does involve Spring creating an instance of the stereotype (ie: my @Controller), but the instance is registered under a derived name. Given the good old Java Bean naming conventions, that name is <i>productsSearcher</i>. By default whenever I create stereotyped bean config I give the bean this style of name. This time I forgot :( :(. Adding in the <i>id</i> fixed the deployment issue:
<pre>
<bean id="productsSearcher" class="org.myproject.ProductsSearcher"> ... </bean>
</pre>
So my bean definition overrides the default, the correct bean definition is used, and the bean is instantiated.
<p>
Given that most of us would name our beans using the Java Beans naming convention (since as Spring users it's been beaten into us), this problem had never occurred to me. If anybody has any insight into the workings of the Spring mechanisms behind component scanning, please offer up your wisdom in the comments. For the rest of us, remember to name your stereotyped beans properly.

Don't nail your domain to your infrastructure (2012-10-19)

The client I'm currently working for is in the process of some major rework for a core application, which touches multiple points in the business. From a modelling perspective this of course means that different <a href="http://www.infoq.com/minibooks/domain-driven-design-quickly">Domain</a> projects are evolving, with different Bounded Contexts and supplementary persistence and service projects to access/manipulate Domain resources. While this is a great effort by the people involved to separate concerns, there comes a time when all that domain work has to touch specific infrastructure resources - namely the dreaded database.
<p>
The problem arises when more than one application wants to use the common code, but for different target environments: Application1 wants to target DatabaseA while Application2 needs to target DatabaseB. I'm all for having a default configuration of beans provided in the libraries to ease integration with applications, with appropriate reference documentation to understand the bean configuration and how to replace it. IMHO this is something that helps make Spring stand tall in the field.
<p>
However, when a default configuration gets injected with a resource bean (eg: a DataSource), then for an application to use that configuration with a different resource (ie: a DataSource pointing to a different DB), the entire configuration has to be copied and pasted into the application context and jiggled around so that the correct resource bean is injected at ApplicationContext boot time.
<p>
<b>Ugly and painful!</b>
<p>
What should happen is that a resource factory bean should be specified in the domain libraries, with applications providing their own implementation.
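A minimal sketch of the idea in Spring XML (all bean names and classes here are illustrative): the domain library wires its beans against a resource it never defines, and each application contributes its own.
<pre>
<!-- In the domain library's context: only a reference to a dataSource
     bean by name; no concrete DataSource is defined here. -->
<bean id="productsRepository" class="org.myproject.JdbcProductsRepository">
    <constructor-arg ref="dataSource"/>
</bean>

<!-- In Application1's context: the concrete resource, pointing at DatabaseA. -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="url" value="jdbc:mysql://databaseA/products"/>
</bean>
</pre>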
<p>
This is of course another reason why <a href="http://thoughtdeprivedmusings.blogspot.com.au/2010/08/context-dependent-annotations.html">wiring annotations in code</a> is ultimately self defeating as this sort of architecting becomes <a href="http://thoughtdeprivedmusings.blogspot.com.au/2012/05/perhaps-spring-should-move-to-its-own.html">more difficult if not impossible to do.</a>
<p>
Fortunately we're Agile, so it shouldn't be too hard to clean up.

Making your architecture scream! (2012-07-23)

<i><b>[Note: This is a repost of an entry I wrote for my company blog at http://www.intunity.com.au/2012/07/23/making-your-architecture-scream/]</b></i>
<p>
I've mentioned before that I'm a big fan of <a href="http://en.wikipedia.org/wiki/Robert_Cecil_Martin">Robert "Uncle Bob" Martin</a>. One of his latest projects is the Clean Coders "Code Cast", which I've been watching with some Intunity+Client colleagues on a client site. Uncle Bob does have his quirky moments, but it's great to see the discussions that the material brings about within the team I'm working in.
<p>
The latest episode, <a href="http://www.cleancoders.com/codecast/clean-code-episode-7/show">on architecture</a>, made me think of another project I worked on, which was so tightly coupled to the DB that it was impossible to reuse the domain model (a mate of mine had a similar problem that I've <a href="http://thoughtdeprivedmusings.blogspot.com.au/2010/08/context-dependent-annotations.html">written about before</a>). This hampered development and led to some ugly code.
<p>
UB's point in the episode was that the architecture should scream at you what it does. His example was of a Payroll system that should scream "accounting software" at the coders, not the technologies (eg: web, database, etc) used. Following on from that idea, my thoughts turned to the practice of <a href="http://domaindrivendesign.org/resources/what_is_ddd/">Domain Driven Design</a>, where we want to place as much logic (or behaviour) as possible into the domain model. After all, it's the place of the Dog class to tell us how it <b>speaks()</b>. So that means you should develop your domain model first (that which screams the initial design at readers) and bolt on other features to meet requirements (with those requirements preferably defined with <a href="http://www.mountaingoatsoftware.com/topics/user-stories">User Stories</a>, in my opinion). The core of the architecture is the model, but with the ability to evolve the model and the other features of the application. This is great for the business because it can get more features released! The model can be exposed/manipulated in a SOA by moving representations of the model (resources) around (ala REST) - or not. Developers haven't been bound to a particular technology, which would hamper their ability to write useful code for the business.
<p>
However there are some business decisions that can cripple the ability of a team to achieve this outcome, where the architecture consequently whimpers. Usually that revolves around the purchasing decisions made by the business. In UB's episode the DB was a "bolt on" to the architecture. The DB was used to store information that was given "life" in the domain model. It can be added at the <a href="http://www.codinghorror.com/blog/2006/10/the-last-responsible-moment.html">last responsible moment</a> to reduce the risk to the business that their purchase will be in vain. The focus of the application was on the model, not the DB product. So what happens to your architecture (and all the benefits of a rich domain model) if your business engages a vendor to consult on your business' architecture whose business model is selling a product (or licenses for a product)?
<p>
Like UB, I like to see an architecture that screams at me what it's doing - that way I know that it's benefiting our clients, and that I can punch out feature, after feature, after feature.

Applying TDD to guitar amps (2012-06-25)

<i><b>[Note: This is a repost of an entry I wrote for my company blog at http://www.intunity.com.au/2012/06/25/applying-tdd-to-guitar-amps/]</b></i>
Here at Intunity we have a really big focus on quality. Making robust software that is flexible enough to meet clients' ever-changing needs (in a short amount of time) is hard. That's why we have all those <a href="http://en.wikipedia.org/wiki/Three-letter_acronym">TLA</a>s floating around, and other Agile terms that make non-Agile practitioners wonder why we're talking about Rugby all the time. However once good engineering principles soak in, you find that you tend to start applying those principles to other areas of your life. For myself it was applying <a href="http://en.wikipedia.org/wiki/Test-driven_development">TDD</a> to another problem - I thought my guitar amp had broken. Understandably that made me a really cranky product owner and I wanted it fixed yesterday!
<p>
If you don't know how guitar amps work, there are a few major sections: the input, the pre-amp, the effects loop, the power-amp and finally the cab (speakers). They're all connected in the order listed. Like hunting for a bug in a multi-layered application, I could have dived in and started randomly probing (or breaking out the soldering iron), but just as in software I would probably have wasted a lot of time. So I "wrote a test". I plugged in my trusty electric, tapped my line out jack (the signal is the same as that fed into the power-amp) and played something that I knew would expose the problem.
<p>
The result - it's all good.
<p>
I didn't take it as a failure though, as I'd halved my search area with one test and ~1 minute of time. Now the power-amp is a dangerous section (if you're not careful you can get electrocuted), so before I put my affairs in order, I ran another test by plugging the amp into the cab and repeating the same input. The problem remanifested itself. Jiggling the cable around to test for (and eliminate) a dodgy cable, I found that the cable was loose in the cab socket. I opened the back of the cab and bent one of the metal contacts back so that it touched the cable better, and the problem was fixed - with no electrocution risk.
<p>
All up: 3 tests and 10 minutes.
<p>
To be honest the story's pretty boring (unless you're a guitar gear nerd), but it highlights the value of TDD in that it changes how you think about problems (and solve them). By testing you're understanding (or defining) your system. You're learning about the problem domain and setting up conditions so that you can prove a problem (if you're bug hunting) and then fix it. It's also pretty obvious, when you're dealing with a piece of physical equipment that will cost you money if you break it, that you might want to be careful and not rush in. The result, ironically, is that by being "slower" you're "faster", because you're proceeding with knowledge.
<p>
One's software (or a client's software) is the same. By practising TDD you have less waste, faster code, better understanding, and that end product that all developers crave - a working piece of software.
<p>
Applying TDD to a more physical problem only helped me see more of the great advantage in the practice. After all, I do like saving time and money, and our clients do too :D

Ubuntu continues to impress me (2012-05-31)

The story so far is that my wife's been needing a new laptop. We've slowly been watching her Dell (an el cheapo she got ~6 years ago) die: a USB port here, the wireless doing random things there. We've been impressed at how long it held out, but I'm <strike>cheap</strike> a bargain hunter, and wanted to get something at a good price that's also Linux compatible. Here in Oz, Lenovo's having a <a href="http://shopap.lenovo.com/au/en/deals-and-coupons/outlet-clearance-laptops/">massive clearance sale</a>. I'd been contemplating a Lenovo because they are pretty rock solid.
<p>
This is where the story gets better. I'm sure that every Linux user has had difficulty with hardware drivers: either the company doesn't make them (eg: some Dell Printers), or the drivers are buggy (and being closed source you can't fix them - or google for someone that has), and you just end up pulling out your hair.
<p>
<a href="http://www.canonical.com/">Canonical</a> as part of the efforts to reach out to the business world have been certifying hardware configurations. They've been in <a href="http://blog.canonical.com/2011/05/09/canonical-and-lenovo-collaboration/">cahoots with Lenovo</a> to certify laptop models, and low and behold the model I was interested in is <a href="http://www.ubuntu.com/certification/hardware/201103-7447/">100% Ubuntu compliant</a>.
<p>
So I picked up a ThinkPad Edge E520 1143AJ1. Nice i5 processor, integrated graphics (for the occasional game of StarCraft that she likes to play), anti-glare screen (which I had to pay extra for on my MacBook Pro cause Apple are jerks).
<p>
It's perhaps not the most stylish of machines, but we can live with that. I was surprised by the weight; it feels lighter than its stated weight, but that's a win I think. The first thing I noticed when booting it up was how <b>SLOW</b> it was!!! This machine is meant to be snappy, Lenovo, but you weighed it down with a tonne of bricks. It didn't come with recovery disks, but a recovery partition. Not wanting to lose the 15GB, I burned that to a series of DVDs, then booted up Ubuntu 12.04.
<p>
I'm not sure how you can make an installer better over time (I was impressed with earlier versions), but the one thing I noticed especially was the installation of packages onto the system while I was still filling out configuration dialogs. Multitasking to the max. The partitioning dialog has received some polish since I last did a Ubuntu install, so I was able to carve up the HD properly. As a note, the <a href="https://help.ubuntu.com/">Ubuntu documentation</a> has also been improved, so I was able to quickly find the recommended partition sizes and adjust according to need.
<p>
Out popped a new computer! Booted it up and everything worked (not surprising really). Configuring system settings was a breeze, and I've noticed some UI similarities to OSX which I don't mind but I wonder how the lawyers feel about that. The boot time was so quick (even for a non SSD) that I don't think I ever want to see Windows 7 again.
<p>
So far all is well, but I really wanted to commend Canonical for the constant innovations they're making in Linux/user integration. I'm a Gentoo hacker (when I can be these days) and love to play around with config to get an optimal system. But my wife is your "typical" user. She's not going to be rendering video, where some additional CPU flags to ffmpeg during compile can make a significant difference to transcoding time. She's not going to be compiling code, or doing any other task that requires significant CPU resources. It's email, web browsing, office documents - the usual suspects. I expect the most significant thing the CPU will do is Javascript processing, or the odd game here and there. So having a Linux distro that's easy to obtain, installs quickly and just <b>works</b> out of the box is just so <b>awesome</b>, and inspiring for the future of computing. Hopefully Ubuntu can continue to make serious inroads into communities and thus convert more people to the joys of Linux.

Perhaps Spring should move to its own DSL (2012-05-16)

I've been musing on and off about the differences between <a href="http://www.springsource.org/">Spring's</a> XML configuration and its Java annotations. I've debated this issue with colleagues, and the only answer they're able to give me (reading between the lines) as to why one should use annotations over the XML config boils down to "I don't like XML".
<p>
I have to agree somewhat. However XML is a fact of programming life, and while it shouldn't be abused as a configuration language (Spring, JEE, etc), there's sufficient IDE/editor support to make using XML not that painful. You should use the XML config for production code over annotations.
<p>
I've <b>never</b> worked on a project where a logical layout of XML config declaring beans and other services was not quickly understandable (as well as being easily updatable). Contrast that with configuration in code, where classes are annotated (which is not the same as a bean def) and where "auto black magic" is applied; that has led me to spend a long time digging through code searching for the magic wand.
<p>
I've been writing my own DSL for a personal project using <a href="http://www.antlr.org">Antlr</a> and thus have been influenced by <a href="http://en.wikipedia.org/wiki/Terence_Parr">Parr's</a> philosophy that humans shouldn't have to grok XML. I'm not as hardcore as Parr, but I understand the idea. Spring's XML config is fantastic, but should it be written in XML anymore? We've come a long way in tools, and Antlr is ubiquitous in the Java world. There's no reason why SpringSource couldn't publish the grammar to allow third party tools to be written to process the Spring DSL. Using tools like <a href="http://www.eclipse.org/Xtext/">Xtext</a>, editors could be knocked up to provide the same feature set that the Spring IDE tools provide for editing the XML config (I quite like the autocomplete feature when specifying the class attribute for a bean tag). It would also end the war between XML haters and those who see the value in the text based config. "I hate XML" would no longer be an acceptable answer.
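To make the musing concrete, a purely hypothetical taste of what such a DSL might look like (invented syntax for illustration only; nothing like this exists):
<pre>
// the familiar XML config, with the angle brackets boiled away
component-scan org.myproject

bean productsSearcher : org.myproject.ProductsSearcher {
    constructor-args: [catalogue, searchService]
}
</pre>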
Flipping out over flipping the bit (2012-01-12)

I like <a href="http://www.8thlight.com/our-team/robert-martin">Uncle Bob Martin</a>. We're nearly finished his book <a href="http://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882">Clean Code</a> in our work study group and I only disagree with ~5% of what he says. ;)
<p>
However he's flipped out over <a href="http://blog.8thlight.com/uncle-bob/2012/01/11/Flipping-the-Bit.html">Flipping the Bit</a>. Referencing an article by <a href="http://www.simple-talk.com/dotnet/.net-framework/unit-testing-myths-and-practices/">Tim Fischer</a>, UB has decided that because Fischer calls into question the value of doing Unit Tests 100% of the time, Fischer doesn't value testing (I think he does; he just has bad design which makes his testing life harder).
<p>
Unit Tests don't automatically equal <a href="https://en.wikipedia.org/wiki/Test-driven_development">TDD</a>; the T of course stands for <b>T</b>ests, but as we know, there are many levels of testing: unit, integration, end to end, etc. I'm all for TDD - so strongly in fact that I've occasionally bordered on being a zealot. Here I totally agree with UB's points about testing (TDD to be specific) as bringing higher quality, "cheaper" code into existence. Fischer has it <b>totally wrong</b> that "[unit] tests are little-used for the development of enterprise applications." In my organisation we write Unit Tests all the time (as part of TDD), and they provide a high degree of feedback and value to the project. His point about the cost (purely monetary) of writing Unit Tests is true from a mathematical perspective; however it's a cost worth paying.
<p>
Point (1) of UB's list is totally justified in being there. Reading Fischer's post, one can easily think that he hasn't grasped the point of TDD, because his examples talk about writing the tests after the implementation. UB is right to smack Fischer on the nose about this one.
<p>
Sadly there are kernels of truth woven into both these posts, and I think UB missed the nugget in Fischer's post, which leads to UB's second (erroneous) point:
<p>
<i>Unit tests don't find all bugs because many bugs are integration bugs, not bugs in the unit-tested components.</i>
<p>
Why is he wrong? Because Unit Tests != TDD. The jump there was astonishing to my mind. Superman couldn't have jumped that gap better! We do have to justify the existence of test code - but to ourselves, not to higher ups or the Unit Test Compliance Squads. What value are these tests adding? How are they proving the correctness of my program and creating/improving my design/architecture?
<p>
If you're writing an Adapter (from the <a href="http://www.growing-object-oriented-software.com/">Growing Object Oriented Software Guided By Tests</a> book) then Unit Tests add little value to ensuring that the Adapter works correctly, because the Adapter is so tightly coupled to the Adaptee that you'd have to essentially replicate the Adaptee in fakes and stubs. Here any bugs that happen in the Adapter will probably not show up in Unit Tests, because those bugs are signs that the developer probably misunderstood the behaviour of the Adaptee for a particular scenario, and would therefore have coded the fake/stub to be incorrect. You've got a broken stub, an incorrect test, but a green light.
<p>
An example is a DAO. It is designed to abstract away access to the DB and is tightly coupled to the underlying DB technology (JPA, JDBC, etc). You don't want to Unit Test that. Integration tests add far more value/feedback with less code to maintain. Add in an in-memory DB and you've got easy, fastish tests that have found bugs in my code more times than I'd like. That's in line with Fischer's gripes: the costs of the tests outweigh the benefits.
<p>
Where Fischer goes seriously wrong is that he doesn't add <b>all</b> forms of testing into his money calculations, and doesn't realise that if you don't do TDD properly (where Unit Tests do play an integral part) you'll spend more money.
<p>
His pretty picture is flawed in that SomeMethod() is business logic (a Port) that uses data from several sources. However a Port <b>should never</b> get the data directly; it <b>should always</b> go via an Adapter ("Tell don't ask", SOLID, etc all show how good design ends up with this result). Hence SomeMethod() can be Unit Tested to the Nth degree, covering every scenario conceivable, because the Adapters can be mocked (which we own and hopefully understand), while the Adapters are Integration Tested. Otherwise the amount of code required to set up what is essentially a Unit Test (because we're focused on the SomeMethod() unit) for every scenario for SomeMethod() becomes prohibitive. Developers being developers will slack off and not write them. If they do, the bean counters will get upset because the cost of developing/maintaining the tests increases as the tests are brittle. If there is a bug, where is it located? SomeMethod(), the third party "systems", the conduits in between? So you spend more time and money tracking down a problem.
<p>
This is where Fischer throws the baby out with the bathwater.
He has bad design.
<p>
I'm surprised that Uncle Bob didn't pick up on this, and instead focused (rightly) on Fischer's points about the cost side of not writing Unit Tests, which devolved (wrongly) into a rant about not writing tests at all.
<p>
TDD is the way to go (one should flip the bit for that), but Unit Tests are not always beneficial (eg: for Adapters) and can bring little ROI; instead the Integration Tests should be written first, with the Adapter being implemented to pass those tests. Having said that, if you're throwing Unit Tests out altogether you've got a seriously flawed design.

Eclipse moving forwards (2011-11-29)

Most Java devs (and others too) have a love/hate relationship with <a href="http://www.eclipse.org">Eclipse</a>. Many a <s>flame war</s> debate has been had on the subject.
<p>
From my own personal experience, I think Eclipse is moving in the right direction. Helios and Indigo both feel snappier, and the installation/upgrading of plugins is easier.
<p>
I had to upgrade my work instance of Eclipse, and even though there were conflicting versions of different plugins, it was easier to resolve than in previous versions of Eclipse.
<p>
Some of the other language editors could do with some love (PHP for example), but overall I'm finding my productivity is increasing with the later versions of Eclipse, and I have to wrestle with it less.

Server configuration with Mercurial (2011-06-22)

I've been playing a lot lately with <a href="http://mercurial.selenic.com/">Mercurial</a>, and in my opinion it's the best SCM around. I've also been administering some of my servers (actually <a href="http://www.rackspace.com/cloud/">Rackspace VMs</a>) and ran into the age-old problem that sys admins have always had: keeping the server config synchronised.
<p>
The problem in a nutshell is that you install a set of applications (through yum or apt-get or whatever) and configure them. However you then run into the problems of config propagation, versioning/history, rolling back to a known configuration, etc. I've seen a few sys admins roll their own solution, usually involving rsync and a lot of logging.
<p>
My solution was to use Hg to do all the heavy lifting, with a wrapper bash script (knocked up very quickly) that invokes Hg to version configuration files. It maintains knowledge about where each file came from by copying the file to a path relative to the repository directory.
For example, if a user edits <i>/etc/hosts</i>, the file will reside in the repository at <i>$REPOS_HOME/etc/hosts</i>.
<p>
How it works is that you run
<pre>$ editconf <file></pre>
and the script does the following:
<ol>
<li>Resolves the absolute path of the file (using a realpath bash script that a friend knocked up)</li>
<li>Checks if the file exists in the repository
 <ol>
 <li>If the file doesn't exist, it is copied and added to the repository</li>
 </ol>
</li>
<li>Drops through to the user's editor (defaults to vim)</li>
<li>Copies the file to the repository when the user exits the editor</li>
<li>Attempts to commit the file</li>
</ol>
If the file is being created for the first time (omitting the initial check), the script is smart enough to add it first. Mistakes are fixed by leaving an empty commit message (most SCMs won't commit of course) and reverting the file.
<p>
Branches can be made, merged, and state can be pushed around various servers with very little effort.
<p>
I mentioned this approach to some people and they thought it was a nifty idea and asked me to share my scripts. They can be found on <a href="https://bitbucket.org/">BitBucket</a> under the nesupport[1] <a href="https://bitbucket.org/nesupport/systools/overview">SysTools project</a>. They're licensed under the <a href="https://www.mozilla.org/MPL/">MPL</a>, and since they were knocked up in a hurry, patches/feedback are always welcome. Further instructions are found in the scripts themselves (if further action is required).
<p>
Further work could include the automated sharing of repository state (cron job) and synchronising what's in the repo with what's on the filesystem.
<p>
[1] The code was developed for a project I'm working on with somebody else who agreed to open source our sys admin scripts, of which editconf is a part.

Integration is the mess that no one wants to clean up (2011-05-30)

The title for this post came as a rather tongue-in-cheek comment by the sys admin at my workplace. However I think this is a rather accurate description of a personal project that I've been working on for the past six months. This project was designed as an educational project into different technologies that I've wanted to play with, so a lot of the decisions were guided by that. Along the way I've thought about different issues, found different tips and tricks, and thought I'd share them.
<p>
The goal of this project is an integration piece between a website (through RSS) and social media sites like Facebook, MySpace, and Twitter (with all the links fed back to the original site). The goal was to not have to replicate data across the multiple sites, as well as give me an excuse to play with new toys.
<p>
The runtime environment is <a href="https://code.google.com/appengine/">Google App Engine</a> as this app would run infrequently and GAE is free. I also have wanted to write a GAE app for a while, so this seemed like a good idea. The language is Java since that's my primary development language and I don't know Python.
<p>
The build tool is <a href="http://ant.apache.org/">Ant</a> as I despise Maven. I use Maven at work, and I find it to be inflexible, bloated and just plain annoying.
It does have some good ideas however, and given Ant's import abilities one can write generic build tasks that implement good build practices without all the pain. I did consider other build tools like <a href="http://www.gradle.org/">Gradle</a>, but I found that after thinking about what I wanted to do for a while, I was able to code the build declaratively, with Ant being smart enough to handle all the heavy lifting. I'm considering Open Sourcing the Ant build library that I've accumulated when I've cleaned it up a little bit.
<p>
Maven also provides dependency management, however I personally find <a href="http://ant.apache.org/ivy/">Ivy's</a> dependency management to be more mature, cleaner, simpler and easier to configure/use than Maven's.
<p>
Discussions about build tools can often get people a bit hot under the collar, and I found <a href="http://javaposse.com/java-posse-341-roundup-11-build-tools">The Java Posse</a> podcast on the subject to be very educational (as a hint, they're not very Maven friendly).
<p>
My IDE of choice is Eclipse (Helios), with the relevant Google plugins. One other reason that I dislike Maven is that the <a href="http://eclipse.org/m2e/">M2</a> integration sucks (I've heard it's a lot better with Indigo, however I've yet to try). Ant was not immune from problems either, in that the runtime I am using is 1.8.1 and the Ant build editor doesn't recognise syntax from that version, <a href="http://stackoverflow.com/questions/3519141/eclipse-helios-ant-editor-giving-errors-warnings-with-ant-1-8-1">so Eclipse tells me my build.xml has errors</a> in it. Runs fine however.
<p>
For my SCM I'm using <a href="http://mercurial.selenic.com/">Mercurial</a> as it is the best VCS around, beating Git by a gazillion miles (yes, I have actually used Git in a sophisticated manner and Hg is so much easier and more intuitive).
<p>
The app itself is broken down into 3 parts (with Spring handling all the configuration). The first handles datastore/RPC services, with a GWT frontend, Spring MVC providing the CRUD RPC endpoints, and <a href="https://code.google.com/p/objectify-appengine/">Objectify</a> handling persistence. The second part implements the manual workflow of OAuth2, with Spring MVC rendering the views (very simple JSPs) and accepting callbacks. The third is the part that actually makes posts to the social media sites and keeps everything in sync.
<p>
I spent a lot of time trying to bash JPA into working on GAE, however the DataNucleus implementation is quirky at best. It does certain tasks (like the assigning of PK values) differently from all other JPA implementations, and the JDO junk that gets stuck in your .class files can mess with other annotations or class behaviour. I spent one Saturday ripping out JPA and replacing it with Objectify (guided by tests, for you TDD purists out there), and I've had little trouble ever since.
<p>
A few big design principles that I've been trying to hammer into myself are good old <a href="http://en.wikipedia.org/wiki/Don%27t_repeat_yourself">DRY</a> and others from the <a href="https://secure.wikimedia.org/wikipedia/en/wiki/SOLID">SOLID</a> acronym like <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Single_responsibility_principle">SRP</a>, guided by tests. The thing about working with a framework like Spring is that if you don't follow these ideas, you'll get yourself into trouble quickly enough to realise something's amiss.
For example, even when using Objectify one still has to deal with transaction management. It's the same bit of code that runs for all DAOs, so where do you put it? One should of course favour <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Composition_over_inheritance">composition over inheritance</a>. I went with an AOP (JDK proxy) approach where a transaction manager provides advice around the DAO methods, injecting an "entity manager" which the DAOs then use for DS operations. Very elegant, but not as easy if one doesn't code to interfaces (the L of SOLID as I understand it).<br /><br />However GAE itself doesn't help with the testing side of these principles, and it's an area where Google could really improve their tooling, especially if they want more than Mickey Mouse projects running on GAE. It's nigh impossible to preload the local DS file with test data for integration/acceptance tests (where the tests run in one JVM and the dev_server serving your app runs in another). <a href="http://thoughtdeprivedmusings.blogspot.com/2011/02/how-to-prepopulate-your-gae-devserver.html">I hacked a solution together but it broke when the SDK was updated</a>. The only way therefore is to use an interface which you build into your app. Since most of my entities were being served in an XML representation over HTTP to be consumed by a GWT front end, it wasn't too much of a pain to drive my tests through the browser using Selenium. However for a more sophisticated project it's a nail in the coffin for testability.<br /><br />Learning Spring MVC was a joy, with the ability to render a view of the model (through JSP) or return data in an HTTP response body (à la REST/RPC) with minimal effort. As far as MVC frameworks go it's the best I've worked with to date. However there is some room for improvement with the way that Content-Types (MIME) are handled in an AJAX world, but <a href="http://thoughtdeprivedmusings.blogspot.com/2011/03/ff-obeys-rules-and-spring-3-chokes.html">I've detailed that problem before</a>.<br /><br />Running anything on GAE other than native servlets is of course annoying, as some readers may already know, because of the way GAE attempts to serialize everything. I had to rework my architecture a bit to try and get around that, but I still get errors in the logs due to Spring classes not being Serializable.<br /><br />I've probably left a few details out here and there, and questions/comments are welcome (unless you want to troll, in which case bugger off). Constructive criticisms of technology choices are always interesting and I like to chin-wag about that; although please don't start a flame war.<br /><br />Overall I found this project to be satisfying and educational and I've come out the other side a better engineer/developer.Kieranshttp://www.blogger.com/profile/09490329565445682251noreply@blogger.com0tag:blogger.com,1999:blog-5048559083735694709.post-4116853211892764872011-05-16T10:57:00.003+10:002011-05-16T11:50:52.335+10:00Filling a table in Jasper ReportsYou wouldn't think it, but figuring out how to fill a table with data in Jasper Reports (JR) was actually more difficult than it sounds. Poor documentation, bad/incorrect/plain stupid examples, and forum posts that have been left for years with no answer!
Due to library constraints on this project, this example uses Jasper Reports/iReport 3.4.7, and YMMV with other versions.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFLeVuM4wAOWZ-7k2bo60RjUzVBr5wHIXhOrgBLZWkMYhw9KJAT5hNKgmHaQNH-jpBf3WT-HstWW_q60uSq2mq_6D8zs6OYYPhB6cQnmstnuvGvFUlnFGkD9uZx067WfirCAuSQWD2Ow/s1600/jasper-reports-table-example.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 207px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFLeVuM4wAOWZ-7k2bo60RjUzVBr5wHIXhOrgBLZWkMYhw9KJAT5hNKgmHaQNH-jpBf3WT-HstWW_q60uSq2mq_6D8zs6OYYPhB6cQnmstnuvGvFUlnFGkD9uZx067WfirCAuSQWD2Ow/s400/jasper-reports-table-example.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5607116118237695234"><center>Pictorial example of a table in a JR report</center></img></a><br /><br />Say that you're producing a report with a table of Customers that is embedded within a report with other data (see image above). JR treats the table as a subreport (but with different XML tags), which means that the data you fill the parent/master report with <b>isn't</b> instantly available to the table. This caveat isn't intuitive to understand until you realise that the table is a subreport. To make matters worse, iReport assumes that you're getting your data from a straight JDBC connection, or it populates your table's <i><dataSourceExpression></i> with a <b>JREmptyDataSource</b>, which will populate your table fields with null.<br /><br />^How helpful^.<br /><br />If you're in any sort of Enterprise system you'll no doubt have DAOs, and different models (domain, DTOs, etc) to feed into your reporting code, so you'll need to strip out the empty data source iReport sticks in your template.<br /><br />Fortunately there is a <b>JRBeanCollectionDataSource</b> class that maps field names in the report template to properties in the data using the Java Beans naming convention. The last step is to actually make your data available to the table. This is a combination of fixing the template and writing a reporting DTO class.<br /><br />Firstly, a field in the report will need to be a Java Collection type. I didn't have much success with non-JSE collections, and it's better to code to interfaces anyway.<pre><field name="customers" class="java.util.List"/></pre>The DTO will need to provide an instance of that collection type, with a getter that matches the name of the field in the template. Using the example in the image:<pre>
public class CustomerList {
    public List<Customer> getCustomers() { ... }
}
</pre>Then for each field placeholder in the table, if it maps to a property getter on the Customer object then that property value will be substituted into the report.<br /><br />The final configuration is to tell the table to source its data from the collection.<pre>
<dataSourceExpression><![CDATA[
    new net.sf.jasperreports.engine.data.JRBeanCollectionDataSource($F{customers})
]]></dataSourceExpression>
</pre>Then when the report is filled with an instance of <b>CustomerList</b> that you've prepared earlier, the report engine will iterate over the Collection and fill each row of the table.<br /><br />Once you've done some digging/filtering and realised how JR does its tables, it's actually pretty easy/plain obvious; a sketch of the final fill call is below.
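<br /><br />To round things out, here's a minimal sketch of that fill call, assuming a compiled template sitting on disk. The <b>ReportFiller</b> class and the <i>report.jasper</i> path are invented for illustration; they're not from the JR samples.<pre>
// Hedged sketch: fill the master report with a CustomerList prepared earlier.
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import net.sf.jasperreports.engine.JRException;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.data.JRBeanCollectionDataSource;

public class ReportFiller {
    public JasperPrint fill(CustomerList customerList) throws JRException {
        Map<String, Object> params = new HashMap<String, Object>();
        // One record in the master data source; the engine resolves the
        // "customers" field on that record via getCustomers().
        JRBeanCollectionDataSource master = new JRBeanCollectionDataSource(
                Collections.singletonList(customerList));
        // "report.jasper" is a hypothetical compiled template path.
        return JasperFillManager.fillReport("report.jasper", params, master);
    }
}
</pre>The singleton wrapping is the non-obvious bit: the outer data source supplies the one record that exposes the <i>customers</i> field, while the table's own <b>JRBeanCollectionDataSource</b> iterates over the list itself.<br /><br />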
However, because of the aforementioned reasons (incorrect examples sending one down the garden path) it can be time-consuming and frustrating. Given that a table is a core requirement of most reports a business wants, it would make sense to me for putting a table in a report to be dead simple.Kieranshttp://www.blogger.com/profile/09490329565445682251noreply@blogger.com8tag:blogger.com,1999:blog-5048559083735694709.post-67942298204806551422011-04-15T15:16:00.003+10:002011-04-15T15:49:14.646+10:00Intuitive frameworks is how it should beOn my current project, the server has to be able to receive image data embedded in a JSON object; the JSON string representing the data is therefore a <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Base64">base64 encoding</a> of the binary image. The entity model (that is persisted to the DB through JPA/Hibernate) has the image data field of type <b>byte[]</b>.<br /><br />Turns out that <a href="http://www.jboss.org/resteasy">JBoss' RESTEasy</a> is smart enough to use <a href="http://jackson.codehaus.org/">Jackson's</a> ability to decode base64 text based on the type of the destination field.<br /><br />That intuitive step by the framework is what gives me a good feeling inside: I don't have to deal with type conversions, just with what I want to do with the data.<br /><br />Next task off the board please ....Kieranshttp://www.blogger.com/profile/09490329565445682251noreply@blogger.com0tag:blogger.com,1999:blog-5048559083735694709.post-57616635720989947902011-03-28T09:55:00.002+11:002011-03-28T10:45:21.751+11:00FF obeys the rules and Spring 3 chokesIt's really quite sad/annoying when a toolkit as sophisticated as Spring chokes on simple matters.<br /><br />I'm writing a GWT client to make RPC (XML/HTTP) calls to the server, which is implemented in Spring MVC. My Controller has a @RequestBody on the input type, which is a class that is annotated with JAXB annotations to serialise/deserialise from XML. Therefore my Controller is at the mercy of the <i>HttpMessageConverter</i> that Spring uses to rip the data out of the HTTP request, turn it into a POJO and hand it over.<br /><br />In the Spring config (I use the XML config), the <i><mvc:annotation-driven/></i> by default registers an instance of <i>AnnotationMethodHandlerAdapter</i> which (among other duties) is <a href="http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/mvc.html#mvc-config">responsible for determining which message converter to use</a>. What is nice is that a JAXB converter is registered automatically if the JAXB libs are on the classpath. The converter that is chosen is the one that can process the MIME type or Content-Type that is in the HTTP request header. In this case it would be <b>application/xml</b>, which is processed by the default <i>MarshallingHttpMessageConverter</i>, which in turn delegates the real work to JAXB.<br /><br />However Firefox (FF) obeys the <a href="http://www.w3.org/TR/XMLHttpRequest/">rules</a> regarding XHR requests, in that it appends a charset to the Content-Type header. So <b>application/xml</b> becomes <b>application/xml; charset=UTF-8</b>.
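<br /><br />In raw header terms the difference is tiny; here's an illustrative pair of request headers (not captured from a live browser session):<pre>
Content-Type: application/xml                   (what the default converter registration matches)
Content-Type: application/xml; charset=UTF-8    (what Firefox actually sends in an XHR)
</pre>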
Because of this the entire server-side flow unravels: the converter lookup is not smart enough to parse the charset out of the Content-Type header, and throws an exception saying the Content-Type is not recognised.<br /><br />The solution, after much reading up on the internals of the Spring MVC framework, is to create my own bean tree where the supported media type contains the charset. The message converters support changing the supported MIME types, which are modelled by the class <i>MediaType</i>. <i>MediaType</i> supports a charset in its constructor. Therefore I end up with the following config.<pre>
<!-- Override the default AnnotationMethodHandlerAdapter that
     mvc:annotation-driven provides -->
<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter">
    <property name="messageConverters">
        <list>
            <ref bean="stringHttpMessageConverter" />
            <ref bean="marshallingHttpMessageConverter"/>
        </list>
    </property>
</bean>

<bean id="marshallingHttpMessageConverter"
      class="org.springframework.http.converter.xml.MarshallingHttpMessageConverter">
    <property name="marshaller" ref="jaxbMarshaller" />
    <property name="unmarshaller" ref="jaxbMarshaller" />
    <property name="supportedMediaTypes">
        <list>
            <!-- This handles browsers like FF who add
                 the charset to the XHR request. -->
            <bean class="org.springframework.http.MediaType">
                <constructor-arg index="0" value="application"/>
                <constructor-arg index="1" value="xml"/>
                <constructor-arg index="2" value="UTF-8"/>
            </bean>
        </list>
    </property>
</bean>

<bean id="stringHttpMessageConverter"
      class="org.springframework.http.converter.StringHttpMessageConverter"/>

<a href="http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/oxm.html"><oxm:jaxb2-marshaller id="jaxbMarshaller" contextPath="org.altaregomelb.sync.domain"/></a>
</pre><br />Given that the framework for converting data from HTTP requests already contains the logic to parse the Content-Type out of the header, it's a shame that the default beans created by <i><mvc:annotation-driven/></i> don't handle the charset properly instead of throwing an exception, especially as including a charset in XHR requests is part of the standard. Not all XML/HTTP requests will be XHR requests, it's true, but given the prevalence of AJAX apps out there, the framework should account for it.<br /><br /><b>References</b><br /><a href="http://static.springsource.org/spring/docs/3.0.x/javadoc-api/">Spring 3 API</a>Kieranshttp://www.blogger.com/profile/09490329565445682251noreply@blogger.com0tag:blogger.com,1999:blog-5048559083735694709.post-73812058163634972832011-02-24T16:30:00.002+11:002011-02-24T16:33:25.835+11:00Technological justiceIn The Age today, it's <a href="http://www.theage.com.au/technology/technology-news/iinet-again-slays-hollywood-in-landmark-piracy-case-20110224-1b6a1.html">reported</a> that the Federal Court backed iiNet.
Finally, some sane and reasonable justice from the courts in technological matters.<br /><br />Well done iiNet.Kieranshttp://www.blogger.com/profile/09490329565445682251noreply@blogger.com0tag:blogger.com,1999:blog-5048559083735694709.post-16949242450944428072011-02-02T22:34:00.003+11:002011-02-02T22:48:16.749+11:00How to prepopulate your GAE dev_server for testingThis post assumes that you know how the Google App Engine <a href="http://code.google.com/appengine/docs/java/datastore/">Datastore</a> basically works, and how to perform <a href="http://code.google.com/appengine/docs/java/tools/localunittesting.html">local unit testing</a> of your code on the dev_server provided in the GAE SDK.<br /><br />There is a massive gaping hole in the GAE SDK, both in terms of functionality and documentation (of course if there is some documentation on the matter please post it in the comments), regarding the population of your local datastore (which is persisted to a file) for testing. Why does this matter? For integration or acceptance testing. Not all testing, my Google Overlords, wants to be done in a memory-only datastore, or within the one process/JVM/however Eclipse runs my dev_server and my JUnit tests.<br /><br />Even if you set up your test code to point to the same file that your dev_server is reading from, your application won't see your entities. To say it's a frustrating problem is an understatement. It turns out there is a combination of fields that have to be set in your test code for the underlying datastore code to populate the file in such a way as to get this to work.<br /><br />Rather than repeat the solution, it can be found <a href="http://stackoverflow.com/questions/4767918/acceptance-testing-preloading-of-data-into-gae-dev-server-datastore">on the ever-helpful Stack Overflow</a>.<br /><br />This solution was found through a lot of pain, trial and error. I hope someone gets some use out of this post, and that it prevents them agonising over the amount of blood lost when their head hits the desk screaming "why Google why!!!!"Kieranshttp://www.blogger.com/profile/09490329565445682251noreply@blogger.com0tag:blogger.com,1999:blog-5048559083735694709.post-22331226786541299752011-01-21T09:46:00.003+11:002011-01-21T09:55:13.362+11:00How to beat the competition by being more agile<a href="http://blogs.forbes.com/adamhartung/2011/01/14/why-facebook-beat-myspace/">Over at Forbes</a> there's a discussion about how Facebook beat MySpace. The simple lesson I took from it was that Facebook is more agile, but also that they prioritise customer input. Enough users want something, Facebook gives it to them (and, one would assume, as fast as possible). That's responding to change, obviously a key Agile principle.<br /><br />Makes me think about how doing business might continue to change over the years. The suits and the bean counters may want to do one thing (lots of Powerpoint presos with forecasts), and the techs on the ground might only be thinking three months ahead - getting the latest feature out the door. There's something to be said for long-term vision, but do short-term sprints take precedence? After all, the company has to make money to keep the suits in a job.<br /><br />One thing Facebook does need to do is focus on quality. Too much of it breaks (and often non-deterministically). The fact that I haven't been able to invite friends to an event for three days now means something's going dreadfully wrong.
Since you listen to customers, Facebook, can you please fix your defective parts as well as push out the latest and greatest features?Kieranshttp://www.blogger.com/profile/09490329565445682251noreply@blogger.com0tag:blogger.com,1999:blog-5048559083735694709.post-89307233945362822572010-12-22T23:20:00.004+11:002010-12-23T09:19:16.870+11:00Holidaying with Android FroyoOver the Christmas break, my wife wants to do some traveling. Which is all good and well, except that I have some coding that I want to do. I'm sure that a lot of people experience this problem.<br /><br />My biggest annoyance was the lack of internet on the road to look up API docs/reference material for some tech that I want to dabble with. So I considered getting a 3G wireless modem.<br /><br />Last week, my HTC Magic got upgraded to Froyo (2.2.1 actually), which of course comes with the ability to <a href="http://developer.android.com/sdk/android-2.2-highlights.html">tether via USB</a> to my computer. A quick google found <a href="https://forums.gentoo.org/viewtopic-t-843255-start-0.html">instructions</a> on how to enable the tethering on Gentoo Linux. A quick kernel config and module compile, followed by some bash scripting to modprobe <b>cdc_ether</b>, <b>rndis_host</b> and <b>usbnet</b> - and I had a <b>usb0</b> interface sitting next to my <b>eth0</b> interface. DHCP takes care of getting an IP address from the phone and I'm connected. If you're a Gentoo user, you should symlink <i>/etc/init.d/net.lo</i> to <i>/etc/init.d/net.usb0</i> (like <i>net.eth0</i>) for a convenient startup/shutdown script. One of the best bits is that the phone charges off the USB as well, so I'm not draining the battery.<br /><br />I even used my tethered connection to load Vodafone's coverage map for where we're going, to show to my wife. Though <a href="http://www.vodafail.com/">knowing Vodafone</a>, the best laid USB tethering plans of mice and men are oft to go awry.Kieranshttp://www.blogger.com/profile/09490329565445682251noreply@blogger.com0tag:blogger.com,1999:blog-5048559083735694709.post-55422543982295585322010-11-22T10:10:00.003+11:002010-11-22T10:11:52.834+11:00Moving OnToday I've started at another company, <a href="http://intunity.com.au/">Intunity</a>. Despite having to brush up on my Mac skills :p, this should be fun.Kieranshttp://www.blogger.com/profile/09490329565445682251noreply@blogger.com0tag:blogger.com,1999:blog-5048559083735694709.post-999330540961595842010-10-20T14:07:00.003+11:002010-10-20T14:20:20.904+11:00Separating out container concerns for unit testingI got a request - come help me with these failing unit tests. The cause? The dreaded Null Pointer Exception. A member variable wasn't initialised in the constructor, but in an <i>init()</i> method annotated with <i>javax.annotation.PostConstruct</i>. The initial impulse was to call <i>init</i> in the tests, but this would have led to other problems, as other member variables would try to acquire container resources via JNDI (and a ServiceLocator pattern), and since this is a unit test we are mocking those dependencies. The better idea is to factor the initialisation of the attributes that are needed all the time into the constructor (which solved the NPE problem), and the attributes that hold container resources into the @PostConstruct method, since the container will honour the annotation, whereas in the unit test we can set up the dependencies with mocks.
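<br /><br />To make that concrete, here's a minimal sketch of the resulting shape. The <b>AccountService</b>, <b>MailSession</b> and <b>ServiceLocator</b> names are invented for illustration; they're not the actual classes from that codebase.<pre>
import javax.annotation.PostConstruct;

// Hypothetical collaborators, stubbed so the sketch compiles.
class AuditLog {}
interface MailSession {}
class ServiceLocator {
    static <T> T lookup(Class<T> type) { return null; /* JNDI lookup elided */ }
}

public class AccountService {
    private final AuditLog audit; // always needed: the constructor's job
    private MailSession mail;     // container resource: @PostConstruct's job

    public AccountService() {
        this.audit = new AuditLog(); // fixes the NPE seen in the unit tests
    }

    @PostConstruct
    void init() {
        // Only the container triggers this; unit tests never call it,
        // and inject a mock via the setter below instead.
        this.mail = ServiceLocator.lookup(MailSession.class);
    }

    void setMail(MailSession mail) { // lets the test play "container"
        this.mail = mail;
    }
}
</pre>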
A nice little trick, but one that's very powerful.<br /><br />A better approach is of course to use a DI framework, but that still doesn't get around a bad design. Separating container concerns into a @PostConstruct method (and perhaps using constructor injection) makes the class more testable, with the tests taking responsibility for being the container.Kieranshttp://www.blogger.com/profile/09490329565445682251noreply@blogger.com0