Friday, November 15, 2013

Using DeltaWalker with Mercurial for diffs

I bought a bundle of apps in a deal, and DeltaWalker was part of it. It's a reasonably decent diff tool, with the ability to do folder merging as well (rsync anybody ;) ). In terms of interface it's not bad: nice layout, colours, etc. I haven't used it much because most of my diff work is handled by my JetBrains tools as I'm developing.

For my current project I'm doing a reasonably complicated merge between two feature branches to bring them into alignment. I'm not a fan of Feature Branches as they impede Continuous Integration (FeatureToggles are a better approach), and the fact that my merge is complicated is proof of why they should be avoided. However, due to particular circumstances a Feature Branch was unavoidable.

With part of the codebase there's not a 1-1 mapping of files. For example, a class that was an inner class in Branch A is now in its own Java file in Branch B. So it turned out to be easier to see what changed in a file in Branch A and port the concepts/semantics of the changes across to Branch B, instead of just relying on a syntactical/textual merge. This worked really well because even though the implementation had diverged quite significantly, the interfaces were relatively unchanged, so my Branch A test changes merged to Branch B very easily, which helped tell me if I stuffed up the merge of a file. Another win for TDD.

Mercurial's diff commands were powerful enough to show me the changes for a file between the last merge point from Branch A into Branch B and the head of Branch A. However, since I like my graphical diff tools, I wanted to try out DeltaWalker to see how good it was.

DELTAWALKER DOCUMENTATION SUCKS - It doesn't actually tell you HOW to integrate with your SCM tools. This is the WORST form of marketing and almost made me toss the tool. Fortunately I was able to figure it out.

When you select Hg in DeltaWalker and point it at your hgrc file, it updates the file for you. Would be nice if it explained that it does that! I found out about it because I version control my Hg config (with Mercurial of course), and running a diff on the file for another update I made showed me the changes it had put in:

[extdiff]
cmd.dw = /Applications/DeltaWalker.app/Contents/MacOS/hg

[merge-tools]
dw.executable = /Applications/DeltaWalker.app/Contents/MacOS/hg
dw.args = $local $other $base -merged=$output

So one can then use Hg's extdiff functionality to load up the diffs:

$ hg dw -r $lastmerge:$branch_head file

It actually works well, but the guys behind the product need to update their docs!!
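
One gotcha worth checking (DeltaWalker didn't mention it, and I'm going from memory of my own hgrc here): extdiff is a bundled Mercurial extension that isn't enabled by default, so if hg complains that dw is an unknown command, make sure the extension is switched on:

[extensions]
extdiff =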

Finding the last merge point between branches in Mercurial

When merging between my development and release branches, I like to know what the last merge point was between the two so that I can do tasks like compile Release Notes or see if some code needs to be merged (based on tags).

I developed a little Hg alias that finds the last merge node between two branches.

lastmerge = !hg log -r "children(ancestor($1, $2)) and merge() and ::$2"

$ hg lastmerge BRANCH_1 BRANCH_2

In the example above, BRANCH_1 and BRANCH_2 are bookmarks. The alias makes use of Hg Revsets to find the latest merge node from BRANCH_1 into BRANCH_2.
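
For reference, the alias lives in the [alias] section of an hgrc; the comments are my reading of what each revset piece is doing:

[alias]
# ancestor($1, $2) - the greatest common ancestor of the two branch heads
# children(...)    - the changesets immediately following that ancestor
# and merge()      - keep only merge changesets
# and ::$2         - keep only ancestors of $2, i.e. merges that went into BRANCH_2
lastmerge = !hg log -r "children(ancestor($1, $2)) and merge() and ::$2"

Once you have the changeset id it prints, something like hg log -r "THE_NODE::BRANCH_2" (with THE_NODE being that id) lists everything that has gone into the branch since that merge, which is handy when compiling Release Notes.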

Wednesday, September 4, 2013

Getting Symfony2 form names working with AngularJS expressions

The default form field names generated by Symfony2's FormBuilder appear not to work with AngularJS' form validation and directives like ng-show, unless one is proficient enough in Javascript to deduce the other syntactical options. The reason is that all the documentation examples for AngularJS expressions use Javascript's dot object notation, which can obscure what's really going on. A skilled JS reader might guess the answer to this issue before the end of this post; judging by the amount of trouble people have had with it on various forums, though, the answer might not be so obvious. It had me stumped for a while.

So the conventional way to generate an HTML form via Symfony2 is:

In a Symfony Controller, generate the form:

    class MyController extends Controller {
      /**
       * @Route("/")
       * @Method("GET")
       * @Template
       */
      public function indexAction() {
        $myEntity = new MyEntity();
    
        $form = $this->createForm(new MyEntityType(), $myEntity);
    
        return array("form" => $form->createView());
      }
    }
Using the following entity (Doctrine) class:
    class MyEntity {
      ...
    
      /**
       * @var string
       *
       * @Assert\NotBlank(message="Please provide a contact name")
       * @ORM\Column(name="contactName", type="string", length=50)
       */
      private $contactName;
    }
Using the following type for form building:
    class MyEntityType extends AbstractType {
      ...
    
      public function getName() {
        return 'myEntityType';
      }
    }
Using the form.name variable in a template (ie: <form name="{{ form.name }}">), we get something like this (yes, I'm using Twitter Bootstrap in my project):
    <div class="controls controls-row">
        <input id="myEntityType_contactName" class="ng-pristine ng-invalid ng-invalid-required" type="text"
            ng-model="myEntity.contactName" maxlength="50" required="required" name="myEntityType[contactName]">
      <span ng-show="myEntityType.myEntityType[contactName].$error.required" style="display: none;">
    </div>
I wrote some Twig code to add the ng directives to the generated HTML. Those blocks use the form values generated by the builder. For example, Symfony renders the name attribute on the <input> tag and my extension code uses the same name string as part of the ng-show generation.

If one enters/deletes text the <span> is not shown/hidden.

Doing some research, I found the StackOverflow posts "Symfony2 Form Component - creating fields without the forms name in the name attribute" and "Symfony2.1 using form with method GET". If we follow their advice and change MyEntityType to have getName() return an empty string, the resulting HTML is:

    <div class="controls controls-row">
        <input id="contactName" class="ng-pristine ng-invalid ng-invalid-required" type="text" ng-model="myEntity.contactName"
            maxlength="50" required="required" name="contactName">
      <span ng-show="formName.contactName.$error.required" style="">
    </div>
Of course, I had to manually give the form a name in the template, ie: <form name="formName">.

As pointed out in the first StackOverflow post above; it's the formatting of the name in Symfony/Component/Form/Extension/Core/Type/FieldType.php (or Symfony/Component/Form/Extension/Core/Type/BaseType.php if you're on Symfony 2.3) that's causing the problem.

I did not like those suggestions as a solution, but they did help me narrow down what was going on. I did not like the idea of clobbering how Symfony generates forms, because we would be adapting our server code (by altering the getName() method) to deal with a client-side technology issue. To me that seems wrong.

The reason the field name was causing problems is that AngularJS evaluates the string contents of the ng-show attribute as an expression, and in ng expressions square brackets are used for array literals and property access. Consequently the square brackets that Symfony uses in the field name cause expression evaluation issues in AngularJS.
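
To spell that out with the names from my form (this is my reading of how the Angular expression parser treats it, not something straight from the Angular docs):

    <!-- Angular reads this as (myEntityType.myEntityType)[contactName]: "contactName" is looked up
         as a scope variable, which is undefined, so the whole expression evaluates to nothing and
         the ng-show never fires. -->
    <span ng-show="myEntityType.myEntityType[contactName].$error.required">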

The solution is to use Javascript's property-style accessors for objects (square brackets) instead of the dot object notation. That way the square brackets live inside a string key and there is no evaluation issue.

    <div class="controls controls-row">
        <input id="myEntityType_contactName" class="ng-pristine ng-invalid ng-invalid-required" type="text"
            ng-model="myEntity.contactName" maxlength="50" required="required" name="myEntityType[contactName]">
      <span ng-show="myEntityType['myEntityType[contactName]'].$error.required" style="display: none;">
    </div>
It all works. Just had to update my Twig code to generate the correct JS notation.
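
For anyone wanting to do something similar, the shape of the Twig ends up roughly like this (a simplified sketch rather than my exact form theme; form.vars.full_name is the Symfony form view variable holding the bracketed field name, and "myEntityType" is the name given to the <form> tag):

    {% block form_row %}
      <div class="controls controls-row">
        {{ form_widget(form, {'attr': {'ng-model': 'myEntity.' ~ form.vars.name}}) }}
        {# Square-bracket property access keeps the generated field name as a plain string key #}
        <span ng-show="myEntityType['{{ form.vars.full_name }}'].$error.required">
          This value is required
        </span>
      </div>
    {% endblock %}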

Alas, I can't take credit for the correct solution, as someone else reported it as an issue against AngularJS. I guess we think about and write our code in one particular style for so long that we forget about the alternatives. However, this post pulls together a variety of threads on the topic, so hopefully the next person who hits the problem only has to read this blog post.

Sunday, February 3, 2013

Remapping RaspBMC Remote Keybindings

I've updated my HTPC to be a Raspberry Pi that uses RaspBMC as the OS. The really hard part was integrating my existing RAID with the device (over USB) to serve up my media; that's a subject for another post.

Something that didn't quite work out of the RaspBMC box (and to be fair, A LOT did work) was the Context Menu for XBMC. I like to call it the "right click" option, as that's what it would be if I was using a mouse. This was because my remote didn't have the default button that XBMC was expecting. As mentioned in a previous post, I have a generic RC6 IR receiver for which I had used the mceusb2 kernel driver. Looking at the remote config file for XBMC (/opt/xbmc-bcm/xbmc-bin/share/xbmc/system/keymaps/remote.xml), the ContextMenu is mapped to the <title> key. Looking in the LIRC config file for XBMC (/opt/xbmc-bcm/xbmc-bin/share/xbmc/system/Lircmap.xml), for a mceusb remote the "title" is mapped to the "Guide" key. Looking at my trusty remote diagram there was a Guide button, but why didn't it bring up the context menu?

Turns out that RaspBMC was using a different remote config in Lircmap.xml. Using irw to see the remote codes I get:

16d 0 KEY_EPG devinput
16d 0 KEY_EPG_UP devinput
Searching Lircmap.xml for devinput I found a different <remote> profile, with no mapping for <title>. So I added:

<title>KEY_EPG</title>

and the Context Menu is displayed.
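
For anyone else doing this, the edit sits inside the devinput <remote> block of Lircmap.xml and ends up looking roughly like this (trimmed down to the relevant bit):

<lircmap>
  <remote device="devinput">
    ...
    <title>KEY_EPG</title>
  </remote>
</lircmap>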

irw is a handy little util for figuring out remote config.

Wednesday, December 5, 2012

Reflections on writing an Eclipse plugin

For a personal project I'm working on, the language choice is C++. Now that's probably going to make a lot of readers cringe, because C++ has a bad rep, especially with the black magic voodoo that goes by the name of Templates. However, due to the requirements of the app, C++ was a good fit. It's not too bad if you apply SOLID principles, design patterns, and above all good naming conventions. IMO a major contributor to C++'s bad reputation is the extremely poor names people use (the STL doesn't help either). We can see good design concepts being ported back into C++, most notably in the tools. Libraries like Qt make it almost like having a JDK at one's fingertips (but with a decent UI system). Did you know you can do TDD in C++?

I use the Eclipse CDT for my IDE, which, when coupled with the Autotools plugin, makes it really easy to edit and build my code with most of the features I get out of the Java tools in Eclipse. You don't get everything, but it does a good job.

One thing I was really missing was a way to generate test classes quickly. I use the Google C++ testing framework as it's the best I've found that matches JUnit or TestNG. It's a bit clunky compared to the state of Java test tooling (its style is very much on a par with JUnit 3), but given that C++ doesn't have annotations, it does really well with some macro magic. One can still use BDD-style testing, with a few compromises to get the code to compile. What I was missing was a "Create test class" option to keep my productivity from getting bogged down in making sure my #includes were correct.

I've been wanting to write an Eclipse plugin for a while, so I thought I'd use this itch as the chance. It turned out to be a lot easier than I had thought.

I downloaded the RCP version of Eclipse as it comes with all the development tools for making plugins. I personally like to keep my Eclipse installs separate for different purposes. About the only common plugin I use across all of them is my Mercurial plugin. I installed the CDT on top of the RCP so that I had access to all the CDT JARs since I was developing a CDT focused plugin.

Starting on writing the plugin was really simple. Of course I googled around first and came across a few helpful links.

The thing that made the plugin development easy IMHO was the plugin configuration editor. I was dreading writing XML and getting it wrong (try debugging that), or writing properties to configure the build (and finding reference documentation to do what I want). Thankfully I didn't get any obscure errors; the editor did it all, and took the fear out of the project. Coupled with some extra helpful documentation on plugin terminology and what not to do, I got my first wizard up and running very quickly.

Since I was making a CDT-based plugin, I needed to set my API baseline. This helps the compiler figure out if you've been doing naughty things. It is a real strength: by letting you know when you've violated API boundaries, you're able to future-proof your plugin against changes to internals, or at least declare (via your plugin config) that you're willing to risk the consequences.

Since I wanted to create a special instance of the "New Class" wizard option, I subclassed the relevant wizard and wizard page classes. The one big frustration was that protected methods in these classes used private members or private inner classes, which meant that to override the behaviour (while still keeping the parent behaviour) some nasty reflection hacks were needed. I think the design moral there is that if you are going to allow your methods to be overridden, make the data they use accessible (most likely through protected accessors). Either that, or mark the methods as final so that the compiler lets you know that "you can't do that Dave". Unfortunately these CDT classes violate the Open/Closed principle. Other than that, it was actually pretty easy to debug my way through the "New Class" wizard to get an idea of how I could create a specialised class geared towards being a test class, and to write the behaviour.

The bulk of the code was written on the train to and from the YOW 2012 conference, so hopefully that conveys just how easy it was once I got going. It's a credit to the PDE guys that banging out a plugin is that easy in terms of the infrastructure code required to hook it in; I mainly had to work on application logic, which is how it should be.

The final result is the Google Testing Framework Generator, so if you can make use of it, please download it and make suggestions/patches. I have a few other ideas for code this plugin could generate, but for now I'm just going to TDD my way faster through some new features for my C++ project.

Monday, December 3, 2012

Spring 3 MVC naming gotcha

As the title suggests, I got bitten by a naming issue in Spring 3 (3.0.5) MVC. I wasted a fair amount of time on it, and I don't want to do that again. The relevant material pertaining to this issue may well be in the Spring reference documentation; however, hopefully this blog post will be more concise for anybody else who has hit this problem.

In my MVC app I have a ProductsSearcher class that has two dependencies that are injected via the constructor. In my Spring config XML I have the following snippets:

<context:component-scan base-package="org.myproject" />

<bean class="org.myproject.ProductsSearcher"> ... </bean>

The component-scan tag is needed to get Spring to process the @Controller annotation so the bean gets registered as an MVC controller. However, when bootstrapping the application context, the deployment failed:

ERROR [DispatcherServlet] Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name
'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0':
Initialization of bean failed; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean with name
'productsSearcher' ... Instantiation of bean failed; nested exception is
org.springframework.beans.BeanInstantiationException: Could not instantiate bean class
[org.myproject.ProductsSearcher]: No default constructor found; nested exception is
java.lang.NoSuchMethodException: org.myproject.ProductsSearcher.<init>()
So Spring was trying to instantiate an instance of my class via reflection and borking because there was no default constructor on the class. Why would it do that when I have a bean definition in the config?

I wasn't willing to change the constructor because good design dictates that if the class can't function without a dependency then the dependency must be injected at construction time; otherwise the object will be in an illegal state (not "fully formed").

I thought it might be something to do with the component scanning, as component scanning also involves processing configuration annotations (a good discussion of the XML tags that deal with annotation processing can be found on Stack Overflow).

The problem lies in the name/id (or lack thereof) of the bean definition. Component scanning does involve Spring registering a bean definition for the stereotype (ie: my @Controller), and that definition gets a default name; given the good old Java Bean naming conventions, the name is productsSearcher. My explicit <bean> element had no id, so it didn't override the scanned definition. By default, whenever I create config for a stereotyped bean I give it this style of name. This time I forgot :( :(. Adding in the id fixed the deployment issue:

<bean id="productsSearcher" class="org.myproject.ProductsSearcher"> ... </bean>
So my bean definition overrides the default, the correct bean definition is used, and the bean is instantiated.
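
Putting the pieces together, the working config looks something like this (the constructor-arg refs are placeholders standing in for whatever the real dependencies are):

<context:component-scan base-package="org.myproject" />

<!-- The id matches the name component scanning would register ("productsSearcher"),
     so this definition overrides the scanned one instead of sitting beside it. -->
<bean id="productsSearcher" class="org.myproject.ProductsSearcher">
    <!-- placeholders for the two injected dependencies -->
    <constructor-arg ref="someRepository" />
    <constructor-arg ref="someOtherService" />
</bean>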

Given that most of us would name our beans using the Java Beans naming convention (since as Spring users it's been beaten into us), this problem had never bitten me before. If anybody has any insight into the workings of the Spring mechanisms behind component scanning, please offer up your wisdom in the comments. For the rest of us, remember to name your stereotyped beans properly.

Friday, October 19, 2012

Don't nail your domain to your infrastructure

The client I'm currently working for is in the process of some major rework for a core application, which touches multiple points in the business. From a modelling perspective this of course means that different Domain projects are evolving, with different Bounded Contexts and supplementary persistence and service projects to access/manipulate Domain resources. While this is a great effort by the people involved to separate concerns, there comes a time when all that domain work has to touch specific infrastructure resources, namely the dreaded database.

The problem arises when more than one application wants to use the common code, but for different target environments: Application1 wants to target DatabaseA while Application2 needs to target DatabaseB. I'm all for having a default configuration of beans provided in the libraries to ease integration with applications, along with appropriate reference documentation explaining the bean configuration and how to replace it. IMHO this is something that helps make Spring stand tall in the field.

However, when a default configuration gets injected with a resource bean (eg: a DataSource), then for an application to use that configuration with a different resource (ie: a DataSource pointing to a different DB), the entire configuration has to be copied and pasted into the application context and jiggled around so that the correct resource bean is injected at ApplicationContext boot time.

Ugly and painful!

What should happen is that a resource factory bean should be specified in the domain libraries, with applications providing their own implementation.
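
A rough sketch of that idea in plain XML (the class names are made up for illustration): the library's context only refers to the resource bean, and each application provides the actual definition for its target database, whether directly or via its own factory bean.

<!-- library-context.xml, shipped inside the domain/persistence library -->
<bean id="productRepository" class="org.myproject.persistence.JdbcProductRepository">
    <constructor-arg ref="dataSource" />
</bean>

<!-- application-context.xml, owned by each application -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="..." />
    <property name="url" value="..." />
</bean>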

This is of course another reason why wiring annotations in code is ultimately self-defeating, as this sort of architecting becomes more difficult, if not impossible, to do.

Fortunately we're Agile, so it shouldn't be too hard to clean up.