Wednesday, December 5, 2012

Reflections on writing an Eclipse plugin

For a personal project I'm working on, the language choice is C++. Now that's probably going to make a lot of readers cringe because C++ has some bad rep, especially with the black magic voodoo that goes by the name of Templates. However, due to the requirements of the app, C++ was a good fit. It's not too bad if you apply SOLID principles, design patterns, and above all good naming conventions. IMO a major contributor to C++'s bad reputation is the extremely poor names people use (the STL doesn't help either). We can see good design concepts being ported back into the C++ world, most notably in the tools. Libraries like Qt make it almost like having a JDK at one's fingertips (but with a decent UI system). Did you know you can do TDD in C++?

I use the Eclipse CDT as my IDE, which, when coupled with the Autotools plugin, makes it really easy to edit and build my code with most of the features I get out of the Java tools in Eclipse. You don't get everything, but it does a good job.

One thing I was really missing was a way to generate test classes quickly. I use the Google C++ testing framework as it's the best I've found that matches JUnit or TestNG. It's a bit clunky compared to the current state of Java test tooling (its style is very much on par with JUnit 3), but given that C++ doesn't have annotations, it does really well with some macro magic. One can still use BDD-style testing with a few compromises to get your code to compile. What I was missing was a "Create test class" option to keep my productivity from getting bogged down in making sure my #includes were correct.

I've been wanting to write an Eclipse plugin for a while, so I took this itch as my chance. It turned out to be a lot easier than I had thought.

I downloaded the RCP version of Eclipse as it comes with all the development tools for making plugins. I personally like to keep my Eclipse installs separate for different purposes. About the only common plugin I use across all of them is my Mercurial plugin. I installed the CDT on top of the RCP so that I had access to all the CDT JARs since I was developing a CDT focused plugin.

Getting started on the plugin was really simple. Of course I googled around first and came across a few helpful links to get my bearings.

The thing that made the plugin development easy, IMHO, was the plugin configuration editor. I was dreading writing XML and getting it wrong (try debugging that), or writing properties to configure the build (and finding reference documentation to do what I want). Thankfully I didn't get any obscure errors; the editor did it all and took the fear out of the project. Coupled with some extra helpful documentation on plugin terminology and what not to do, I got my first wizard up and running very quickly.

Since I was making a CDT-based plugin, I needed to set my API baseline. This helps the compiler figure out if you've been doing naughty things. This is a real strength: by letting you know when you've violated API boundaries, it lets you future-proof your plugin against changes to internals, or at least declare (via your plugin configuration) that you're willing to risk the consequences.

Since I wanted to create a special instance of the "New Class" wizard, I subclassed the relevant wizard and wizard page classes. The one big frustration was that protected methods in these classes used private members or private inner classes, which meant that to override the behaviour (while still keeping the parent behaviour) some nasty reflection hacks were needed. I think the design moral is that if you are going to allow your methods to be overridden, make the data accessible (most likely through protected accessors). Either that or mark the methods as final so that the compiler lets you know that "you can't do that Dave". Unfortunately these CDT classes violate the Open/Closed principle. Other than that it was actually pretty easy to debug my way through the "New Class" wizard to get an idea of how I could create a specialised class geared towards being a test class, and to write the behaviour.
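To illustrate the kind of hack I mean, here's a minimal sketch. The wizard page and field names are made up, not the actual CDT classes; the point is that the subclass wants to keep the parent's behaviour but can only reach the parent's private state via reflection.

import java.lang.reflect.Field;

// Hypothetical stand-in for a wizard page that keeps its useful state private.
class BaseClassWizardPage {
    private final StringBuilder fStubBuffer = new StringBuilder();

    protected void createClassStub() {
        fStubBuffer.append("// generated class stub\n");
    }
}

public class TestClassWizardPage extends BaseClassWizardPage {

    @Override
    protected void createClassStub() {
        super.createClassStub();                            // keep the parent behaviour
        stubBuffer().append("#include <gtest/gtest.h>\n");  // then extend it
    }

    // The nasty reflection hack: the parent offers no protected accessor.
    private StringBuilder stubBuffer() {
        try {
            Field field = BaseClassWizardPage.class.getDeclaredField("fStubBuffer");
            field.setAccessible(true);
            return (StringBuilder) field.get(this);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Parent internals have changed", e);
        }
    }
}

A protected accessor on the parent (or a final method) would have made the hack unnecessary, which is exactly the design moral above.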

The bulk of the code was written on the train to and from the YOW 2012 conference, so hopefully that conveys just how easy it was once I got going. It's a credit to the PDE guys that banging out a plugin requires that little infrastructure code to hook it in; I mainly had to work on application logic, which is how it should be.

The final result is the Google Testing Framework Generator, so if you can make use of it, please download it and send through suggestions/patches. I have a few other ideas for code this plugin could generate, but for now I'm just going to TDD my way faster through some new features for my C++ project.

Monday, December 3, 2012

Spring 3 MVC naming gotcha

As the title suggests, I got bitten by a naming issue in Spring 3 (3.0.5) MVC. I wasted a fair amount of time on it, so I don't want to do it again. The relevant material may well be in the Spring reference documentation, but hopefully this blog post will be more concise for anybody else who hits this problem.

In my MVC app I have a ProductsSearcher class that has two dependencies that are injected via the constructor. In my Spring config XML I have the following snippets:

<context:component-scan base-package="org.myproject" />

<bean class="org.myproject.ProductsSearcher"> ... </bean>
The component-scan tag is needed to get Spring to process the @Controller annotation so that the bean gets registered as an MVC controller. However, when bootstrapping the application context, the deployment failed:
ERROR [DispatcherServlet] Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0': Initialization of bean failed;
nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'productsSearcher' ... Instantiation of bean failed;
nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.myproject.ProductsSearcher]: No default constructor found;
nested exception is java.lang.NoSuchMethodException: org.myproject.ProductsSearcher.<init>()
So Spring was trying to instantiate an instance of my class via reflection and borking because there was no default constructor on the class. Why would it do that when I have a bean definition in the config?

I wasn't willing to change the constructor, because good design dictates that if a class can't function without a dependency then the dependency must be injected at construction time; otherwise the object will be in an illegal state (not "fully formed").
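For context, the controller looked roughly like this (the dependency types are placeholders, not the real collaborators), which is why Spring's attempt to call a default constructor blew up:

package org.myproject;

import org.springframework.stereotype.Controller;

// Placeholder collaborator types for the sake of the sketch.
interface ProductRepository { }
interface SearchIndex { }

@Controller
public class ProductsSearcher {

    private final ProductRepository repository;
    private final SearchIndex index;

    // The only constructor: the class cannot exist without its dependencies,
    // which is exactly why there is no default constructor for Spring to call.
    public ProductsSearcher(ProductRepository repository, SearchIndex index) {
        this.repository = repository;
        this.index = index;
    }

    // ... request handling methods ...
}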

I thought it might be something to do with the component scanning, since component scanning also involves processing configuration annotations (a good discussion of the XML tags that deal with annotation processing can be found on Stack Overflow).

The problem lies in the name/id (or lack thereof) of the bean definition. Component scanning does involve Spring registering a bean for the stereotype (ie: my @Controller), and that bean gets a generated name. Given the good old Java Bean naming conventions, the name is productsSearcher. By default, whenever I create config for a stereotyped bean I give it this style of name. This time I forgot :( :(. Adding in the id fixed the deployment issue:

<bean id="productsSearcher" class="org.myproject.ProductsSearcher"> ... </bean>
So my bean definition overrides the one generated by the scan, the correct definition is used, and the bean is instantiated properly.

Given that most of us would name our beans using the Java Beans naming convention (since, as Spring users, it's been beaten into us), this problem had never occurred to me. If anybody has any insight into the workings of the Spring mechanisms behind component scanning, please offer up your wisdom in the comments. For the rest of us, remember to name your stereotyped beans properly.

Friday, October 19, 2012

Don't nail your domain to your infrastructure

The client I'm currently working for is in the process of some major rework of a core application, which touches multiple points in the business. From a modelling perspective, this of course means that different Domain projects are evolving, with different Bounded Contexts and supplementary persistence and service projects to access/manipulate Domain resources. While this is a great effort by the people involved to separate concerns, there comes a time when all that domain work has to touch specific infrastructure resources - namely the dreaded database.

The problem arises when more than one application wants to use the common code but for different target environments: Application1 wants to target DatabaseA while Application2 needs to target DatabaseB. I'm all for having a default configuration of beans provided in the libraries to ease integration with applications, along with appropriate reference documentation to understand the bean configuration and how to replace it. IMHO this is something that helps make Spring stand tall in the field.

However, when a default configuration gets injected with a resource bean (eg: a DataSource), for an application to use that configuration with a different resource (ie: a DataSource pointing to a different DB), the entire configuration has to be copied and pasted into the application context and jiggled around so that the correct resource bean is injected at ApplicationContext boot time.

Ugly and painful!

What should happen is that a resource factory bean should be specified in the domain libraries, with applications providing their own implementation.
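A minimal sketch of what I mean (the names are made up): the library's default bean config depends only on an interface, and each application wires in its own implementation pointing at its own database.

import javax.sql.DataSource;

import org.springframework.jdbc.datasource.DriverManagerDataSource;

// Lives in the domain/persistence library: the only thing its default
// bean configuration needs to reference.
interface DataSourceProvider {
    DataSource dataSource();
}

// Lives in Application1 (Application2 supplies its own version targeting DatabaseB).
class Application1DataSourceProvider implements DataSourceProvider {

    @Override
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setUrl("jdbc:oracle:thin:@databaseA:1521:APP1"); // hypothetical URL
        return ds;
    }
}

The library's DAOs and default bean definitions get their DataSource from the provider bean, so an application only ever overrides that one bean instead of copy-pasting the whole configuration.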

This is of course another reason why wiring annotations in code is ultimately self-defeating, as this sort of architecting becomes more difficult, if not impossible, to do.

Fortunately we're Agile, so it shouldn't be too hard to clean up.

Monday, July 23, 2012

Making your architecture scream!

[Note: This is a repost of an entry I wrote for my company blog at http://www.intunity.com.au/2012/07/23/making-your-architecture-scream/]

I've mentioned before that I'm a big fan of Robert "Uncle Bob" Martin. One of his latest projects is the Clean Coders "Code Cast" series that I've been watching with some Intunity and client colleagues on a client site. Uncle Bob does have his quirky moments, but it's great to see the discussions that the material brings about within the team I'm working in.

The latest episode, on architecture, made me think of another project I worked on where the code was so tightly coupled to the DB that it was impossible to reuse the domain model (a mate of mine had a similar problem that I've written about before). This hampered development and led to some ugly code.

UB's point in the episode was that the architecture should scream at you what it does. His example was a Payroll system that should scream "accounting software" at the coders, not the technologies (eg: web, database, etc) used. Following on from that idea, my thoughts turned to the practice of Domain Driven Design, where we want to place as much logic (or behaviour) as possible into the domain model. After all, it's the place of the Dog class to tell us how it speaks(). So you should develop your domain model first (the part that screams at readers what the design is) and bolt on other features to meet requirements (with those requirements preferably defined as User Stories, in my opinion). The core of the architecture is the model, but you keep the ability to evolve the model and the other features of the application. This is great for the business because it can get more features released! The model can be exposed/manipulated in a SOA by moving representations of the model (resources) around (ala REST) - or not. Developers haven't been bound to a particular technology, which would hamper their ability to write useful code for the business.
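In code the Dog point looks as trivial as this sketch, but it's the heart of the idea: the behaviour belongs to the model, not to a service or database layer that interrogates the Dog's data.

// The behaviour lives in the domain object itself.
public class Dog {

    private final String name;

    public Dog(String name) {
        this.name = name;
    }

    // It's the Dog's job to know how it speaks.
    public String speaks() {
        return name + " says: Woof!";
    }
}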

However there are some business decisions that can cripple the ability of a team to achieve this outcome, and the architecture consequently whimpers. Usually that revolves around the purchasing decisions made by the business. In UB's episode the DB was a "bolt on" to the architecture: the DB was used to store information that was given "life" in the domain model. It can be added at the last responsible moment to reduce the risk to the business that their purchase will be in vain. The focus of the application was on the model, not the DB product. So what happens to your architecture (and all the benefits of a rich domain model) if your business engages a vendor to consult on your architecture whose business model is selling a product (or licenses for a product)?

Like UB, I like to see an architecture that screams at me what it's doing - that way I know that it's benefiting our clients, and that I can punch out feature, after feature, after feature.

Monday, June 25, 2012

Applying TDD to guitar amps

[Note: This is a repost of an entry I wrote for my company blog at http://www.intunity.com.au/2012/06/25/applying-tdd-to-guitar-amps/]

Here at Intunity we have a really big focus on quality. Making robust software that is flexible enough to meet clients' ever-changing needs (in a short amount of time) is hard. That's why we have all those TLAs floating around, and other Agile terms that make non-Agile practitioners wonder why we're talking about Rugby all the time. However, once good engineering principles soak in, you find that you tend to start applying them to other areas of your life. For me it was applying TDD to another problem: I thought my guitar amp had broken. Understandably that made me a really cranky product owner and I wanted it fixed yesterday!

If you don't know how guitar amps work, there are a few major sections: the input, the pre-amp, the effects loop, the power-amp and finally the cab (speakers), all connected in the order listed. Like hunting for a bug in a multi-layered application, I could have dived in and started randomly probing (or breaking out the soldering iron), but just as in software I would probably have wasted a lot of time. So I "wrote a test". I plugged in my trusty electric, tapped my line-out jack (which carries the same signal that is fed into the power-amp) and played something that I knew would expose the problem.

The result – it’s all good.

I didn't take it as a failure though, as I'd halved my search area with one test and ~1 minute of time. Now, the power-amp is a dangerous section (if you're not careful you can get electrocuted), so before I put my affairs in order I ran another test by plugging the amp into the cab and repeating the same input. The problem manifested itself again. Jiggling the cable around to test for (and eliminate) a dodgy cable, I found that the cable was loose in the cab socket. I opened the back of the cab, bent one of the metal contacts back so that it touched the cable better, and the problem was fixed - with no electrocution risk.

All up 3 tests and 10 minutes.

To be honest the story's pretty boring (unless you're a guitar gear nerd), but it highlights the value of TDD in that it changes how you think about problems (and how you solve them). By testing you're understanding (or defining) your system. You're learning about the problem domain and setting up conditions so that you can prove a problem exists (if you're bug hunting) and then fix it. It's also pretty obvious, when you're dealing with a piece of physical equipment that will cost you money if you break it, that you might want to be careful and not rush in. The result, ironically, is that by being "slower" you're "faster", because you're proceeding with knowledge.

One's software (or a client's software) is the same. By practising TDD you have less waste, faster code, better understanding, and that end product that all developers crave - a working piece of software.

Applying TDD to a more physical problem only helped me see more of the great advantages of the practice. After all, I do like saving time and money, and our clients do too :D

Thursday, May 31, 2012

Ubuntu continues to impress me

The story so far: my wife's been needing a new laptop. We've slowly been watching her Dell (an el cheapo she got ~6 years ago) die. A USB port here, the wireless doing random things there. We've been impressed at how long it held out, but I'm cheap (okay, a bargain hunter) and wanted to get something at a good price that's also Linux compatible. Here in Oz, Lenovo's having a massive clearance sale, and I'd been contemplating a Lenovo because they're pretty rock solid.

This is where the story gets better. I'm sure that every Linux user has had difficulty with hardware drivers. Either the company doesn't make them (eg: some Dell printers), or the drivers are buggy (and being closed source you can't fix them - or google for someone who has), and you just end up pulling out your hair.

Canonical, as part of its efforts to reach out to the business world, has been certifying hardware configurations. They've been in cahoots with Lenovo to certify laptop models, and lo and behold, the model I was interested in is 100% Ubuntu compliant.

So I picked up a ThinkPad Edge E520 1143AJ1. Nice i5 processor, integrated graphics (for the occasional game of StarCraft that she likes to play), and an anti-glare screen (which I had to pay extra for on my MacBook Pro because Apple are jerks).

It's perhaps not the most stylish of machines, but we can live with that. I was surprised by the weight; it feels lighter than its stated weight, which is a win I think. The first thing I noticed when booting it up was how SLOW it was!!! This machine is meant to be snappy, Lenovo, but you weighed it down with a tonne of bricks. It didn't come with recovery disks, just a recovery partition. Not wanting to lose the 15GB, I burned that to a series of DVDs, then booted up Ubuntu 12.04.

I'm not sure how you keep making an installer better over time (I was impressed with earlier versions), but the thing I noticed especially was that packages were being installed onto the system while I was still filling out configuration dialogs. Multitasking to the max. The partitioning dialog has received some polish since I last did an Ubuntu install, so I was able to carve up the HD properly. As a note, the Ubuntu documentation has also been improved, so I was able to quickly find the recommended partition sizes and adjust them to my needs.

Out popped a new computer! I booted it up and everything worked (not surprising really). Configuring system settings was a breeze, and I've noticed some UI similarities to OSX, which I don't mind, though I wonder how the lawyers feel about that. The boot time was so quick (even for a non-SSD machine) that I don't think I ever want to see Windows 7 again.

So far all is well, but I really wanted to commend Canonical for the constant innovations they're making in Linux/user integration. I'm a Gentoo hacker (when I can be these days) and love to play around with config to get an optimal system. But my wife is your "typical" user. She's not going to be rendering video, where some additional CPU flags to ffmpeg during compile can make a significant difference to transcoding time. She's not going to be compiling code, or doing any other task that requires significant CPU resources. It's email, web browsing, office documents - the usual suspects. I expect the most significant thing the CPU will do is Javascript processing, or the odd game here and there. So having a Linux distro that's easy to obtain, installs quickly and just works out of the box is awesome, and inspiring for the future of computing. Hopefully Ubuntu can continue to make serious inroads into communities and thus convert more people to the joys of Linux.

Wednesday, May 16, 2012

Perhaps Spring should move to its own DSL

I've been on and off musing about the differences between Spring's XML configuration and its Java annotations. I've debated this issue with colleagues, and the only answer they're able to give me (reading between the lines) as to why one should use annotations over the XML config boils down to "I don't like XML".

I have to agree somewhat. However, XML is a fact of programming life, and while it shouldn't be abused as a configuration language (Spring, JEE, etc), there's sufficient IDE/editor support to make using XML not that painful. In my view, you should still use the XML config over annotations for production code.

I've never worked on a project where a logical layout of XML config declaring beans and other services wasn't quickly understandable (as well as easily updatable). Contrast that with configuration in code, where classes are annotated (which is not the same as a bean definition) and "auto black magic" is applied; that has led me to spend a long time digging through code searching for the magic wand.

I've been writing my own DSL for a personal project using Antlr, and thus have been influenced by Parr's philosophy that humans shouldn't have to grok XML. I'm not as hardcore as Parr, but I understand the idea. Spring's XML config is fantastic, but should it be written in XML anymore? We've come a long way in tools; Antlr is ubiquitous in the Java world. There's no reason why SpringSource couldn't publish the grammar to allow third-party tools to be written to process a Spring DSL. Using tools like Xtext, editors could be knocked up to provide the same feature set that the Spring IDE tools provide for editing the XML config (I quite like the autocomplete feature when specifying the class attribute for a bean tag). It would also end the war between XML haters and those who see the value in text-based config. "I hate XML" would no longer be an acceptable answer.

Thursday, January 12, 2012

Flipping out over flipping the bit

I like Uncle Bob Martin. We've nearly finished his book Clean Code in our work study group, and I only disagree with ~5% of what he says. ;)

However, he's flipped out over Flipping the Bit. Referencing an article by Tim Fischer, UB has decided that because Fischer calls into question the value of writing Unit Tests 100% of the time, Fischer doesn't value testing (I think he does; he just has a bad design which makes his testing life harder).

Unit Tests don't automatically equal TDD; the T of course stands for Tests, but as we know, there are many levels of testing: unit, integration, end-to-end, etc. I'm all for TDD, so strongly in fact that I've occasionally bordered on being a zealot. Here I totally agree with UB's points about testing (TDD to be specific) bringing higher quality, "cheaper" code into existence. Fischer has it totally wrong that "[unit] tests are little-used for the development of enterprise applications." In my organisation we write Unit Tests all the time (as part of TDD), and they provide a high degree of feedback and value to the project. His point about the (purely monetary) cost of writing Unit Tests is true from a mathematical perspective; however, it's a cost worth paying.

Point (1) of UB's list is totally justified in being there. Reading Fischer's post, one can easily think that he hasn't grasped the point of TDD, because his examples talk about writing the tests after the implementation. UB is right to smack Fischer on the nose about this one.

Sadly, both these posts have kernels of truth woven into them, and I think UB missed the nugget in Fischer's post, which leads to UB's second (erroneous) point:

Unit tests don’t find all bugs because many bugs are integration bugs, not bugs in the unit-tested components.

Why is he wrong? Because Unit Tests != TDD. The jump there was astonishing to my mind; Superman couldn't have jumped that gap better! We do have to justify the existence of test code - but to ourselves, not to higher-ups or the Unit Test Compliance Squads. What value are these tests adding? How are they proving the correctness of my program and creating/improving my design/architecture?

If you're writing an Adapter (in the sense of the Growing Object Oriented Software, Guided By Tests book), then Unit Tests add little value to ensuring that the Adapter works correctly, because the Adapter is so tightly coupled to the Adaptee that you'd have to essentially replicate the Adaptee in fakes and stubs. Any bugs that happen in the Adapter will probably not show up in Unit Tests, because those bugs are signs that the developer misunderstood the behaviour of the Adaptee for a particular scenario, and would therefore have coded the fake/stub to be incorrect too. You've got a broken stub, an incorrect test, but a green light.

An example is a DAO. It is designed to abstract away access to the DB and is tightly coupled to the underlying DB technology (JPA, JDBC, etc). You don't want to Unit Test that. Integration tests add far more value/feedback with less code to maintain. Add in an in-memory DB and you've got easy, fast-ish tests that have found bugs in my code more times than I'd like. Unit Tests at the Adapter level have in the end only been deleted from my team's codebase, because they take time (therefore money) to maintain, replicate the testing logic of the Integration Tests, and give little feedback about what's going on down there. That's in line with Fischer's gripes: the costs of the tests outweigh the benefits.
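As a sketch of what I mean by such an integration test (the DAO and schema here are made up, and I'm assuming H2 as the in-memory DB): the test drives real SQL against a real database instead of stubbing out the JDBC plumbing.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical Adapter under test: thin, and welded to JDBC.
class JdbcProductDao {
    private final Connection connection;

    JdbcProductDao(Connection connection) {
        this.connection = connection;
    }

    int findIdByName(String name) throws Exception {
        PreparedStatement query =
                connection.prepareStatement("SELECT ID FROM PRODUCT WHERE NAME = ?");
        query.setString(1, name);
        ResultSet result = query.executeQuery();
        result.next();
        return result.getInt("ID");
    }
}

public class JdbcProductDaoIntegrationTest {

    private Connection connection;
    private JdbcProductDao dao;

    @Before
    public void createSchema() throws Exception {
        // In-memory H2 database: nothing to install, quick to spin up.
        connection = DriverManager.getConnection("jdbc:h2:mem:products;DB_CLOSE_DELAY=-1", "sa", "");
        Statement ddl = connection.createStatement();
        ddl.execute("DROP TABLE IF EXISTS PRODUCT");
        ddl.execute("CREATE TABLE PRODUCT (ID INT PRIMARY KEY, NAME VARCHAR(50))");
        ddl.execute("INSERT INTO PRODUCT VALUES (1, 'Widget')");
        dao = new JdbcProductDao(connection);
    }

    @Test
    public void findsTheProductByName() throws Exception {
        assertEquals(1, dao.findIdByName("Widget"));
    }
}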

Where Fischer goes seriously wrong is that he doesn't add all forms of testing into his money calculations, and doesn't realise that if you don't do TDD properly (where Unit Tests do play an integral part) you'll spend more money.

His pretty picture is flawed in that SomeMethod() is business logic (a Port) that uses data from several sources. However, a Port should never get the data directly; it should always go via an Adapter ("Tell, don't ask", SOLID, etc all show how good design ends up with this result). Hence SomeMethod() can be Unit Tested to the Nth degree, covering every scenario conceivable, because the Adapters can be mocked (and we own and understand them, hopefully), while the Adapters themselves are Integration Tested. Otherwise the amount of code required to set up what is essentially a Unit Test (because we're focused on the SomeMethod() unit) for every scenario of SomeMethod() becomes prohibitive. Developers being developers will slack off and not write them. If they do, the bean counters will get upset because the cost of developing/maintaining the tests increases as the tests are brittle. And if there is a bug, where is it located? SomeMethod(), the third-party "systems", the conduits in between? You spend more time and money tracking down the problem.
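A sketch of what that looks like (all the names here are made-up stand-ins for Fischer's SomeMethod() and its data sources): the business rule gets cheap, exhaustive unit tests because its Adapters are mocked, while the Adapters themselves get integration tests like the DAO one above.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// The Adapters the Port depends on. Their real implementations talk to
// third-party systems and are integration tested separately.
interface PricingAdapter {
    double basePriceFor(String sku);
}

interface StockAdapter {
    int unitsInStock(String sku);
}

// The Port: pure business logic, with no knowledge of where the data comes from.
class PriceCalculator {
    private final PricingAdapter pricing;
    private final StockAdapter stock;

    PriceCalculator(PricingAdapter pricing, StockAdapter stock) {
        this.pricing = pricing;
        this.stock = stock;
    }

    // Example rule: discount items that are overstocked.
    double quote(String sku) {
        double base = pricing.basePriceFor(sku);
        return stock.unitsInStock(sku) > 100 ? base * 0.9 : base;
    }
}

public class PriceCalculatorTest {

    @Test
    public void discountsOverstockedItems() {
        PricingAdapter pricing = mock(PricingAdapter.class);
        StockAdapter stock = mock(StockAdapter.class);
        when(pricing.basePriceFor("SKU-1")).thenReturn(100.0);
        when(stock.unitsInStock("SKU-1")).thenReturn(500);

        PriceCalculator calculator = new PriceCalculator(pricing, stock);

        // Every scenario of the business rule can be covered this cheaply.
        assertEquals(90.0, calculator.quote("SKU-1"), 0.001);
    }
}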

This is where Fischer throws the baby out with the bathwater. He has bad design.

I'm surprised that Uncle Bob didn't pick up on this, and instead focused (rightly) on Fischer's points about the cost side of not writing Unit Tests, which devolved (wrongly) into a rant about not writing tests at all.

TDD is the way to go (one should flip the bit for that), but Unit Tests are not always beneficial (eg: for Adapters) and can bring little ROI; in those cases the Integration Tests should be written first, with the Adapter being implemented to pass those tests. Having said that, if you're throwing Unit Tests out altogether, you've got a seriously flawed design.