Monday, July 23, 2012

Making your architecture scream!

[Note: This is a repost of an entry I wrote for my company blog at http://www.intunity.com.au/2012/07/23/making-your-architecture-scream/]

I've mentioned before that I'm a big fan of Robert "Uncle Bob" Martin. One of his latest projects is the Clean Coders "Code Cast", which I've been watching with some Intunity+Client colleagues on a client site. Uncle Bob does have his quirky moments, but it's great to see the discussions that the material brings about within the team I'm working in.

The latest episode, on architecture, made me think of another project I worked on, which was so tightly coupled to the DB that it was impossible to reuse the domain model (a mate of mine had a similar problem that I've written about before). This hampered development and led to some ugly code.

UB's point in the episode was that the architecture should scream at you what it does. His example was of a Payroll system that should scream "accounting software" at the coders, not the technologies (eg: web, database, etc) used. Following on from that idea, my thoughts turned to the practice of Domain Driven Design, where we want to place as much logic (or behaviour) as possible into the domain model. After all, it's the place of the Dog class to tell us how it speaks(). So that means you should develop your domain model first (that which screams at readers what the initial design is) and bolt on other features to meet requirements (with those requirements preferably defined with User Stories, in my opinion).

The core of the architecture is the model, but with the ability to evolve the model and the other features of the application. This is great for the business because it can get more features released! The model can be exposed/manipulated in a SOA by moving representations of the model (resources) around (a la REST) - or not. Developers aren't bound to a particular technology that hampers their ability to write useful code for the business.
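
To make that concrete, here's a minimal Java sketch of behaviour living in the model (the Dog class is just the toy example above, not code from any real project):

    // A rich domain object: the behaviour lives with the data it needs.
    public class Dog {
        private final String name;

        public Dog(String name) {
            this.name = name;
        }

        // The Dog tells us how it speaks; callers don't interrogate its
        // internals and decide for it ("tell, don't ask").
        public String speak() {
            return name + " says: Woof!";
        }
    }

The persistence, web and messaging layers then become details bolted on around objects like this, rather than the other way around.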

However, there are business decisions that can cripple a team's ability to achieve this outcome, leaving an architecture that consequently whimpers. Usually they revolve around the purchasing decisions made by the business. In UB's episode the DB was a "bolt on" to the architecture. The DB was used to store information that was given "life" in the domain model. It can be added at the last responsible moment, reducing the risk to the business that its purchase will be in vain. The focus of the application was in the model, not the DB product. So what happens to your architecture (and all the benefits of a rich domain model) if your business engages a vendor to consult on its architecture whose business model is selling a product (or licenses for a product)?

Like UB, I like to see an architecture that screams at me what it's doing - that way I know that it's benefiting our clients, and that I can punch out feature after feature after feature.

Monday, June 25, 2012

Applying TDD to guitar amps

[Note: This is a repost of an entry I wrote for my company blog at http://www.intunity.com.au/2012/06/25/applying-tdd-to-guitar-amps/]

Here at Intunity we have a really big focus on quality. Making robust software that is flexible enough to meet clients' ever-changing needs (in a short amount of time) is hard. That's why we have all those TLAs floating around, and other Agile terms that make non-Agile practitioners wonder why we're talking about Rugby all the time. However, once good engineering principles soak in, you find that you tend to start applying them to other areas of your life. For me it was applying TDD to another problem - I thought my guitar amp had broken. Understandably that made me a really cranky product owner, and I wanted it fixed yesterday!

If you don't know how guitar amps work, there are a few major sections: the input, the pre-amp, the effects loop, the power-amp and finally the cab (speakers), all connected in the order listed. Like hunting for a bug in a multi-layered application, I could have dived in and started randomly probing (or breaking out the soldering iron), but just as in software, I would probably have wasted a lot of time. So I "wrote a test". I plugged in my trusty electric, tapped the line-out jack (the signal there is the same one fed into the power-amp) and played something that I knew would expose the problem.

The result – it’s all good.

I didn't take it as a failure though, as I'd halved my search area with one test and ~1 minute of time. Now, the power-amp is a dangerous section (if you're not careful you can get electrocuted), so before I put my affairs in order, I ran another test by plugging the amp into the cab and repeating the same input. The problem manifested itself again. Jiggling the cable around to test for (and eliminate) a dodgy cable, I found that the cable was loose in the cab socket. I opened the back of the cab and bent one of the metal contacts back so that it gripped the cable better, and the problem was fixed - with no electrocution risk.

All up 3 tests and 10 minutes.

To be honest the story's pretty boring (unless you're a guitar gear nerd), but it highlights the value of TDD in that it changes how you think about problems (and how you solve them). By testing, you're understanding (or defining) your system. You're learning about the problem domain and setting up conditions so that you can prove a problem exists (if you're bug hunting) and then fix it. It's also pretty obvious, when you're dealing with a piece of physical equipment that will cost you money if you break it, that you might want to be careful and not rush in. Ironically, the result is that by being "slower" you're "faster", because you're proceeding with knowledge.

Your software (or your client's software) is the same. By practising TDD you have less waste, faster code, better understanding, and that end product that all developers crave - a working piece of software.
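
To drag the analogy back into code, here's a minimal JUnit 4 sketch (PreAmp and its gain behaviour are invented purely for illustration): the test that exposes the fault is written first, and only then do you reach for the soldering iron.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // PreAmp is an invented stand-in for the suspect component.
    public class PreAmpTest {

        // Written before touching the implementation: if this fails, the
        // fault is in the pre-amp; if it passes, look further down the chain.
        @Test
        public void appliesGainToTheInputSignal() {
            PreAmp preAmp = new PreAmp(2.0); // gain factor of 2
            assertEquals(4.0, preAmp.process(2.0), 0.0001);
        }
    }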

Applying TDD to a more physical problem only helped me see more of the great advantages of the practice. After all, I do like saving time and money, and our clients do too :D

Thursday, May 31, 2012

Ubuntu continues to impress me

The story so far is that my wife's been needing a new laptop. We've slowly been watching her Dell (an el cheapo she got ~6 years ago) die: a USB port here, the wireless doing random things there. We've been impressed at how long it held out, but I'm cheap (sorry, a bargain hunter) and wanted to get something at a good price that's also Linux compatible. Here in Oz, Lenovo's having a massive clearance sale, and I'd been contemplating a Lenovo because they're pretty rock solid.

This is where the story gets better. I'm sure that every Linux user has had difficulty with hardware drivers. Either the company doesn't make them (eg: some Dell printers), or the drivers are buggy (and being closed source you can't fix them - or google for someone who has), and you just end up pulling out your hair.

Canonical, as part of its efforts to reach out to the business world, has been certifying hardware configurations. They've been in cahoots with Lenovo to certify laptop models, and lo and behold, the model I was interested in is 100% Ubuntu compliant.

So I picked up a ThinkPad Edge E520 1143AJ1. Nice i5 processor, integrated graphics (for the occasional game of StarCraft that she likes to play), and an anti-glare screen (which I had to pay extra for on my MacBook Pro because Apple are jerks).

It's perhaps not the most stylish of machines, but we can live with that. I was surprised by the weight: it feels lighter than its stated weight, which I count as a win. The first thing I noticed when booting it up was how SLOW it was!!! This machine is meant to be snappy, Lenovo, but you weighed it down with a tonne of bricks. It didn't come with recovery disks, just a recovery partition. Not wanting to lose the 15GB, I burned that to a series of DVDs, then booted up Ubuntu 12.04.

I'm not sure how you can make an installer better over time (I was impressed with earlier versions), but the one thing I especially noticed was the installation of packages onto the system while I was still filling out configuration dialogs. Multitasking to the max. The partitioning dialog has received some polish since I last did an Ubuntu install, so I was able to carve up the HD properly. The Ubuntu documentation has also been improved, so I was able to quickly find the recommended partition sizes and adjust them to my needs.

Out popped a new computer! I booted it up and everything worked (not surprising, really). Configuring system settings was a breeze, and I've noticed some UI similarities to OS X, which I don't mind, though I wonder how the lawyers feel about that. The boot time was so quick (even without an SSD) that I don't think I ever want to see Windows 7 again.

So far all is well, and I really want to commend Canonical for the constant innovations they're making in Linux/user integration. I'm a Gentoo hacker (when I can be, these days) and love to play around with config to get an optimal system. But my wife is your "typical" user. She's not going to be rendering video, where some additional CPU flags to ffmpeg at compile time can make a significant difference to transcoding speed. She's not going to be compiling code, or doing any other task that requires significant CPU resources. It's email, web browsing, office documents - the usual suspects. I expect the most significant thing the CPU will do is JavaScript processing, or the odd game here and there. So having a Linux distro that's easy to obtain, installs quickly and just works out of the box is just so awesome, and inspiring for the future of computing. Hopefully Ubuntu can continue to make serious inroads into communities and thus convert more people to the joys of Linux.

Wednesday, May 16, 2012

Perhaps Spring should move to its own DSL

I've been musing on and off about the differences between Spring's XML configuration and its Java annotations. I've debated the issue with colleagues, and the only answer they're able to give me (reading between the lines) as to why one should use annotations over the XML config boils down to "I don't like XML".

I have to agree somewhat. However, XML is a fact of programming life, and while it shouldn't be abused as a configuration language (Spring, JEE, etc), there's sufficient IDE/editor support to make using XML not that painful. For production code, I'd still use the XML config over annotations.

I've never worked on a project where a logically laid out XML config declaring beans and other services wasn't quick to understand (and easy to update). Contrast that with configuration in code, where classes are annotated (which is not the same as a bean definition) and "auto black magic" is applied; that has led me to spend a long time digging through code searching for the magic wand.

I've been writing my own DSL for a personal project using Antlr, and thus have been influenced by Parr's philosophy that humans shouldn't have to grok XML. I'm not as hardcore as Parr, but I understand the idea. Spring's XML config is fantastic, but should it be written in XML anymore? We've come a long way in tooling, and Antlr is ubiquitous in the Java world. There's no reason why SpringSource couldn't publish the grammar to allow third-party tools to be written to process the Spring DSL. Using tools like Xtext, editors could be knocked up to provide the same feature set that the Spring IDE tools provide for editing the XML config (I quite like the autocomplete feature when specifying the class attribute for a bean tag). It would also end the war between XML haters and those who see the value in text-based config. "I hate XML" would no longer be an acceptable answer.
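
To make the comparison concrete, here's a bean definition in today's XML config, followed by a purely hypothetical sketch of a dedicated DSL (the DSL syntax is my own invention, not anything SpringSource has published, and the bean/class names are made up):

    <!-- Today's XML config: a standard Spring bean definition -->
    <bean id="orderService" class="com.example.OrderService">
        <constructor-arg ref="orderRepository"/>
        <property name="retryLimit" value="3"/>
    </bean>

    // A hypothetical DSL equivalent - illustrative only
    bean orderService : com.example.OrderService {
        constructor-arg orderRepository
        retryLimit = 3
    }

Same information, far less angle-bracket noise, and a published Antlr grammar would give third-party tooling a stable target to build editors against.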

Thursday, January 12, 2012

Flipping out over flipping the bit

I like Uncle Bob Martin. We've nearly finished his book Clean Code in our work study group, and I only disagree with ~5% of what he says. ;)

However, he's flipped out over Flipping the Bit. Referencing an article by Tim Fischer, UB has decided that because Fischer calls into question the value of writing Unit Tests 100% of the time, Fischer doesn't value testing (I think he does; he just has a bad design that makes his testing life harder).

Unit Tests obviously don't equal TDD. The T of course stands for Tests, but as we know there are many levels of testing: unit, integration, end-to-end, etc. I'm all for TDD - so strongly, in fact, that I've occasionally bordered on being a zealot. Here I totally agree with UB's points about testing (TDD, to be specific) bringing higher quality, "cheaper" code into existence. Fischer has it totally wrong that "[unit] tests are little-used for the development of enterprise applications." In my organisation we write Unit Tests all the time (as part of TDD), and they provide a high degree of feedback and value to the project. His point about the (purely monetary) cost of writing Unit Tests is true from a mathematical perspective; however, it's a cost worth paying.

Point (1) of UB's list is totally justified in being there. Reading Fischer's post, one can easily think that he hasn't grasped the point of TDD, because his examples talk about writing the tests after the implementation. UB is right to smack Fischer on the nose about this one.

Sadly, there are kernels of truth woven into both these posts, and I think UB missed the nugget in Fischer's post, which leads to UB's second (erroneous) point:

Unit tests don’t find all bugs because many bugs are integration bugs, not bugs in the unit-tested components.

Why is he wrong? Because Unit Tests != TDD. The jump there was astonishing to my mind; Superman couldn't have jumped that gap better! We do have to justify the existence of test code - but to ourselves, not to higher-ups or the Unit Test Compliance Squads. What value are these tests adding? How are they proving the correctness of my program and creating/improving my design/architecture?

If you're writing an Adapter (from the Growing Object-Oriented Software, Guided by Tests book), then Unit Tests add little value to ensuring that the Adapter works correctly, because the Adapter is so tightly coupled to the Adaptee that you'd essentially have to replicate the Adaptee in fakes and stubs. Any bugs in the Adapter will probably not show up in Unit Tests, because those bugs are signs that the developer misunderstood the behaviour of the Adaptee for a particular scenario, and would therefore have coded the fake/stub incorrectly. You've got a broken stub, an incorrect test, but a green light.

An example is a DAO. It is designed to abstract away access to the DB and is tightly coupled to the underlying DB technology (JPA, JDBC, etc). You don't want to Unit Test that. Integration Tests add far more value/feedback with less code to maintain. Add in an in-memory DB and you've got easy, fast-ish tests that have found bugs in my code far more times than I'd like. Unit Tests at the Adapter level have in the end only been deleted from my team's codebase, because they take time (and therefore money) to maintain, replicate the testing logic of the Integration Tests, and give little feedback about what's going on down there. That's in line with Fischer's gripes: the costs of those tests outweigh the benefits.
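
As a hedged sketch of the shape of such a test (UserDao and User are invented example classes; H2 is just one choice of in-memory DB):

    import static org.junit.Assert.assertEquals;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    import org.junit.Test;

    public class UserDaoIntegrationTest {

        // An Integration Test: exercise the real DAO against a real (if
        // in-memory) database, rather than stubbing out the DB technology.
        @Test
        public void savedUserCanBeFoundById() throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
            }

            UserDao dao = new UserDao(conn); // UserDao and User are invented
            dao.save(new User(1, "Alice"));

            assertEquals("Alice", dao.findById(1).getName());
        }
    }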

Where Fischer goes seriously wrong is that he doesn't factor all forms of testing into his money calculations, and doesn't realise that if you don't do TDD properly (where Unit Tests do play an integral part) you'll spend more money.

His pretty picture is flawed in that SomeMethod() is business logic (a Port) that uses data from several sources. However, a Port should never get the data directly; it should always go via an Adapter ("Tell, don't ask", SOLID, etc all show how good design ends up with this result). Hence SomeMethod() can be Unit Tested to the Nth degree, covering every scenario conceivable, because the Adapters (which we own and hopefully understand) can be mocked, while the Adapters themselves are Integration Tested. Otherwise the amount of code required to set up what is essentially a Unit Test (because we're focused on the SomeMethod() unit) for every scenario becomes prohibitive. Developers being developers will slack off and not write them. If they do, the bean counters will get upset, because the cost of developing/maintaining the tests increases as the tests become brittle. And if there is a bug, where is it located? SomeMethod(), the third-party "systems", the conduits in between? So you spend more time and money tracking down a problem.
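
A minimal Mockito sketch of that split (all names here are invented; only the shape matters) - the Port's logic is Unit Tested with its Adapter mocked, while the Adapter itself gets Integration Tests as above:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class PricingServiceTest {

        // The Port (business logic) is tested in isolation; the Adapter it
        // pulls data through is a mock that we own and understand.
        @Test
        public void appliesTheDiscountSuppliedByTheRateAdapter() {
            RateAdapter rates = mock(RateAdapter.class); // invented interface
            when(rates.discountFor("gold")).thenReturn(0.10);

            PricingService service = new PricingService(rates); // invented Port
            assertEquals(90.0, service.priceFor("gold", 100.0), 0.0001);
        }
    }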

This is where Fischer throws the baby out with the bathwater. He has a bad design.

I'm surprised that Uncle Bob didn't pick up on this, and instead focused (rightly) on Fischer's points about the cost side of not writing Unit Tests, which devolved (wrongly) into a rant about not writing tests at all.

TDD is the way to go (one should flip the bit for that), but Unit Tests are not always beneficial (eg: for Adapters) and can bring little ROI; there, the Integration Tests should be written first, with the Adapter implemented to pass those tests. Having said that, if you're throwing Unit Tests out altogether, you've got a seriously flawed design.

Tuesday, November 29, 2011

Eclipse moving forwards

Most Java devs (and others too) have a love/hate relationship with Eclipse. Many a flame war has been had on the subject.

From my own personal experience, I think Eclipse is moving in the right direction. Helios and Indigo both feel snappier, and the installation/upgrading of plugins is easier.

I had to upgrade my work instance of Eclipse, and even though there were conflicting versions of different plugins, the conflicts were easier to resolve than in previous versions of Eclipse.

Some of the other language editors could do with some love (PHP, for example), but overall I'm finding my productivity increasing with the later versions of Eclipse, and I have to wrestle with it less.

Wednesday, June 22, 2011

Server configuration with Mercurial

I've been playing a lot lately with Mercurial, and in my opinion it's the best SCM around. I've also been administering some of my servers (actually Rackspace VMs) and ran into the age-old problem that sysadmins have always had: keeping the server config synchronised.

The problem in a nutshell is that you install a set of applications (through yum or apt-get or whatever) and configure them. You then run into the problems of config propagation, versioning/history, rolling back to a known configuration, etc. I've seen a few sysadmins roll their own solution, usually involving rsync and a lot of logging.

My solution was to use Hg to do all the heavy lifting, with a wrapper bash script (knocked up very quickly) that invokes Hg to version configuration files. It keeps track of where each file came from by storing it at the same path relative to the repository directory. For example, if a user edits /etc/hosts, the file will reside in the repository at $REPOS_HOME/etc/hosts

When you run
$ editconf <file>
the script does the following:

  1. Resolves the absolute path of the file (using a realpath bash script that a friend knocked up)

  2. Checks whether the file exists in the repository; if it doesn't, the file is copied in and added

  3. Drops through to the user's editor (defaults to vim)

  4. Copies the file back to the repository when the user exits the editor

  5. Attempts to commit the file


If the file is being created for the first time (skipping the initial check), the script is smart enough to add it first. Mistakes are fixed by leaving an empty commit message (most SCMs won't commit one, of course) and reverting the file.

Branches can be made, merged, and state can be pushed around various servers with very little effort.
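
For example (assuming the repository lives at $REPOS_HOME and the other server is reachable over ssh; the branch name and paths here are illustrative):

$ hg -R $REPOS_HOME branch apache-tuning
$ hg -R $REPOS_HOME commit -m "Trying a new apache config"
$ hg -R $REPOS_HOME push ssh://otherserver//var/config-repo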

I mentioned this approach to some people; they thought it was a nifty idea and asked me to share my scripts. They can be found on BitBucket under the nesupport[1] SysTools project. They're licensed under the MPL, and since they were knocked up in a hurry, patches/feedback are always welcome. Further instructions can be found in the scripts themselves (if further action is required).

Further work could include the automated sharing of repository state (cron job) and synchronising what's in the repo with what's on the filesystem.

[1] The code was developed for a project I'm working on with somebody else who agreed to open source our sys admin scripts, of which editconf is a part.