Monthly Archives: April 2013

“It’s more of an art than a science”

I’ll be honest, this phrase bothers me. Perhaps it’s because I’m a scientist by training. Perhaps it’s because this seems to be a misuse of the word ‘art’, or a misinterpretation on my part. But whenever I hear it used with reference to software development, I hear: “we use heuristics and guesswork because we don’t have time to do research and there is no body of research from which to draw”. Does that really make the solution to an underlying question or problem an ‘art’ rather than a science?

I of course tried googling the phrase to determine what it’s supposed to mean, but didn’t get very far. The top result from my search was:

It means it is not something which is governed by clearly-defined rules, as a science would be. In science things are either right or wrong; in psychology (or any art) it’s not possible to say what is ‘right’ or ‘wrong’.

This particular answer seems to misunderstand what science is. In essence, science is a way of understanding how the world works through experimentation and verification. It’s also typically methodical, because reproducibility of research results is important.

Perhaps the reference to rules is based on experience with things like mechanics in Physics and Newton’s laws of motion. We can predict the trajectory of projectiles in the air for example. But this is a very limited view of science. I have been watching the programme Horizon on the BBC recently and learned about the science of taste and even creativity. Yes, we’re learning through science about how creativity works!

At the end of the 19th Century and beginning of the 20th Century, we thought that we would soon learn everything there was to learn about the world. Over a century later, there still seems to be no end to the growth of our scientific knowledge. And things that used to be firmly considered “arts” are much less so now.

Consider cooking: more and more chefs are learning the basic science of both taste and cooking. From that base of understanding they can be even more creative in what they do. It allows people like Heston Blumenthal to create bacon and egg ice cream or snail porridge. If you’re interested, McGee on Food and Cooking is an essential read on the underlying science.

This also highlights an important point: creativity and science are in no way mutually exclusive. In fact, each enables the other. As I mentioned, a scientific base allows for more creativity because of the deeper understanding of how things work, but creativity is also essential in providing insights into how things work.

Coming back to the original point of this post, my ire was recently raised by a discussion on Hacker News where someone wrote:

I’m not sure that there is a sure-fire way to quantify what tests are or are not necessary. In my opinion, this is something that comes with experience and is more of an art than a science. But I’m okay with that.

This seems innocuous enough and I wouldn’t be surprised if many people agree with it. But do we really think that it’s not possible to learn through research what a good level of testing is? Software is typically structured and full of patterns, so the pool of possible structures to investigate is limited. In addition, we already have tools to measure cyclomatic complexity and other software metrics, so would it be so hard to determine which parts of the software are involved in the “critical” features?
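
To make that concrete, here is a minimal sketch of the kind of analysis I have in mind; it’s my own illustration rather than any real research tool. It uses Python’s standard ast module to rank the functions in a file by a rough cyclomatic complexity count, on the (loud) assumption that the most complex code is the first candidate for testing.

    import ast
    import sys

    # Branch constructs that each add a path through a function; counting
    # them is a crude approximation of McCabe's cyclomatic complexity.
    BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While,
                    ast.ExceptHandler, ast.BoolOp)

    def complexity(func):
        # Start at 1 (the straight-line path) and add one per branch construct.
        return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))

    def rank_functions(source):
        tree = ast.parse(source)
        funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
        # Most complex functions first: a first guess at where tests matter most.
        return sorted(((f.name, complexity(f)) for f in funcs),
                      key=lambda pair: pair[1], reverse=True)

    if __name__ == "__main__":
        with open(sys.argv[1]) as fh:
            for name, score in rank_functions(fh.read()):
                print(f"{score:3d}  {name}")

Point it at a source file and the most branch-heavy functions come out on top. A real study would obviously need much better metrics than this, but the raw material is already sitting there waiting to be analysed.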

I think what bothers me the most is that despite the huge revenue of the industry as a whole, and how much money depends on the successful completion of IT projects, so little research seems to be done to help improve the software development process. Perhaps the research is being done but it’s not widely disseminated. But I would at least have expected to come across research to back up the claims of agile practitioners (as one example). Not that I necessarily disagree with what they say, but it seems that going agile requires more faith than should be necessary.

Do the software development industry and community need a more scientific mindset? What do you think?

Where next for Grails?

A time comes for every open source project when it has to take a step back, reflect on the past and decide where it needs to go next. The world rarely stays the same as when the project was born and in the tech world things change year on year. Back when Grails first came out, Java web frameworks were still a pain to configure and you had to pull together a whole host of libraries to do what you wanted. The use of conventions was rare and ease of use didn’t seem to be a high priority.

Now in 2013 there are probably hundreds of web frameworks for the JVM, most of which have learned lessons from Rails and its ilk. In addition, we have Node.js spawning lookalikes and derivatives (such as Meteor) aiming to provide lightweight servers that can handle hundreds of thousands of concurrent connections through asynchronous IO. In this world, what is Grails’ value proposition and where should it be heading?

This is a pretty long post, so if you want to skip straight to my suggestions, head on down to the conclusion at the end. But I’m hopeful you’ll find value in the rest of the article!
