A week in hell

“Shit always comes in threes,” I remember a colleague saying halfway through last week. Well, guess what? It was on discount, and for this one week it came in eights!

Developers at our company share the responsibility for the operations of our live environment. We have a (usually) competent hosting partner who helps us with the hardware, networking and the operating systems. From there on in, it’s up to us. We manage all of the installation, configuration and maintenance of the middleware and our own applications. In order to be available 24×7, we have a weekly rotating Support shift. This usually means you keep your phone about you, do everything you always do and get a little extra cash at the end of the month. Last week took a different road. Get ready for a long story, or, while you still can, skip to the part where I put down my lessons of the week.

Tuesday, too early, 6:00. I wake up because of the phone and find out that the base station is ringing, but our portable handset’s battery is empty, so I fly out of bed and run downstairs to try and catch the call there. Too late, just missed it. I check the caller ID and find out that it is our hosting partner, let’s call them Earthgoal for now. Knowing that Earthgoal only calls when there is something wrong, I turn on my laptop, put on a sweater and call back. Our biggest site is throwing errors left and right and is basically dead for all intents and purposes, it just doesn’t fully realize it yet. I’m not quite awake yet, but I try to shake off my fogginess and start my round of investigation. Simply restoring backups doesn’t always do the trick, so I have to find out whether that would help here. After a long time I realize I am getting nowhere, so I decide to call a colleague out of bed to brighten up his day and see if he can help me out. We investigate a little further and luckily find a cause.

It turned out that the batch process that refreshes the data on our sites had somehow managed to corrupt our search engine, and that our sites were throwing errors because of that. The batch process has a lot of failsafes, so this was not an easy thing to accomplish, but somehow it happened. We started a rollback to the data of the day before and went on to shower and get to work. Of course, when we got to work, we realized that we had kind of screwed up. The rollback we did had wiped out all of the corrupt data and all of the logging. We asked our search engine partner, let’s call them FrankJumper, for help, but FrankJumper couldn’t figure it out without anything to look at. DOH! This was going to happen again; all we had to do was wait for it.

Tuesday, quite inconvenient, 20:43. Of course, right when all the guests arrive and the celebration at my father-in-law’s birthday really gets going, I get a call from Earthgoal. Although they are always quite friendly, being from soft-spoken Belgium and all, it is not really a pleasant phone call to receive. The guy on the phone tells me that one of our webservers is down. Of course we are redundant on many levels, but having a server out is really not a good thing for performance. Time for action!

I hadn’t really counted on being called, so I had left my laptop at home. I try to do a bit of an investigation on my phone, but get annoyed quickly. We only live about a kilometer from my in-laws, so I walk home to my trusty laptop. I take a look at the server, and yeah, it is quite down. I restart it, it comes back up and after checking the logs I determine that everything is running smoothly again. Time to go join the celebration.

Ten minutes later I realized that it really wasn’t my sharpest day. I had restarted the server, but had forgotten to collect an autopsy kit first. We do of course store our log files, but I didn’t have a heap dump or anything else for the other developers to analyze in the morning. DOH! This was going to happen again; all we had to do was wait for it.

Wednesday. Nothing. Really, nothing!

Thursday, making up for a calm day, 6:00. As I drag myself from bed, kicking and screaming, I know it must be Earthgoal calling. This time the portable still has juice, and when I pick up, the gentle voice on the other end of the line is indeed carrying a Belgian accent. He announces that one of our FrankJumper servers has died and asks whether he can help out. I’m already hunched over my laptop, so I ask him for the details. After discussing the issue with him for a bit, I come to a completely unrelated conclusion: none of our sites work. They’re dead, Jim, they’re all dead!

Our sites are all completely siloed; the only cross-cutting concern is basically our network infrastructure. This same network had had scheduled maintenance done at 0:00 the night before. I already know it, but the friendly Belgian voice still attempts to find fault with our sites. I check a few things for and with him, then he goes quiet, informs me that he is going to ring up some colleagues, and hangs up.

My problem with the FrankJumper server of course still remained. Earthgoal hadn’t called for nothing, so I checked whether this time we did still have working FrankJumpers, and luckily we did. Our batch process had not yet gotten to the point where all of our servers were refreshed with corrupt data, so it was safe to kill that process. I left our sites running at a little over half strength so that the FrankJumper crew would have something to do an autopsy on.

After I had taken a shower and had some breakfast, Earthgoal called. They had found the problem and fixed it. It turned out that their maintenance had taken all of our servers out of the load balancer. I informed everybody that we were back online. We had been down for a little under 8 hours because someone forgot to check whether everything still worked. Amazing!

Thursday, DOH, 9:00. I head for work and get there after a freezing ride on my bicycle and a nice train journey to work. At work one of my colleagues informs me that one of our sites is still down and asks whether I know what is up. We investigate together, but quickly arrive at the conclusion that the problem of the morning had never been fixed for this site.

Earthgoal had “forgotten” a set of servers and I had only checked two of our six sites. Big mistake: that site had been down for another hour and a half without any decent reason, all because neither Earthgoal nor I had checked properly.

That day, however, the guys from FrankJumper did their autopsy and found the cause of the problem that I had had such a great time with on Tuesday and Thursday morning. It turned out that our batch process was calling the wrong script after it had sent fresh data to the FrankJumper search servers. This script worked most of the time, but sometimes, just sometimes, it would not work properly. Don’t ask me why you would want to have such a script in the first place, but at least now we knew which script to call. The lovely thing was that we had never run into this problem in 2 years of using this same script. Go figure, of course it decides to die in my week.

Thursday, Thanks!, 23:00. Being a little paranoid about our batch process after having had such a wonderful run with it, I check on its progress in the evening before going to bed. My intention, of course, is to make sure that I can have a quiet night. As I open up my mail, however, I see error reports for each of the different runs of the batch process, and it turns out that it has already collapsed completely for all of our sites. This batch process is also completely siloed, so there aren’t that many things that can cause this. I start investigating, but what I find simply doesn’t make any sense. The process had collapsed on data that was completely corrupt in one of our core databases. The database itself doesn’t enforce referential integrity (performance is a b***h), so somehow there is data referencing records that aren’t there at all. I send an email to my data colleagues and go to bed, certain that my next day is going to be another great day filled with data problems.

One of my data colleagues, let’s call him Pilbert, had made a slight mistake in one of his new transformations. Pilbert had taken the wrong ID field when inserting fresh data, and of course we have no quality assurance whatsoever in this department. He fixed it quickly and was of course quite sorry, but the damage was already done. I had spent another hour and a half working when I should have been sleeping, and we did not have fresh data on any of our sites for yet another day.

Saturday, right when you need it, 17:30. Earthgoal calls, right as my wife is about to leave for work. She helps me a bit, but she is already late, so she has to leave our two great but very young kids in my care, along with the problem I have just been handed. One of our web servers is down and it looks like what I encountered earlier this week. I decide to do it right this time. With a crying baby in my arms I find the wiki pages that help me collect the data that I need and then restart our servers. I write up an issue about the situation and attach the details. Nothing more to do on the work front, so it’s time to start cleaning up the chaos at home.

Sunday, interrupted dinner, 18:00. Does it never end? Right during dinner, my phone rings and I have to restart another web server. I repeat my steps of the day before and save myself about half an hour this time. Knowing the road is really a lot easier than navigating it for the first time.

Lessons I take away

I had eight incidents in one week’s time, which after some analysis come down to five different root causes:

  1. Software going funky (×2). FrankJumper suddenly started dying after working well for over 2 years.
  2. Corrupt data. Bad data in your databases can cause really weird behavior.
  3. Network changes. The load balancer config did not work for our situation.
  4. Carelessness. Neither Earthgoal nor I had checked all of the sites, so one of them never got fixed.
  5. Unidentified (×3). We still don’t know exactly what caused our web server crashes. We’ve come a long way, but we’re not there yet.

From these incidents, their causes and my, at times poor, way of managing them, I think I can learn the following. Many people have said these things before, but sometimes you just have to feel the pain before you realize the value of good advice:

  1. Always make sure you can do an autopsy. Whenever you take action to restore your site to a working state, make really sure that you have the data needed to figure out the problem later on (a small sketch of grabbing a heap dump follows this list).
  2. Automate or follow a script. A colleague of mine said it before, but now I realize it too: it pays off to have a script. When you get called in the morning you’re really not as focused as normal, so you really want foolproof instructions to save yourself from making mistakes. The best way is of course to have recovery automated, but that isn’t always financially attractive.
  3. Don’t trust easily. Testing is important in development, but it is even more important on live situations. When you think you have the situation resolved, take some time to verify that it really is resolved.
  4. Rollbacks should be quick. When you’re really in trouble and you need to roll back your data or software, you really don’t want to be down for too long. Think about your rollback plan up front and make sure it’s fast. The first time FrankJumper was having problems, we had our biggest site down for about 6 hours. 😦
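
To make the first lesson a bit more concrete: the heap dump I kept forgetting can be grabbed in a couple of lines before you bounce the JVM. This is only a minimal sketch, assuming a HotSpot JVM (HotSpotDiagnosticMXBean lives in com.sun.management, so it is not portable) and a made-up dump path; jmap -dump from the command line gets you the same thing.

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class AutopsyKit {
    public static void main(String[] args) throws Exception {
        // Proxy to the HotSpot diagnostic MXBean of the current JVM.
        // For a wedged production server you would point jmap or jconsole
        // at its PID instead; this only shows the call itself.
        HotSpotDiagnosticMXBean diagnostics = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // Hypothetical path; 'true' means only live objects end up in the dump.
        diagnostics.dumpHeap("/var/dumps/webserver-before-restart.hprof", true);
    }
}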

I hope my story helps others see the value of this advice; at least I can now see why it matters. 🙂

Posted in development, operations | Leave a comment

The Hollow Hype around DevOps

Tonight was another gathering of the nlscrum crowd. The speaker today was someone I had seen present at Devoxx before, Patrick Debois. I left his session at Devoxx quite unsatisfied, so I figured I’d take the opportunity of this smaller group to see what his brainchild, DevOps, is really all about.

The presentation Patrick and his co-host gave left me unsatisfied yet again. Patrick states that there is a big problem between Developers and Operations: Dev always think that Ops is not flexible enough, and Ops always say that Dev are cowboys. Admittedly, Patrick is a good presenter and brings a nice story. He tells a story from his development past about building an application that would occasionally crash and eventually burned to the ground. His point is that there is a big divide between how a developer thinks and how an operations person thinks. I think the differences come down to the following:

                      Developers            Operations
Focus on              Change                Stability
Are responsible for   New Functionality     Non-Functional Requirements
Work in               Iterations            Flow
Use process           Agile                 ITIL

My problem is that in both his presentations Patrick only puts down a problem, and the message is that we should change our mindset. We should embrace each other and learn to think alike. That’s really, really not tangible enough for me. I feel the problem, since I am to some extent in both a Dev and an Ops role, and I want some good advice on how to combine the two better. So far, the two DevOps presentations did not deliver.

At the end of our meetup we always have a lot of room for discussion. I put a topic on the board which was meant to lure Patrick and his co-host out into the open. I titled it “The Hollow Phrase – DevOps”, hoping to get a good discussion going. It appears I was being a little too Dutch and straightforward, so my fishing hook didn’t immediately get a bite. I managed to find Patrick anyway and opened up the discussion with him and a few others. It turns out there are a few tangible things we can take away when we think about having Dev and Ops play nice.

Lessons for Developers

Focus on Non-Functionals

Developers should really have Non Functional Requirements in mind when building an application. This sounds like a real no-brainer, but in most projects and teams, important things like scalability and performance are only an afterthought.

There is, however, a practical way to make sure that this is treated as core and gets attention: make sure the product owner has Non-Functionals in his field of vision. There are plenty of ways to do this right:

  • in smaller organizations, have the end-customer define what level of pain they are willing to take
  • in bigger organizations, make sure that an architect from the Operations department is a regular partner for the product owner

Share the responsibility

The problem gets worse when Developers are only responsible for delivering new functionality and Operations are only responsible for maintaining stability. Make sure both parties are responsible for changes and stability. Developers can also be held accountable for the number of incidents on their software after release. Operations can also be held accountable for the number of successful changes they manage to implement. In organizations with bonuses, it pays off to define the bonuses on both these levels.

Handling incidents in iterations

In organizations such as Amazon (and Compare Group 😉), the developers are also responsible for the operation of the software they build. The problem you run into with this combination is that your Sprint commitments are endangered the moment you run into an incident on your live environment. The solution is simple: assign incidents a value, just like the stories in your Sprint. This value can be in story points or in hours, whichever works for you. At any given time your commitment stands to deliver a certain number of value points in a Sprint. The only discussion you need to have is: what do we take out of the Sprint now that we have run into this incident? In my opinion, this should be a natural part of the negotiation between Product Owner and Scrum Master anyway, so use that mechanism.

Lessons for Operations

Use virtualization and cloud technology

New technologies such as virtualization can make an Operations team’s life a lot simpler. No longer are you reliant on that one physical server; in case of failure you can simply move your virtual machine somewhere else and be confident it will work again. This type of technology also makes scaling a lot easier, since it’s extremely simple to clone environments.

Infrastructure as code

Work on repeatable and durable solutions and stay away from quick one-time fixes. A good way to do this is to make automation a key part of your Operations. Setting up a server is something you can have a script do for you: write the installation process once and then run it twenty times. Automation not only saves time in the short run, it also prevents a lot of errors in the long run, because there are far fewer opportunities for errors between chair and keyboard.

Empower the business

Delegate recurring change requests to your customer. Simple change requests still take up a lot of an Operations team’s time, while the customer is quite capable of handling them himself if he has the necessary tooling. In many cases you will need to make a user interface available for your customer, but after that he is in control. He won’t need to get in touch with your helpdesk anymore to get his password changed; he can just receive a new one in the (e)mail. This frees up your time as Operations to work on longer-lasting changes.

Conclusion

DevOps makes some valid, if abstract, points. As a Developer it is important to think about what your software will do in a production environment. As an organization, it is important to make sure that the responsibility for Non-Functional Requirements does not rest solely with your Operations team.

There are also some concrete points to take away, but that requires peeling away some layers of the onion. That’s really a shame, since Patrick and both his co-hosts had good-looking presentations that had obviously received a lot of work. Still, it was good fun to discuss this with Patrick and his co-host, and I have every bit of respect for what he has accomplished so far.

Posted in agile, development, scrum | 7 Comments

Devoxx – day 3

My last day at Devoxx was the first day of the actual Conference. The mood is still superb and I really have spent every minute either talking to people or listening to sessions. I must say that this day felt a lot like an NLJUG day: I kept running into old acquaintances and long-lost friends. Also, as I started up conversations with people I’d never met before, I was greeted with the same enthusiasm.

From the sessions and the discussions I’ve managed to pick up quite a few points to take home, so let me go over them briefly as I wrap up my final Devoxx day.

Java 7 and 8
For those of you who’ve read the reports from JavaOne, skip this paragraph. Mark Reinhold mentioned nothing new, except for one thing which made me laugh. I’ll get to that at the end.

The Java 7 release has been postponed, funnily enough to my birthday next year, the 28th of July. It has also been scaled down significantly: some of the really big features that were announced earlier have been postponed until Java 8. What they did put in is really not that interesting to a Java Developer. Mark Reinhold called it a minor release, but someone on Twitter said it right: it doesn’t deserve the title Java 7, it should be Java 6.1. For a full feature list, check out:

http://openjdk.java.net/projects/jdk7/features/

The Java 8 release holds a lot more promise. Major features will include Project Lambda, which adds closure-like functionality to the Java language, and Project Jigsaw, which is supposed to take away all of our classpath issues by providing a module system.

Let’s start with the latter, the module system. Basically, the Java VM will support functionality similar to Maven’s dependency management. It’s a good idea: integrate the good stuff from the community into the core. ’Nuff said.
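
Jigsaw is still very much in flux, but the rough idea is a module declaration that lives next to your code and replaces classpath guesswork. A minimal sketch of the direction; the module names and the requires/exports split below are purely my own illustration, not the current straw-man spec:

// module-info.java at the root of the module (illustrative names)
module com.example.shop {
    requires com.example.search;   // resolved by the VM as a module, not scanned off the classpath
    exports com.example.shop.api;  // only this package is visible to other modules
}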

The thing everybody is pining for and would really like to see realized is the addition of Closures to Java. There are many languages that already support closures, such as JavaScript and Scala, but it would be really cool to see them in Java as well. Intrigued by all this, I attended Brian Goetz’s presentation on Project Lambda. He gave a very clear presentation in which he explained how and why they will be implementing closures.

The main reason Oracle is pushing this is parallelization. Systems are getting more and more cores, but the cores themselves are not getting any faster. In order to speed up programs, we need to go parallel. A loop that iterates over all elements in a collection is serial by design. If you instead tell the collection to run a function for each of its elements, the collection can decide to fork that work off into different threads and do it in parallel.
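
A minimal sketch of that difference, using nothing more than today’s java.util.concurrent; the greet example and the parallel helper are my own illustration of the idea, not the Project Lambda API:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ForEachSketch {

    // The caller dictates the iteration: serial by design.
    static void serialGreet(List<String> people) {
        for (String person : people) {
            System.out.println("Hello " + person);
        }
    }

    // Hand the "what" over and let the "how" be decided elsewhere:
    // here the work is forked across a thread pool, but the caller can't tell.
    static void parallelGreet(List<String> people) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Void>> tasks = new ArrayList<Callable<Void>>();
        for (final String person : people) {
            tasks.add(new Callable<Void>() {
                public Void call() {
                    System.out.println("Hello " + person);
                    return null;
                }
            });
        }
        pool.invokeAll(tasks); // runs the tasks in parallel and waits for all of them
        pool.shutdown();
    }
}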

I can’t give you all the details because that would take me an hour as well, but I can mention that it’s worth looking up on parleys.com. You can also try googling for it, since I’m sure this isn’t the first time he has given the presentation. I’ll give a quick teaser.

This Java code:

Collections.sort(people, new Comparator<Person>() {
    public int compare(Person a, Person b) {
        return a.getLastName().compareTo(b.getLastName());
    }
});

can be rewritten as:

people.sortBy(#Person.getLastName)

Finally, the slide that really made me laugh was the one where Mark Reinhold announced the submission of several JSRs for Project Lambda, Project Jigsaw and the features in JDK 7. Oracle says they’re taking the Java Community Process seriously, but really they’re only submitting JSRs after most of the work has already been done. A really good way to give the community a say in what goes on. Thanks, Oracle!

Java EE 6
As a JSF, Seam and general Java EE developer I of course had to attend some EE sessions to get an idea of what Java EE 6 will bring us. Unfortunately I won’t be attending the official keynote and the EE sessions tomorrow. I don’t want to go into speculation, so I’ll stick to what I found out.

CDI
The Java EE 6 container will support Dependency Injection, yay! It won’t be all over the board like in Spring, Seam or the like, but it’s a good start. I saw some nice demos with @Inject-annotated fields, and it looks like the need for Spring or Seam is diminishing. We’re not there yet by any means, so you’re stuck with your trusted helpers for quite a while longer.
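
A minimal sketch of what those demos looked like, using the standard javax.inject and javax.enterprise annotations; the PriceCalculator and ShoppingCart names are made up for illustration:

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

// PriceCalculator.java: a plain bean; with a beans.xml in place the container
// discovers it automatically, no further XML needed.
@RequestScoped
public class PriceCalculator {
    public double priceWithVat(double net) {
        return net * 1.19;
    }
}

// ShoppingCart.java: the container injects the dependency;
// @Named also exposes the bean to EL, and therefore to JSF pages.
@Named
@RequestScoped
public class ShoppingCart {

    @Inject
    private PriceCalculator calculator;

    public double total() {
        return calculator.priceWithVat(100.0);
    }
}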

JAX-RS
The new Java EE 6 spec has integrated the JAX-RS standard. This means that you will be able to make your beans available through URLs. It also ties in with JAXB, which allows you to put an annotation on a domain class and have it transformed into XML or JSON for you.
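
A minimal sketch of that combination, using the standard javax.ws.rs and JAXB annotations; the Person class and the /people path are my own illustration, not something from the talks:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.xml.bind.annotation.XmlRootElement;

// Person.java: JAXB turns this domain class into XML (or JSON, given the right provider).
@XmlRootElement
public class Person {
    public String firstName;
    public String lastName;
}

// PersonResource.java: exposed by the JAX-RS runtime at e.g. GET /people/42.
@Path("/people")
public class PersonResource {

    @GET
    @Path("{id}")
    @Produces({"application/xml", "application/json"})
    public Person byId(@PathParam("id") long id) {
        Person p = new Person();
        p.firstName = "Jane";
        p.lastName = "Doe";
        return p;
    }
}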

JSF 2.0
JSF 2.0 is righting some of the wrongs that the JSF 1.0 standard introduced. I’m still not a fan, but there are some good improvements. Facelets and the use of EL will become the default way of writing your pages; I really wouldn’t know how to write a JSF app without them.

Some small take-aways
Git
The version control system Git seems mildly interesting. The killer feature for me was “git grep”, which lets you search your entire repository for a specific string; pointed at older revisions, it even lets you dig through your repo’s history for where that string changed.

Performance tuning
Joshua Bloch had an interesting but short presentation on performance. His main point: performance is no longer predictable. As software and hardware systems advance and get more complex, you can’t say that something will take x seconds unless you actually measure it. His advice (a small measuring sketch follows the list):

  • Measure your overall performance
  • Measure many times to make your measurements statistically significant
  • Trust your libraries to do optimizations for you
  • Learn to live with some unpredictability
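
A minimal sketch of the measuring advice, assuming a made-up doWork() you want to time; a real benchmark needs warm-up and proper statistics, this only shows the shape of “measure, and measure many times”:

public class MeasureSketch {

    // Hypothetical piece of work we want to time.
    static double doWork() {
        double sum = 0;
        for (int i = 1; i < 1000000; i++) {
            sum += Math.sqrt(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        final int runs = 30;
        long[] samples = new long[runs];
        double sink = 0; // keep the results alive so the JIT cannot discard the work
        for (int run = 0; run < runs; run++) {
            long start = System.nanoTime();
            sink += doWork();
            samples[run] = System.nanoTime() - start;
        }
        // A single number tells you very little; look at the spread across runs.
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE, total = 0;
        for (long sample : samples) {
            min = Math.min(min, sample);
            max = Math.max(max, sample);
            total += sample;
        }
        System.out.printf("min %d us, avg %d us, max %d us (sink=%.1f)%n",
                min / 1000, total / runs / 1000, max / 1000, sink);
    }
}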

Devoxx conclusion
I’ve had a few organizational annoyances, but have really enjoyed the atmosphere, the technical content and the discussions with people. Overall the experience has been quite positive; I would recommend it to others.

It’s been great Devoxx, see you next year!

Posted in development, devoxx | Leave a comment

A good Devoxx day

Wow, it has been quite a day! I’ve attended quite a few interesting sessions, but most of all I met a lot of interesting people. I have the habit of starting up a conversation with the people I meet at a conference, but Devoxx makes it quite easy.

The downfall of Java EE
I went to post my blog and charge my laptop and found myself in the company of the guys who presented Spring Roo yesterday. We had a nice conversation about web development and Ben Alex from SpringSource opened my eyes to something. I’m still evaluating, but at least it triggered me. His statement is that all the development that is currently being done for Java EE is doomed. His argumentation starts with the statement that the future is in the cloud. Of course Ben is a VMware employee, so you might say he has an interest, but global sentiment is proving him right in his cloud statement so far. He goes on to state that the big application servers introduce an enormous overhead concerning startup and memory usage of applications. Cloud vendors want applications to be lightweight so they can start them on demand and decommission them when there is no demand. These two statements directly conflict with each other, hence the likelihood of the demise of Java EE.

As I’m writing this, I’m giving it some more thought, and although it sounds plausible, I’m not sure I agree. I do have to admit that the number of companies I know of that split the runtime of their EJBs from their webapps is limited, to say the least, so there is definitely a lot of waste in the spec. I also agree that there are things that need definite and immediate improvement, such as the further introduction of convention over configuration.

Where I hope he will be wrong is that I hope the Java EE spec will evolve into something that adopts the best practices that the people from SpringSource, Wicket, Restlet etc. introduce. The spec should not lead, like it tried to with EJB and JSF, but follow and consolidate good ideas, like the W3C did with the HTML4 and CSS2 standards.

Furthermore, there really isn’t that big a difference between having the SpringSource folks try to optimize our runtime and having Oracle or IBM try to do the same. It’s all a matter of motivation, and that usually stems from money lost or gained. As the cloud field progresses, the Java EE app servers are going to have to step up and do their job. As they foresee their revenues dwindling, they will gain the motivation to do just what is needed.

I’m hopeful we’re not heading down a dead end by implementing Java EE 6, but only time will tell.

What good is open source?
Another interesting conversation took place at the Quick, during a stop for a hamburger. I bumped into the guy I had just bashed on Twitter for presenting his web framework. I still stand by my statement that there are already way too many of these, but he, Marc Portier, had an interesting point on this. At his company they open sourced the web application framework they developed in-house. Of course the wisdom of writing your own web framework can be disputed; even in 2007 there were plenty of web frameworks available. Still, they took their labour and open sourced it, and then continued working with and on it. They are applying “eat your own dog food” to the extreme, while at the same time giving their customers the guarantee that the apps they write will always be maintainable. I doubt that their web framework, Kauri, will ever be the talk of the town. Nonetheless, they do occasionally get customers through the selling point of being open source, so open sourcing is bringing them something. I guess most Open Source projects start from a particular itch a developer wants to scratch, but this goes to show that even if your project does not skyrocket, you can still benefit from being Open Source.

The Java Posse
Finally, I got the chance today to meet and thank Dick Wall of the Java Posse. They have been bringing me the news I need for my development work, so it was great to shake his hand and thank him.

Posted in development, devoxx | 2 Comments

Scala – by a dummy

I started the second day by deciding to go hands-on for a day and attend some of the Labs sessions. The first session today was about Scala; unfortunately it was again riddled with organizational problems. We were supposed to be doing a hands-on lab, but there was no internet in the room to pick up the examples and slides, so we had to make do. The room was a blazing furnace when we started, but luckily that got sorted out after a while. By that time, however, I had gotten quite uncomfortable on the little foldout chair that I was sitting in, so really it felt like going from bad to worse.

The two presenters of this University session, Dick Wall and Bill Venners, were however making the most of the situation. They dove straight into the code and showed everything they were doing in the Scala REPL. I had gotten that to run before, so I could to some extent just type along with the examples they were demonstrating. I must say, it’s a very interesting experience to be able to just type a few lines of code, execute them and see what the result is. If you question what you’re doing, you just type it into the REPL and see whether your thoughts were correct.

Unfortunately I can’t share 3 hours worth of presentation in one post, so I won’t even try. Let me just share some of the things I found interesting about Scala.

First off, Scala allows you to write code that is almost Java in syntax and then lets you shorten and clean that up based on whether you think it looks good enough. There are so many shorthand ways of doing things that you can actually write really clean and concise code. You do have to get into the Scala mental model before you can do that comfortably, though. You have to be able to read the various operators that Scala supports in order to judge that. Dick said that in his head he had words for each of the operators he used, so that he could mentally translate the code into sentences and then judge whether the sentences read well. You will have some startup work to do, but it looks like getting your first code up and running as a solid Java Developer won’t take you more than a few days. I’ve heard varying opinions, but people generally seem to take somewhere between one and three months before they’re as productive as they were in Java.

Next, there are some pretty damned cool features in the language. A good example is the existence of traits, which basically allow a class to inherit behaviour from more than one other type. The name traits really says it all: you can add multiple traits to a class and give it more and more capabilities. Good stuff! Another example is the enormous extension that the language offers over Java’s switch concept. You can do almost anything with the match statement, as Scala calls it. These features really seem to allow you to get more done in less time, so that’s a definite plus.

Conclusion
It appears that Scala brings a lot of power and allows you to write code that is more readable than Java code. It really enables you to write DRY code, but it also allows you to write code that is completely unreadable, like some Perl programs I’ve seen. I find that both appealing and quite scary at the same time, so I’m still on the fence as to whether I really should start using Scala.

The best thing I took away from the session is that it’s quite possible to make a gradual switch and turn back if you don’t really like what you find. A good suggestion by the hosts was to start by writing tests in Scala for production code that you write in Java. This allows for a trial period and keeps you out of production code. It’s also possible to write jar files in Scala and use them from Java: you would write your interfaces in Java and then implement those in Scala code. This would allow you to do only segments of a system in Scala and let other places stay in familiar old Java (or Groovy, or Jython, or whatever).

Posted in development, devoxx | Leave a comment

Devoxx Day 1

The first day of Devoxx 2010 for me boils down to a nice comparison between the JBoss and SpringSource camps. I attended two different three-hour sessions today:

  • Seam 3 State of the Union
  • Increasing Developer productivity with Spring Roo

To put it simply, my comparison boils down to this:

The folks at JBoss can make something complex more complex, and can make an overly complex standard somewhat manageable. The folks at SpringSource make a Developer’s life easier and get out of the way if you want them to, but completely ignore any standards.

I’ll compare the two presentations both on content and on presentation.

The presentations
The day started with the session about Seam 3. At our company we’ve been programming with Seam 2 for more than two years, so I am somewhat familiar with the toolset. I’m not up to speed on the latest specs and techs, so a State of the Union approach sounded ideal. It turned out to be less than that.

The two presenters started fast and went through a lot of material at an immense pace. They raced through the Seam modules, not at an introductory level but at an advanced, in-depth level. I’ve worked with Seam quite a bit, but unfortunately I was lost on several occasions. The only real and interesting demo we were given was the generation of a Seam application, but that didn’t run.

The Spring talk was an entirely different matter. It was packed with real code running and real development being done on stage. The presenters told us up front what they were going to do, had a few slides about their topic and then showed us a nice working demo of the things they had just explained. They had quite a few technical difficulties, but those were all on account of the Devoxx organization. In the end, they managed to present things in an interesting fashion.

The heart of the matter – Seam
After attending quite a few conferences and seminars I have learned to look past the style of a presentation and judge the content itself. The Seam folks have given us a way to make the JSF specification somewhat manageable in the past, and it was quite important for me to get hold of their State of the Union, since we will want to switch to their latest tools in the near future. I take quite a few interesting and some quite boring points home from their presentation.

SeamFaces will keep helping us out where the JSF 2.0 standard still hasn’t managed to plug all of the holes that JSF 1.0 introduced. Dependency Injection and Expression Language can be used in all of the places you would really expect. Running actions when a view is rendered is still possible, and they managed to find a good way to get POST-REDIRECT-GET working with JSF. I’m grateful for their hard work on this, but the main question of course is: is there nobody with a brain on the JSF specification board? A very good tip from Dan was to have a look at PrettyFaces if you are tired of all of the plumbing code that you need for JSF. I will definitely have a look at that!

The second interesting addition seems to be SeamCatch. Basically this is a good way to handle exceptions throughout your application. It allows you to declare globally which Exceptions to catch and what to do with them. It also automatically unwraps the Exceptions so that you can get straight to the core of the problem. This promises to make our log files 25 times smaller in case of a database failure or something of the kind. Good stuff!
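
To illustrate just the unwrapping part in plain Java (this is not the actual SeamCatch API, only the idea behind it): walking the cause chain takes you from the soup of wrapper exceptions to the one that is actually worth logging.

public final class RootCause {

    private RootCause() {
    }

    // Walk the cause chain until the innermost exception is reached.
    public static Throwable of(Throwable t) {
        Throwable current = t;
        while (current.getCause() != null && current.getCause() != current) {
            current = current.getCause();
        }
        return current;
    }
}

Logging something like RootCause.of(e) instead of every nested EJB and JSF wrapper is roughly where that reduction in log size would come from; a global handler can do this for you in one place.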

Of course it wouldn’t be a JBoss presentation if they didn’t present something that left me completely baffled. They showed off a way of making all of your logging a lot more kludgy and a lot more complex. Their assumption is that you will be reusing your log messages a lot throughout your code and that this really needs a type-safe framework. The presenters showed us how to add five lines of code to every log line, as if logging wasn’t something that already takes way too much time anyway. Thanks for that, guys; get a real job, I would say.

There was promise in one last part of their presentation, but that remains a promise and is still under heavy development. The Seam folks have realized that SeamGen really doesn’t cut it and are working on its successor, SeamForge. It looked interesting, but the demo never made it any farther than creating some domain classes. The actual user interface never ran, and of course the usual comment was added: “it usually works fine. Really, trust me!”

The heart of the matter – Spring Roo
The Spring guys of course had a big advantage: they only had to demo their version of code generation, Spring Roo. They didn’t need to introduce Spring MVC, Spring Web Flow or any of that, since they had cut the presentation down to just Spring Roo. It turns out that makes for far more interesting listening.

The presentation was good, but the content was very interesting as well. Spring Roo is a code generation framework that allows you to create Spring MVC applications from a database schema or a domain model. This sounds good, but there are so many crap implementations of this idea that I wasn’t quickly impressed. It looks like they have quite a few of the details covered, and I was happy to learn that Spring Roo:

  • Supports customization of the user interface layer in a smart way
  • Allows you to regenerate your application once your DB schema changes, and they promise to keep everything intact

The thing with the Spring guys is of course their complete disregard for standards. Sure, you could theoretically implement a JSF user interface add-on for Spring Roo, but let’s not kid ourselves: that will never happen. The application runs in a servlet container, so things like EJBs are not a concept that will see the light of day in a Spring Roo release any time soon. Is that a bad thing? I’m not sure. They scared the hell out of me a few years back when they were suddenly going closed source, but that has all turned out quite all right. They make great stuff, but I’m on the fence.

Conclusion
Anyway, Spring Roo sounds like something I must check out; I guess I’ll be spending a few evenings coding in the near future. Seam, and in particular SeamFaces and SeamCatch, I will also have to dive into: it looks like it will make my day job easier if we can upgrade to this new version of Seam. SeamForge I am not so sure about.

I have one more session today, about Mylyn, but I must say, I am really looking forward to tomorrow. I hope to learn a lot more about the Java EE 6 spec. I am really behind and I know it, so maybe some of the stuff from the Seam presentation will land better once I’ve seen that.

Posted in development, devoxx | 5 Comments