Wednesday, March 28, 2012

Time Planning - Simple is Good!

I have been through the gamut of day planners and professional time planners.  Like many software implementations, they tend to be overly complex - too complex to use when you're busy and too inflexible to adapt to your environment.

(I learned a bit from David, Matt, and Ben at OpenNMS, as they are masters at refactoring and simplifying software.)

There are certain inevitabilities that arise in every techie's world:


  • There is always far more work than is humanly doable given the current resources.
  • No additional resources are coming to your rescue.
  • There are always interruptions and priority situations.
  • Everything is an emergency.
  • You tend to deal with micro-management on occasion.
  • Priorities change ad nauseam.


The goals - to track what you need to do as it gets done, adapt to any new tasks and priorities, and produce concise status reports on a moment's notice without missing anything.

What I wanted was something simple and doable that lets me focus on the work rather than the planner.  Here's what I came up with:

On your first day of the work week, take some time to make up a list of things that need to be done. This list doesn't have to be only for tasks this week either. You can document anything here that you consider a task.

From the list, put each task/objective/goal on a sticky note - one task per note.

Prioritize the list. Stack them in the order you think is appropriate.

Now, stick the notes to the bulletin board in your cube.  This is the TO DO stack.

Begin working each task. For each thing you are currently working on, pull it off of the TO DO stack and put it in a WORKING stack. As you complete tasks, put them in a DONE stack.

Let's say Wednesday rolls around and you get a call to diagnose a problem or do some investigation work for a post mortem. Jot a quick note with the task and ticket # and put it in your WORKING stack. As you finish, put it in your DONE stack.

On Friday afternoon, when it's all said and done, list your DONE tasks in your weekly status.  Make sure you also list the things you're still working on from the WORKING stack and the things to be done from the TO DO stack!
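If it helps to see how little bookkeeping this really is, here is a rough sketch of the same idea in Perl (the task names are made up) - three stacks and tasks moving between them:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Three stacks, one task per "note"
    my @todo    = ('Patch DNS servers', 'Upgrade ENMS pollers', 'Write runbook');
    my @working = ();
    my @done    = ();

    # Pull the top note off TO DO and start working it
    push @working, shift @todo;

    # An interruption shows up mid-week - note it and work it
    push @working, 'TKT-1234: post mortem investigation';

    # Finish a task - move it to DONE
    push @done, shift @working;

    # Friday afternoon: the weekly status writes itself
    print "DONE:    $_\n" for @done;
    print "WORKING: $_\n" for @working;
    print "TO DO:   $_\n" for @todo;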

Design by Glossy

How many folks do you know that live in a world of Design By Glossy?  Invariably, some Director or VP goes to Interop or goes golfing with a bunch of Sales Weasels and ends up bringing in products without the first requirement documented and vetted.  Now, all of a sudden, the implementation is on the schedule, it's in your bucket, and you are still trying to figure out what problem this thing solves.

I cannot count how many times I've seen this, and it is a travesty.  It is devastating to morale, devastating to the services IT provides, and it is more disruptive than helpful.  Many times, you end up with shelfware, dumpware, junkware, or crippleware.  Here is a list of the damage these decisions cause:

1. Without requirements, how do you do effective test and acceptance? What criteria do you use? The application's glossy sheet?  How smart is that?

2. Has the workload been examined, vetted, and adjusted?  All too often, products are piled on top of other products such that the person cannot devote enough time to get the implementation anywhere close to done. Do you sacrifice people because of the new app and not even know it?

3. What has been or is being sold to the customer? Many times the guy doing the selling has never seen it work. Now, someone else is responsible for making things work when even the vendor cannot respond.

4. Who is responsible? Is it the buyer?  Or is it the implementer? Who is the chicken and who is the pig?  Remember, the pig has more to lose in the game than the chicken. Be fair.

Here are a few bad things that typically occur:

The guy sticks around for a couple of years, then bolts, leaving a trail of tears. That's OK.  They go on to the next gig thinking that if only somebody had implemented what they said, it would have been perfect.

Somebody somewhere in the organization is made a scapegoat for each one of these fiascos. The DBG person cannot be responsible. They claim they are just the Sales person or the Director/VP at that point.  It's up to someone else to be responsible.

After going through a couple of these, there are people who consider a change in careers. It is so demoralizing that some even have to seek counselling. Many others end up hating what they do.  Others tend to get as close as possible to the DBG in hopes that they become favored and not F@#$ed. (Kind of like the behavior seen in child and domestic abuse cases, where the victim gets as close to the perpetrator as possible to avoid more abuse.)

The DBG person becomes a Troll Magnet for Sales people. They attract sales people because they have a history of foolish purchasing.  No requirements. No POCs. Just run-and-gun POs. Here's the check! It is almost the perfect storm for sales.

They develop a history, or legacy, that technical people become wary of.  Strong technical people tend to push back. And when they do, the DBG person will resort to OSI Layer 8 - the Political Layer. Without a grounding in solid technical merit, their DBG choices have to be defended on non-technical criteria.

Conclusion - This is the danger of politicians being elevated into leadership roles in technical environments without paying their dues.  They think they understand engineering and engineering concepts, but they only pay lip service.  Even those with Engineering or Comp Sci backgrounds will do the same thing when they have not evolved technically over time. To them, it's more important who you know than whether a product actually solves a problem.

Any sign of a DBG in charge can cost the organization a significant amount of money, time, and people.

Saturday, March 24, 2012

ENMS Integration - Scripting versus OOTB

The other day I was discussing integration work between products and how much scripting, data munging, and transformation it takes to make an ENMS architecture work.  In the course of the discussion, I was taken aback a bit when asked about supporting a bunch of scripts.  It definitely caught me off guard.

Part of what caught me off guard was that some folks believe that products integrate tightly together without glueware and scripting. In fact, I got the impression that the products they had did enough out of the box for them.

So, why do you script?

To integrate products, tools, and technology - but most of all, INFORMATION. Scripts let you plumb data flow from product to product and feature to feature.

Think about CGI tools, forms, and ASP pages. All scripting.  Even the Javascript inside of your HTML that interfaces with Dojo libs on a server... SCRIPT.

Think about the ETL tasks you have that grab data out of one application and fit it or compare it to data sets in other applications.
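As a hypothetical sketch (the file names and CSV layout here are made up, not from any particular product), an ETL-style comparison script can be as simple as loading one export and subtracting the other:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical exports: a CMDB dump and a monitoring tool dump, both CSV with host in column 1
    my %in_cmdb;
    open my $cmdb, '<', 'cmdb_export.csv' or die "cmdb_export.csv: $!";
    while (<$cmdb>) {
        chomp;
        my ($host) = split /,/;
        $in_cmdb{lc $host} = 1;
    }
    close $cmdb;

    open my $mon, '<', 'monitoring_export.csv' or die "monitoring_export.csv: $!";
    while (<$mon>) {
        chomp;
        my ($host) = split /,/;
        delete $in_cmdb{lc $host};
    }
    close $mon;

    # Whatever is left in the CMDB is not being monitored
    print "Not monitored: $_\n" for sort keys %in_cmdb;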

Think about all of those mundane reports that you do. The configuration data.  The performance data.  The post mortems you do.

Rules of Thumb

1. There is no such thing as a temporary or disposable script.  Scripts begin life as something simple and linear and end up living far longer than one would ever think.

2. There will never be time to document a script after you put it in place.  You have to document as you go.  In fact, I really like to leave notes and design considerations within the script.

3. You have to assume that sooner or later, someone else will need to maintain your script. You have to document ingress and egress points and expansion capabilities, and build in test cases.

4. Assume that portions of your code may be usable by others.  Work to make things modular, reusable, extensible, and portable.  Probably 70% of all scripting done by System Administrators starts with reviewing someone else's code. Given this, you should strive to set the example.

Things I like in Scripting

perldoc - Perldoc is the stuff.  Document your code inside of your own code.  Your own module. Your script.
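A minimal example of what that looks like (the script name and synopsis are invented) - POD stitched right into the script, so perldoc yourscript.pl hands back the documentation:

    #!/usr/bin/perl
    use strict;
    use warnings;

    =head1 NAME

    poll_summary.pl - roll up poller stats into a daily summary (example name)

    =head1 SYNOPSIS

      poll_summary.pl --date 2012-03-28

    =head1 DESCRIPTION

    Design notes and considerations live here, right next to the code they describe.

    =cut

    print "See: perldoc $0\n";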

perl -MCPAN -e shell - Getting modules to perform things for you - PRICELESS!

Templates.  You need to build and use templates when developing code.  For each function / sub-routine / code block / whatever, you need documentation, test cases, logging, debugging, and return codes. Ultimately, it leads to much better consistency across the board, and code reviews get gauged around the template AND the functionality.
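Here is a rough sketch of such a template (the sub name and behavior are placeholders) with the pieces baked in - doc, debugging, error logging, and a return code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    =head2 fetch_node_status

    Takes a node name. Returns 0 on success, non-zero on failure.
    Test case: fetch_node_status('localhost') should return 0.

    =cut

    sub fetch_node_status {
        my ($node) = @_;

        # Validate input - return a non-zero code instead of dying
        unless (defined $node && length $node) {
            print STDERR "ERROR: fetch_node_status called without a node\n";
            return 1;
        }
        print STDERR "DEBUG: fetch_node_status($node)\n" if $ENV{DEBUG};

        # ... the real work goes here ...
        return 0;
    }

    my $node = @ARGV ? $ARGV[0] : 'localhost';
    exit fetch_node_status($node);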

Control ports - In long running or daemon processes, control ports save your Butt!
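As a hedged sketch (the port number and commands are made up), a daemon can listen on a localhost-only control port so you can ask it what it is doing without restarting it. In a real daemon this accept loop would be multiplexed alongside the actual work (select, POE, etc.):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Socket::INET;

    # Hypothetical control port, bound to localhost only
    my $ctl = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1',
        LocalPort => 7777,
        Proto     => 'tcp',
        Listen    => 5,
        Reuse     => 1,
    ) or die "control port: $!";

    my $events_seen = 0;

    while (my $client = $ctl->accept) {
        my $cmd = <$client>;
        $cmd = '' unless defined $cmd;
        chomp $cmd;

        if ($cmd eq 'status') {
            print $client "events_seen=$events_seen\n";
        } elsif ($cmd eq 'quit') {
            print $client "bye\n";
            close $client;
            last;
        } else {
            print $client "commands: status, quit\n";
        }
        close $client;
    }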

getopt is your friend!!!
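For example, with Getopt::Long (option names here are just illustrative), and with the errors going to STDERR as mentioned below:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Getopt::Long;

    my %opt = (interval => 300, debug => 0);
    GetOptions(
        'node=s'     => \$opt{node},      # --node router1
        'interval=i' => \$opt{interval},  # --interval 60
        'debug'      => \$opt{debug},     # --debug
    ) or die "usage: $0 --node <name> [--interval secs] [--debug]\n";

    die "--node is required\n" unless $opt{node};
    print STDERR "DEBUG: polling $opt{node} every $opt{interval}s\n" if $opt{debug};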

STDERR is awesome for logging errors.

POE - POE lets you organize your code into callbacks and subroutines around an event loop.
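A bare-bones POE skeleton (the event names are just examples) - everything hangs off the event loop:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POE;

    POE::Session->create(
        inline_states => {
            _start => sub {
                # Kick off a recurring "poll" event
                $_[KERNEL]->delay(poll => 1);
            },
            poll => sub {
                print "polling...\n";
                $_[KERNEL]->delay(poll => 30);   # reschedule ourselves
            },
        },
    );

    POE::Kernel->run();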

/usr/bin/logger is AWESOME! I have used the LOCAL0 facility as an impromptu message bus as many apps only log to LOCAL1-7.
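For example (the tag and message are illustrative), dropping a line on the LOCAL0 "bus" from a script looks like this - either by shelling out to logger or staying in Perl with the core Sys::Syslog module:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Shell out to logger - tag the message so downstream consumers can filter on it
    system('/usr/bin/logger', '-p', 'local0.info', '-t', 'enms-glue',
           'node router1 resync complete');

    # Or stay in Perl with Sys::Syslog
    use Sys::Syslog;
    openlog('enms-glue', 'pid', 'local0');
    syslog('info', 'node %s resync complete', 'router1');
    closelog();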

Data::Dumper --  Nuff said!!!
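For instance, dumping a whole data structure while debugging:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Data::Dumper;

    my %node = ( name => 'router1', ifaces => [ 'ge-0/0/0', 'ge-0/0/1' ] );
    print Dumper(\%node);   # the whole structure, nested refs and all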

Date::Manip -- If you are dealing with date and time transformations, Date::Manip is your ace in the hole. It can translate a string like "last week" into to and from date-time stamps and even on to a Unix time value.
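Something like this (using "1 week ago" as the example string):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Date::Manip;

    # Turn a human string into a date, then into whatever form you need
    my $d = ParseDate('1 week ago');
    print UnixDate($d, '%Y-%m-%d %H:%M:%S'), "\n";   # date-time stamp
    print UnixDate($d, '%s'), "\n";                  # Unix epoch seconds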

Spreadsheet::WriteExcel -- I love this module! It lets me build Excel spreadsheets on the fly, including multiple sheets, formulas, lookup tables, and even charts and graphs. And using an .xls file extension, most browsers know how to handle them.  And EVERYONE knows how to work through a spreadsheet!
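A small sketch (the file name, sheet names, and numbers are made up) with two sheets and a cross-sheet formula:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Spreadsheet::WriteExcel;

    my $wb      = Spreadsheet::WriteExcel->new('weekly_report.xls');
    my $summary = $wb->add_worksheet('Summary');
    my $detail  = $wb->add_worksheet('Detail');

    # Detail sheet: raw numbers
    $detail->write(0, 0, 'Node');
    $detail->write(0, 1, 'Outages');
    $detail->write(1, 0, 'router1');
    $detail->write(1, 1, 3);
    $detail->write(2, 0, 'router2');
    $detail->write(2, 1, 1);

    # Summary sheet: a formula pulling from the detail sheet
    $summary->write(0, 0, 'Total outages');
    $summary->write_formula(0, 1, '=SUM(Detail!B2:B3)');

    $wb->close();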

ENMS products have a lot of scripting capabilities.  Check out Impact. HP OO. BMC RBA. Logmatrix NerveCenter. Ionix SMARTs. The list goes on and on.

Bottom line - If you have integration work to do, you will need to script. It could be perl, shell, python, or whatever. The products just don't have enough cross-product functionality to fit themselves together out of the box.  In fact, several products embrace scripting and scripting capabilities out of the box. Even products within the same product line will require scripting and glueware when you really start using the products. After all -> YOU ARE FITTING INFORMATION.