The other day I was discussing integration work between products and how much scripting, data munging, and transformation it takes to make an ENMS architecture work. In the course of the discussion, I was taken aback a bit when asked about supporting a bunch of scripts. It definitely caught me off guard.
Part of what caught me off guard was that some folks believe products integrate tightly with one another without glueware and scripting. In fact, I got the impression that the products they had did enough for them out of the box.
So, why do you script?
To integrate products, tools, and technology. But most of all, INFORMATION. Scripts enable you to plumb data flow from product to product and feature to feature.
Think about the ETL tasks you have that grab data out of one application and fit it or compare it to data sets in other applications.
Think about all of those mundane reports that you do. The configuration data. The performance data. The post-mortems.
Rules of Thumb
1. There is no such thing as a temporary or disposable script. Scripts begin life as something simple and linear and end up living far longer than one would ever think.
2. There will never be time to document a script after you put it in place. You have to document as you go. In fact, I really like to leave notes and design considerations within the script.
3. You have to assume that sooner or later, someone else will need to maintain your script. Document the ingress and egress points and the expansion capabilities, and build in test cases.
4. Assume that portions of your code may be useful to others. Work to make things modular, reusable, extensible, and portable. Probably 70% of all scripting done by system administrators starts with a review of someone else's code. Given this, you should strive to set the example.
Things I like in Scripting
perldoc - Perldoc is the stuff. Document your code inside the code itself: your own modules, your scripts.
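Here is a minimal sketch of what that looks like: a trivial script (the script name and the status_line helper are invented for illustration) carrying its own documentation in a POD block that perldoc can render.

```perl
#!/usr/bin/perl
use strict;
use warnings;

=head1 NAME

poll_status.pl - trivial example of a script documented with inline POD

=head1 SYNOPSIS

  perl poll_status.pl
  perldoc poll_status.pl    # renders this documentation

=head1 DESCRIPTION

POD blocks are skipped by the interpreter, so the docs live in the
same file as the code and C<perldoc> can always find them.

=cut

# A hypothetical helper so there is something to document.
sub status_line {
    my ($host, $state) = @_;
    return sprintf '%s is %s', $host, uc $state;
}

print status_line('router1', 'up'), "\n";
```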
perl -MCPAN -e shell Getting modules to perform things - PRICELESS!
Templates. You need to build and use templates when developing code. Every function, subroutine, or code block needs documentation, test cases, logging, debugging, and return codes. Ultimately, this leads to much better consistency across the board, and code reviews get gauged around the template AND the functionality.
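As a sketch of the idea (the subroutine name and fields are made up, and the "real work" is a stand-in), a per-subroutine template might carry a doc header, debug logging, consistent return codes, and built-in test cases:

```perl
#!/usr/bin/perl
use strict;
use warnings;

our $DEBUG = 0;

# --- TEMPLATE: one block per subroutine ------------------------
# Name:        fetch_widget_count  (hypothetical)
# Purpose:     <what it does, and why>
# Inputs:      <arguments and their types>
# Returns:     (0, $value) on success, (1, $error) on failure
# Test cases:  see self-test at bottom of file
# ---------------------------------------------------------------
sub fetch_widget_count {
    my ($source) = @_;
    print STDERR "DEBUG: fetch_widget_count(@{[ $source // 'undef' ]})\n" if $DEBUG;

    return (1, 'no source given') unless defined $source && length $source;

    # ... real work would go here ...
    my $count = length $source;          # stand-in computation
    return (0, $count);
}

# Built-in test cases: opt in with PERL_SELFTEST=1
if ($ENV{PERL_SELFTEST}) {
    my ($rc, $val) = fetch_widget_count('inventory');
    die "FAIL: expected success\n" if $rc;
    ($rc, $val) = fetch_widget_count(undef);
    die "FAIL: expected error\n" unless $rc;
    print "self-test OK\n";
}
```

The consistent (status, value) return shape is what makes code reviews mechanical: every sub gets checked against the same template.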
Control ports - In long-running or daemon processes, control ports save your butt!
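A minimal sketch of the pattern, assuming a line-oriented protocol and made-up commands (status/debug/quit) on an arbitrary port: the command dispatch is separate from the socket loop so it can be tested without a live daemon.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

my $START = time;

# Map control commands to handlers; the command names are invented.
my %commands = (
    status => sub { sprintf "OK uptime=%ds", time - $START },
    debug  => sub { $main::DEBUG = 1; "OK debug on" },
    quit   => sub { "BYE" },
);

sub handle_command {
    my ($line) = @_;
    $line =~ s/\s+$//;
    my $h = $commands{lc $line};
    return $h ? $h->() : "ERR unknown command '$line'";
}

# In the real daemon, the main loop would accept() and serve clients:
sub run_control_port {
    my ($port) = @_;
    my $srv = IO::Socket::INET->new(
        LocalPort => $port, Listen => 5, ReuseAddr => 1,
    ) or die "control port $port: $!";
    while (my $c = $srv->accept) {
        while (defined(my $line = <$c>)) {
            my $reply = handle_command($line);
            print $c "$reply\n";
            last if $reply eq 'BYE';
        }
    }
}

# run_control_port(9099) if $ENV{RUN_DAEMON};   # opt-in; port number is arbitrary
```

Being able to telnet to a running daemon and ask it what it is doing is exactly the butt-saving part.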
getopt is your friend!!!
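For example, with core Getopt::Long (the option names and defaults below are invented for the sketch):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

# Parse a hypothetical poller's options; defaults are made up.
sub parse_args {
    my @argv = @_;
    my %opt = (interval => 300, debug => 0);
    local @ARGV = @argv;                  # GetOptions consumes @ARGV
    GetOptions(
        'host=s'     => \$opt{host},
        'interval=i' => \$opt{interval},
        'debug!'     => \$opt{debug},     # allows --debug / --no-debug
    ) or die "usage: $0 --host HOST [--interval SECS] [--debug]\n";
    return %opt;
}

my %opt = parse_args(@ARGV);
```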
STDERR is awesome for logging errors.
POE - POE lets you organize your code into callbacks and subroutines around an event loop.
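A tiny sketch of the shape, assuming POE is installed from CPAN (the "poll" event and tick counting are invented): each state is a callback, and the kernel's event loop drives them.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POE;   # CPAN module; install via: perl -MCPAN -e 'install POE'

# A hypothetical poller: each named state is a callback on the event loop.
POE::Session->create(
    inline_states => {
        _start => sub {
            $_[KERNEL]->yield('poll');          # queue the first poll
        },
        poll => sub {
            my ($kernel, $heap) = @_[KERNEL, HEAP];
            $main::TICKS = ++$heap->{ticks};
            print "poll #$heap->{ticks}\n";
            # re-schedule ourselves unless we are done
            $kernel->delay(poll => 0.25) if $heap->{ticks} < 3;
        },
    },
);

POE::Kernel->run;    # blocks until no events remain
```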
/usr/bin/logger is AWESOME! I have used the LOCAL0 facility as an impromptu message bus as many apps only log to LOCAL1-7.
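From a script you can hit the same facility without shelling out, using core Sys::Syslog; a sketch (the "eventbus" tag and message are invented, and the shell equivalent is shown in the comment):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Sys::Syslog qw(:standard);

# Shell equivalent:  /usr/bin/logger -p local0.info -t eventbus "link down on gi0/1"
sub bus_send {
    my ($msg) = @_;
    my $ok = eval {
        openlog('eventbus', 'pid', 'local0');   # LOCAL0 as the message bus
        syslog('info', '%s', $msg);
        closelog();
        1;
    };
    return $ok ? 1 : 0;   # 0 if no syslog transport is available
}

bus_send('link down on gi0/1');
```

Anything watching local0.* in syslog.conf then becomes a subscriber.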
Data::Dumper -- Nuff said!!!
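For the uninitiated, it turns any nested structure into readable Perl; the node hash below is made up:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

local $Data::Dumper::Sortkeys = 1;   # stable key order between runs
local $Data::Dumper::Indent   = 1;   # compact but readable layout

# A hypothetical device record mid-transform.
my %node = (
    name  => 'core-sw1',
    ifs   => [ 'gi0/1', 'gi0/2' ],
    state => { admin => 'up', oper => 'down' },
);

print Dumper(\%node);   # dump the whole structure while debugging
```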
Date::Manip - If you are dealing with date and time transformations, Date::Manip is your ace in the hole. It can translate a string like "last week" into to and from date-time stamps, and even on to a Unix time value.
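A sketch, assuming Date::Manip is installed from CPAN (using "yesterday" as the loose English phrase; it handles relative phrases like "last Tuesday" similarly):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Date::Manip;   # CPAN module

# Loose English phrase -> internal date -> formatted stamp and epoch.
my $date  = ParseDate('yesterday');
my $stamp = UnixDate($date, '%Y-%m-%d %H:%M:%S');
my $epoch = UnixDate($date, '%s');             # Unix time value

print "yesterday = $stamp (epoch $epoch)\n";
```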
Spreadsheet::WriteExcel - I love this module! It lets me build Excel spreadsheets on the fly, including multiple sheets, formulas, lookup tables, and even charts and graphs. And with an .xls file extension, most browsers know how to handle them. And EVERYONE knows how to work through a spreadsheet!
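A sketch of a tiny on-the-fly report, assuming the module is installed from CPAN (the file name, sheet name, and port data are all invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Spreadsheet::WriteExcel;   # CPAN module

my $file = 'port_report.xls';              # output name is arbitrary
my $wb   = Spreadsheet::WriteExcel->new($file)
    or die "can't create $file";
my $ws   = $wb->add_worksheet('Ports');    # one of possibly many sheets

# Header row, then data rows (row, column, value).
$ws->write(0, 0, 'Port');
$ws->write(0, 1, 'Errors');
$ws->write(1, 0, 'gi0/1');
$ws->write(1, 1, 42);
$ws->write(2, 0, 'gi0/2');
$ws->write(2, 1, 7);

$ws->write_formula(3, 1, '=SUM(B2:B3)');   # live formula in the totals row

$wb->close;
```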
ENMS products have a lot of scripting capabilities. Check out Impact. HP OO. BMC RBA. Logmatrix NerveCenter. Ionix SMARTs. The list goes on and on.
Bottom line - If you have integration work to do, you will need to script. It could be Perl, shell, Python, or whatever. The products just don't have enough cross-product functionality to fit themselves together out of the box. In fact, several products embrace scripting and scripting capabilities out of the box for exactly this reason. Even products within the same product line will require scripting and glueware when you really start using them. After all -> YOU ARE FITTING INFORMATION.