Already receiving interesting emails about the network rollout.
I've got an email here that says, basically, "what the hell makes you better than the 'IT outsourcing companies' you want to sue - you're a stuck up b*tch with an ego the size of the USA continent".
Another message: "Am I the only one who thinks that building a network of this size should not take so long".
Fine: you build a network with half a dozen mission-critical applications that need to run both as local installations *and* in terminal services environments, with minimal documentation, plus an SBS server that was badly built without using the wizards, and see how you go.
I'll tell you what makes me and my miracle workers better than the companies that were on site before us. SKILL, ETHICS, INTEGRITY and the balls to stand up and speak honestly about the real situation on a network. No more, no less.
There was a Cisco switch that was manageable but unmanaged.
When I was employed, the Exchange server was within days of being shut down because it was so close to its 16 GB database size limit.
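For anyone wondering how close "close" was: the older Exchange Standard editions hard-capped the store at 16 GB, and a few lines of script tell you how much runway is left. A rough sketch (the 90% threshold is my own illustrative number; check your actual edition's cap before trusting the constant):

```python
LIMIT_BYTES = 16 * 1024**3  # 16 GB store cap on old Exchange Standard editions

def store_headroom(size_bytes, limit=LIMIT_BYTES):
    """Bytes of headroom left before the store hits its cap."""
    return limit - size_bytes

def nearly_full(size_bytes, threshold=0.9, limit=LIMIT_BYTES):
    """True once the store has burned through `threshold` of its cap."""
    return size_bytes >= threshold * limit
```

Feed it the size of the database file on disk and you know whether you have days or months. That is the kind of five-minute check that was apparently never done.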
The antivirus protection had not been updating for months because the antivirus product and SQL Server were configured to use the same port.
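A port clash like that is easy to detect once you think to look. A minimal sketch of the kind of check involved, assuming nothing about the actual products: try to bind the port, and if the bind fails, something already owns it.

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            return True  # bind refused: the port is taken
        return False
```

Run that against whatever port each product claims in its configuration (SQL Server's default is 1433, for instance) and the overlap shows up immediately.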
There were severe performance issues on the network. Maybe that was because of the *hubs* in the switch cabinet, and the 10Mbps hubs that were being used to share single network ports between at least 3 pieces of hardware... yes, that's right, 10Mbps, in a terminal services environment... what the hell use are 100Mbps or 1000Mbps network cards in computers if they're plugged into 10Mbps hubs?
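If the arithmetic isn't obvious: a hub is one shared, half-duplex collision domain, so every device hanging off it splits the same pipe. A back-of-the-envelope sketch:

```python
def best_case_share_mbps(link_mbps, hosts_sharing):
    """Best-case bandwidth per host on a shared hub segment.

    This ignores collision overhead, which only makes things worse:
    a busy half-duplex segment rarely delivers even this much.
    """
    return link_mbps / hosts_sharing

# Three devices sharing one 10Mbps hub port: ~3.3Mbps each, at best.
# The same devices on their own switched 100Mbps ports: 100Mbps each.
```

Roughly 3Mbps per host, before collisions, to carry terminal services traffic. No wonder it felt slow.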
There were 25 users, but only 10 user licences for mission critical software.
Other mission critical software was run on a terminal server using a single-user home licence...
The terminal server was a cheap and nasty white box. How did the previous IT company manage an imminent RAID failure? They swapped drives between different bays and hoped for the best... <jeez, where do they find these people???> Me? I told the powers that be that they were at risk of imminent hardware failure and asked them to please spend some money - they listened.
The tape backups had been failing for *4* months when I started because my predecessors had plugged the tape drive into a RAID controller instead of a SCSI card. Me? 5 minutes of googling and I had a cause and a fix... you explain to me why the previous IT companies could not do that.
When the RAID hardware on a mission critical terminal server started throwing up unrecoverable errors, they tried to tell me nothing was wrong and that the error was a false positive... I argued against that diagnosis, to no effect... within weeks the RAID suffered a catastrophic failure... bye bye server. I hate to say "I told you so", but "I TOLD YOU SO!!!". Well, at least when I speak nowadays people listen.
I can give you a myriad other reasons why my miracle workers are better than those who went before, but it boils down to this. Companies trust us (IT support providers) to guide them, advise them, and speak honestly to them when bad decisions are being made. You can either go the cheap path, the path ordered by those without the IT skills to understand the consequences, or you can stand up and say "you are wrong.. this will happen".
The willingness to stand up and say "you are wrong, this bad thing will happen" is what separates my miracle workers from the crowd of IT providers who look no further than their monthly payment cheque. For example, I am not willing to accept that a 40 Gig tape drive is sufficient to back up a 160 Gig database... you either purchase a larger tape drive or I walk. It's that simple. I am not going to be the bunny who is feeding backup tapes into a drive all day, and I am not the bunny who is going to try and use those split tapes to recreate a network.
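The tape maths behind that ultimatum is not complicated. A quick sketch (the 2:1 compression ratio is the drive vendors' marketing best case, not a number to bet a restore on):

```python
import math

def tapes_needed(data_gb, tape_capacity_gb, compression_ratio=1.0):
    """Minimum number of tapes for one full backup."""
    effective_gb = data_gb / compression_ratio
    return math.ceil(effective_gb / tape_capacity_gb)

# A 160 Gig database onto 40 Gig tapes: 4 tapes uncompressed,
# and still 2 tapes even at an optimistic 2:1 compression.
# Either way, somebody is standing there swapping tapes.
```

Every swap is a chance for a missed tape, and every split backup is a restore that depends on all the pieces surviving. That is not a risk I will sign my name to.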