Datacenter Confidential #10?..

Yes, let's call this one ten. Nine needs a rewrite anyway.

I just spent 24 of the last 48 hours racking and stacking, because one of my hosting companies decided they needed to move me.

I'm venting toward the middle of the room, since I'm pretty sure they are 4+1 CRAC units short of what they'll need if they fill the room up with servers (not to mention the power, but that's another issue -- I have three extra 20-amp circuits for an insanely cheap $150/mo, so I'm good to go and fuck everyone else, haw haw). Also, 80 amps per cabinet is twice what the hosting company upstairs lets me get away with, although, as I said before, the lack of air conditioning and UPS down there may bite me later -- nothing too important down there, though.

So, here's how you know you've been working on racking and stacking too long: when, at 5am, you drop a 1U grid machine (don't worry, the thing is probably worth $350 in today's dollars, but it cost $1700 back in 2005 when we bought it -- rest in peace).

Power, space, cooling... first principles. Next week I have to find space, power, and cooling for 18 new machines (including another SAN tray). I have ROR for space adjacent to my 5 upstairs racks, but they aren't sure they can give me two 20-amp circuits. *FROWNY FACE*. I saw someone carting out three dozen Dells last week, so there had better fucking be power. I hope we don't have to go downstairs, because then I'll have to go with something cheaper for SAN, like a sucky commodity 3ware on-board JBOD.
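For what it's worth, here's a back-of-napkin sketch of why two 20-amp circuits makes me nervous. The per-box amperage is my guess, not a measurement, and the SAN tray will pull more than a plain 1U box; the 80% continuous-load derating on each breaker is the only number here I'd actually stand behind.

```python
# Back-of-napkin power budget for the 18 new machines.
# Assumptions (mine, not measured): ~2A draw per 1U box at 120V,
# and the usual 80% continuous-load derating on each breaker.

CIRCUITS = 2          # the two 20-amp circuits I'm asking for
BREAKER_AMPS = 20
DERATE = 0.80         # don't run a breaker past 80% continuously
AMPS_PER_BOX = 2.0    # guess; measure with a clamp meter if you care
MACHINES = 18

usable_amps = CIRCUITS * BREAKER_AMPS * DERATE
needed_amps = MACHINES * AMPS_PER_BOX

print(f"usable: {usable_amps:.0f}A, needed: {needed_amps:.0f}A")
print("fits" if needed_amps <= usable_amps else "does not fit")
# usable: 32A, needed: 36A -> does not fit at 2A/box; at 1.5A/box it squeaks by
```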

Oh, also, I wanted to return to the idea of mise en place when talking about jobs like this.

At minimum, each machine getting racked needs two rails, measured to length, four square nuts with threaded screw-holes, and four screws. For the thirty-five or so machines I moved, that's 70 rails, 140 nuts, and 140 screws. How does one divide up this tedious labor (the most tedious part, in my opinion, being adjusting each rail's length to the horizontal distance between the front and back risers on the rack, left and right)? By 5am this morning, I had devised the following attack strategy:

Working 5 machines at a time (5 boxes, 10 rails, 20 nuts, 20 screws), I marked off and installed the nuts, measured (or had pre-measured) a length template for the rails, adjusted all the rails (not including the template), and hand-installed each rail, starting the screws with my fingers and tightening or adjusting them with a screwdriver once the machines slid into the rails.
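If you want the mise en place spelled out, the batching looks roughly like this. The per-machine parts counts are from above; the batch size of five is just what my patience could tolerate, not anything scientific.

```python
# Mise en place for rack rails: same per-machine parts list as above,
# chunked into batches of five so you're never juggling the whole pile.

RAILS_PER_MACHINE = 2
NUTS_PER_MACHINE = 4
SCREWS_PER_MACHINE = 4
BATCH_SIZE = 5
TOTAL_MACHINES = 35

for start in range(0, TOTAL_MACHINES, BATCH_SIZE):
    batch = min(BATCH_SIZE, TOTAL_MACHINES - start)
    print(f"batch {start // BATCH_SIZE + 1}: {batch} boxes, "
          f"{batch * RAILS_PER_MACHINE} rails, "
          f"{batch * NUTS_PER_MACHINE} nuts, "
          f"{batch * SCREWS_PER_MACHINE} screws")

# Totals: 35 boxes -> 70 rails, 140 nuts, 140 screws, seven trips to the parts bin.
```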

Then I would drop the power (because my retarded hosting company put the power receptacles on top of the racks instead of inside them, at the bottom), bundling 5 power cables at a time on the left, and pull and drop the cat5 on the right. Cat5 goes in first, then power, then a KVM to verify (a) that I'm plugged into the right NIC (all hosts were dual-NIC) and (b) that the machine actually boots.
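Not that I was scripting anything at 5am, but the "right NIC" check is the sort of thing you can sanity-check from the console instead of squinting at link lights. A minimal sketch, assuming a Linux host with sysfs mounted, that just reads link state out of /sys/class/net:

```python
# Minimal sketch: see which NIC has link by reading operstate from
# /sys/class/net. Assumes a Linux host with sysfs; run from the console
# after plugging in the cat5. (You still need the KVM to see it boot.)

import os

SYS_NET = "/sys/class/net"

for iface in sorted(os.listdir(SYS_NET)):
    if iface == "lo":
        continue
    try:
        with open(os.path.join(SYS_NET, iface, "operstate")) as f:
            state = f.read().strip()
    except OSError:
        state = "unknown"
    print(f"{iface}: {state}")
# Expect one "up" (the port you cabled) and one "down" on a dual-NIC box.
```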

Which brings me to admin pet peeve #272 -- there is never, EVER, *EVER* any reason to boot into runlevel >3. I'm talking to YOU, Lunix lusers.

A datacenter grid machine or server NEVER EVER EVER needs some fucking gimpy stupid fucking douchebag GUI. It's in a RACK, it's not a DESKTOP, idiot. So, to the previous admins who initially installed these machines, I say: thanks, thanks for making me edit /etc/inittab to undo your stupid runlevel 5 moronocity. On a dozen machines!
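If you have to undo this on a dozen hosts, it's barely a script's worth of work per box. A minimal sketch, assuming classic sysvinit with an id:5:initdefault: line in /etc/inittab (run as root, keep the backup, and obviously don't point this at anything using a different init):

```python
# Minimal sketch: flip the default runlevel in /etc/inittab from 5 to 3.
# Assumes classic sysvinit with an "id:5:initdefault:" line; takes effect
# on the next boot. Needs root, and keeps a backup just in case.

import re
import shutil

INITTAB = "/etc/inittab"

shutil.copy2(INITTAB, INITTAB + ".bak")  # keep a copy before touching it

with open(INITTAB) as f:
    contents = f.read()

# Only touch the initdefault line; leave everything else alone.
fixed = re.sub(r"^id:5:initdefault:", "id:3:initdefault:",
               contents, flags=re.MULTILINE)

if fixed != contents:
    with open(INITTAB, "w") as f:
        f.write(fixed)
    print("default runlevel set to 3 (takes effect next boot)")
else:
    print("no id:5:initdefault: line found; nothing changed")
```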

Next week: software RAID, stupid or just mean-spirited?
