Sunday, September 3, 2017

Project Alphaberry - Part 6 - The hard parts

Can you believe that yet another few months have passed? A lot of what I planned in Part 5 has been working out really well, but not everything. In this post I'll cover what went well, what isn't going well, and where the project stands today.

First, let's get the bad out of the way: picking the operating systems. This turned out to be a complete nightmare that I didn't anticipate. I started simple: Raspbian is just Debian Jessie at its core, so why not use Debian Jessie on the Alpha? Well, Debian, I found out, is an operating system for power users. It's all kinds of minimal, and if you want any third-party hardware support, you have to bake it into the install media. That was fine for the networking devices, but when it came to the video card, my nightmares started. The Alienware Alpha R1 has an embedded Nvidia chipset. That wouldn't have been a huge issue, but it is the only video output on the system (it also has a video input, which was interesting). I could get Debian installed, but on the first boot into the OS, it would just hang trying to load the display drivers. Research showed that Nvidia and Debian are not super friendly, and the built-in drivers are pretty much horrible for the card, but I couldn't even get to a bootable state to install the Nvidia drivers. That is when I decided that running Debian on the Alpha was a non-option. I don't want it to be a huge headache to build the OS up from scratch if I ever need to rebuild the cluster.
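For the record, the standard advice I kept finding (and never got to work) was to boot with the open-source driver disabled, then pull the proprietary driver from Debian's non-free repo. Roughly, and treat this as a sketch rather than a recipe:

    # At the GRUB menu, press 'e' and append to the end of the linux line:
    #   nomodeset modprobe.blacklist=nouveau
    # Then, once (if) you get to a console, enable non-free and install the driver:
    sudo sed -i 's/jessie main/jessie main contrib non-free/' /etc/apt/sources.list
    sudo apt-get update
    sudo apt-get install nvidia-driver

Your mileage may vary; mine certainly did.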

That's about the time I decided to move on and try Ubuntu. It installed flawlessly and quickly on the Alpha. I went with plain Ubuntu Server rather than Core, because I didn't want to have to create a Canonical account just to install the OS. With Ubuntu on the Alpha, I figured, hey, why not put Ubuntu Server on the Pis too? Dear lord, that was yet another rabbit hole that took me weeks to finally give up on. There is a stable Ubuntu Server build for the RPi2, but not the RPi3. The RPi3 distro is community driven, and if you apt-get upgrade, it pretty much bricks the Pi, requiring that you rebuild the OS again. However, there was Ubuntu MATE for the RPi3, and that worked without issues. I got all clever and decided to take that Ubuntu MATE image, then strip everything "UI" out of the OS. That worked, and when I was done I had a working 32GB image that I could flash and repeat. However, it is a 32GB image. I have yet to work with shrinking images, but I really just wanted something easier.
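For future reference (and anyone following along), the usual recipe I've seen for shrinking a flashed image looks roughly like this. I haven't actually run it yet, the sizes are placeholders, and it assumes the root filesystem is an ext4 partition that sits second on the image:

    # Map the image to a loop device with partition scanning (assume /dev/loop0)
    sudo losetup -fP --show alphaberry.img
    # Check the root filesystem, then shrink it to its minimum size
    sudo e2fsck -f /dev/loop0p2
    sudo resize2fs -M /dev/loop0p2
    # Shrink the partition to match; pick a size comfortably larger than
    # what resize2fs reported as the minimum (5GB here is just an example)
    sudo parted /dev/loop0 resizepart 2 5GB
    sudo losetup -d /dev/loop0
    # Cut off the now-unused tail of the image file
    truncate -s 5200M alphaberry.img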

At this point, I just gave up. I don't need the same OS on the Alpha and the Pis; it isn't necessary and, honestly, is a huge pain in the ass. I ended up deciding on Ubuntu Server for the Alpha and HypriotOS for the Pis. HypriotOS was perfect: it is a super simple drop-in image, pre-configured with Docker, and they keep it pretty current with recent Docker versions (at this time they have images for 17.05). What I really appreciate is the tooling surrounding the OS. They have a really nice tool for burning images, as well as a device-init system that covers the basics (hostname, wireless setup) and is a bit like cloud-init, which I am somewhat familiar with. Drop a simple YAML file on the /boot partition and you can easily set the hostname. Better yet, it's open source and written in Go, and I am actually going to start contributing to it, to add features such as setting more OS options and a way to join an existing Docker swarm (via a swarm key/address combination).
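To give you an idea of how simple that is, here's roughly what the file looks like; the hostname and wifi values below are placeholders, and the exact keys may differ a bit between device-init versions:

    # /boot/device-init.yaml -- all values are placeholders
    hostname: alphaberry-node-01
    wifi:
      interfaces:
        wlan0:
          ssid: "MyNetwork"
          password: "MyWifiPassword"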

While figuring all of this out, I have also been working on the design for how all of these little credit-card computers are going to be mounted and powered without becoming a huge mess of wires. I have a notebook where I have been doing scratch drawings of mounting plates and an enclosure for the system since the beginning of the project. Turns out, mounting and powering 24 Raspberry Pis, 5 network switches, a firewall, and essentially a fully-fledged PC inside a little case is kind of hard.

Starting with power: I didn't want a power strip and crap-loads of plugs, I wanted a single power cable. That meant I needed a power supply capable of driving this entire project. After a LOT of research, I found the Corsair RM650. The part of this power supply that sticks out above the rest is the 25 amps on the 5V rail. Under load, with just the CPU, networking, and memory in use, the Pis draw about 700-800mA each. The network switches (which are 5V) pull 300-400mA under load, the gigabit switch pulls about 300-400mA under heavy load, and the pfSense firewall pulls under 300mA under load. Doing some poor-person math here, that's less than 25A overall. It's not perfect, but it will work. It also leaves me about 54A on the 12V rail and another 25A on the 3.3V rail, which is plenty of power for the project. The next issue: the Alpha uses a 19V laptop-style power supply, so I needed a 12V-to-19V converter. That wasn't super hard to find, but it took about two weeks to ship from China.
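If you want to see that poor-person 5V math written out, here it is, using the worst-case numbers above and counting each of the five switches (gigabit included) at 400mA:

    24 Raspberry Pis    x 0.8A = 19.2A
     5 network switches x 0.4A =  2.0A
     1 pfSense firewall x 0.3A =  0.3A
    ----------------------------------
    Worst case total           ~ 21.5A  (under the RM650's 25A 5V rating)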

Because the power is coming from a proper ATX power supply, I can power the Pis through the GPIO pins instead of Micro USB, thanks to the clean power coming through. But now I needed a way to break the power out from 4-pin Molex connectors to the two power pins on each Pi, without running a single Molex to every Pi. I was able to find these little boards, meant to be fan splitters, that take in a 4-pin Molex and break out to six 3-pin connectors. One core flaw with the splitter, though, is that it is intended to split the 12V line. With some minor soldering, I will just turn the Molex around on the board so it splits out the 5V line instead; that way I never accidentally send 12V into a Pi. I was able to find some 3-pin jumper cables at Fry's that fit perfectly into the PCB and plug onto the Raspberry Pi pins correctly. Powering the switches and firewall comes down to getting some barrel connectors and putting a Molex connector on the other end; I still haven't wired these up, but I may just use the 4-pin 5V connectors from the power supply for these. Finally, to turn the thing on and off, since ATX power supplies need a jumper to power on without a motherboard, I picked up a 24-pin ATX on/off adapter.
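In case you want to try the same trick, it works because of the standard Molex and Pi pinouts; worth verifying with a multimeter before you plug anything in:

    4-pin Molex peripheral connector:
      Pin 1: +12V (yellow)
      Pin 2: GND  (black)
      Pin 3: GND  (black)
      Pin 4: +5V  (red)

    Reversing the connector on the splitter PCB puts the red +5V wire on the
    trace that normally carries +12V, so every fan header now feeds 5V.

    Raspberry Pi GPIO header (physical pin numbers):
      Pin 2 or 4: +5V in
      Pin 6:      GND

Keep in mind that feeding 5V in through the GPIO header bypasses the Pi's input fuse, which is exactly why the clean, well-regulated ATX power matters here.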

While the power was a huge time sink, another part of the project is still an issue: where to mount all this stuff, and how. This is where things seemed easy but became a horrible game of Tetris, especially with the way the Pis are laid out on the PCB, wanting to keep access to the SD cards, and thinking about some sort of cooling for this cluster of servers (four 140mm Noctua PWM fans will be used, in addition to heatsinks on all the Pis). One of the biggest reasons I am not farther along on this project is that I don't have anything to mount the hardware to yet, so I can't actually get the software portions working.

Let's talk about my almost-success before I move on to what is currently going on with the case design. I got super excited about the Cooler Master HAF XB II EVO case; on paper it looked to be the perfect size for the project. It had the clearance for all the Pis in the top section where the motherboard tray sits, and there was a nice spot underneath, where the drive cages are, for the 200x200mm Alpha. It seemed so perfect. But after purchasing it, ripping the guts out, and starting some test fits, I found everything did fit perfectly, as long as I didn't need to actually use the ports on the servers. My core problem was that I forgot to take into account inserting and removing the motherboard tray, as well as where all my power and networking cables would go. That didn't leave a lot of room, and everything would have to be mounted a bit more permanently than I wanted. This case isn't out of the question just yet, but I need to do some more real fittings before committing to it.

The final, and most important, part of the case design is how everything mounts internally. The nightmare of my life, and probably the first time in my life I have done any sort of mechanical design. I have some test designs (currently in Photoshop, but moving to Illustrator soon) for a reversible tray that holds 3 Pis. They mount pretty nicely, and with some foam-board prototyping, they seem to do the trick. Getting from the prototype phase to something I don't mind mounting electronics to and powering up has been… frustrating. Early on I knew the Pis were going to be mounted to 3mm acrylic, as it is cheap, non-conductive, and looks kind of cool. However, getting acrylic cut is a lot harder than it sounds, especially since the design isn't super simple. After trying a few shops around my area, I was just about to give up when I remembered: this is a DIY project, so why not… DIY.

Watching TONS of YouTube videos on cutting acrylic, bending it, making boxes, etc. kept leading me down the same path over and over: I need a controlled way to cut the damn stuff, and I just don't have a way to do that cleanly. So, as of this morning, I ordered the Inventables X-Carve CNC machine. It has super easy to use web-based software to control the machine, and really high reviews on the quality of the system. While it is a bit expensive ($1800 fully loaded), I think I will make my money back with side projects and even doing some cutting for co-workers. I opted for the 1000mm version (the big one) because I knew I would want to make large things; they do have smaller versions as well. The CNC machine will arrive sometime next week, and I will be setting it up next weekend with the hope of cutting my first set of trays for the Pis and at least getting them mounted up for software testing.

And that's where we are: so many decisions made, so many plans complete, a few designs left, and then the awesome part, assembly and software installation, can start. I'm heading out to HashiConf in the middle of the month, and I have been quite busy with a project at work that is wrapping up after a year, but I want to spend some serious time finishing this project toward the end of the month and into the early part of next month.

There is one other topic to discuss about this project, the Alienware Alpha (the Alpha in Alphaberry), but I am going to save that for another blog post.
