There was a pause in Project Alphaberry while I did a ton of hardware/software planning. You see, I really want to get this right and have it be awesome; however, I am starting to hit that point of diminishing returns where planning instead of doing is getting in the way.
As of today, enough talking, and on with the builds!
Something else occurred to me the other day: in all of these blog posts, I never really explained what Project Alphaberry is, or why I am working on it.
You see, I work in a rather large clustered server environment, and I really enjoy the challenges of big clusters of software all talking to each other, controlling and interacting with them, etc… However, that isn’t something easy to do at home. Project Alphaberry builds, at home, a small-scale cluster of servers that lets me play with everything from single-server to multi-server setups, including high availability scenarios and distributed computing: all the things I really get a kick out of. Ultimately, I just want to write some small piece of software, push it to the cluster, and use it.
At the end of this post, I have included the list of components, parts, software, hardware, etc… that I am using for this project. I still haven’t picked out my switches yet, but that is something I am intentionally saving for last.
Originally, Project Alphaberry was planned as a small little Raspberry Pi 3 cluster for playing around; as with everything, it grew and grew. Most clusters of this kind use a Pi Zero as the point of entry for the entire cluster. I wanted “more” for this zero node, and I also happened to have that Alienware laying around, hence “Alphaberry”. Since then, the project has grown a bit in scope: I now have 3 “real” computers (Intel NUC 7th gen i3) added for the HA core services (databases, DNS, service discovery, basically all the infrastructure pieces), and the planned Raspberry Pi count has increased from 6 to 24. Right now I am even considering bringing in the new Asus Tinker Board as the “7th” node in each module, but as of yet I am unsure where to get one; I have seen some YouTube video reviews of them, so they do in fact exist. Adding this would really modularize my cluster nodes quite a bit, as the Tinker Board could act as the controller for its module, but that might be reserved for a future enhancement, once I actually complete something here.
The core plan is to get the entire infrastructure up and running across the cluster, as well as to plan out some sort of enclosure to keep all these damn wires in check. If you didn’t know, 27 computers with networking and hardware turn out to create a shit load of cables and power bricks… I have already made the investments: upgrading the Alienware, buying the NUCs (they arrive this weekend!!!), and 6 of the 24 cluster Pis (a single module). My hope is that once all of this is set up, growing is as easy as building a new 6-node module and adding it to the infrastructure.
This triggered another issue that I didn’t expect: setting up my first couple of servers by hand got REALLY tedious, so I opted to learn Ansible and get everything under configuration management early on. Once I get to a “released” state for v1 of the cluster, I will switch my configuration management repository from private to public; I just don’t want to share the trash I have at the moment, plus a few hacks that are pretty specific to my setup. The only “crappy” part of my setup is the DHCP server, which currently lives inside my Nighthawk router. I statically configure all of my IP addresses internally, which means I need to boot the box, grab the MAC, update the “shitty” web UI with the new mapping, update the inventory file, reboot the box, and then run Ansible for literally everything else; I rarely SSH into these boxes anymore. The best addition to my Ansible stuff was automating my DNS server records from the inventory file; that just made my life way easier (a rough sketch of that workflow follows this paragraph). I do, however, have my eyes on something like a micro PC with pfSense, or the Ubiquiti EdgeRouter X, to give me much more control over my system. But again, another project for another day!
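To make that concrete, here is a minimal sketch of the loop, not my actual repository: the inventory file name, group names, hostnames, playbook name, and IP addresses below are all placeholders made up for illustration.

# Hypothetical static inventory; every name and address here is made up.
cat <<'EOF' > hosts.ini
[nuc]
nuc01 ansible_host=192.168.1.11
nuc02 ansible_host=192.168.1.12
nuc03 ansible_host=192.168.1.13

[berry]
berry01 ansible_host=192.168.1.101
EOF

# One run configures everything from that single source of truth,
# including templating the DNS records out of the inventory.
ansible-playbook -i hosts.ini site.yml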
Below is a copy/paste of my parts/components/software lists.
Project Alphaberry
Systems
Intel NUC 7th Gen i3 Tall BOXNUC7I3BNH
- 2.4 GHz Intel Core i3-7100U
- 16GB DDR4 2133 MT/s (PC4-17000) Crucial Single
- 250GB - M.2 SATA III Internal SSD Samsung 850 EVO
- EMPTY - Internal SATA III Port
- Micro SDXC slot with UHS-I support
- 10/100/1000 Ethernet
- 802.11ac 2x2 Wireless
- 2x USB 3.0 (Front, One Super Charged)
- 2x USB 3.0 (Back)
- 2x USB 2.0 (Internal Header Support)
- 1x USB 3.1 Gen 2 (10 Gbps) and Mini DisplayPort* 1.2 via USB-C (Back)
- Intel HD Graphics 620 - Not needed
- PSU 65W - AC Adapter
Alienware Alpha R1 ASM100-1580
- 2.9 GHz Intel Core i3-4130T - upgrade options
- 4 GB DDR3 @ 1600 MHz - upgraded to 16 GB
- 500 GB SATA @ 5400 RPM - Upgrade to 256GB SSD
- 10/100/1000 Ethernet
- 802.11bgn Wireless
- 1x USB 2.0 (Internal Bottom)
- 2x USB 2.0 (Front)
- 2x USB 3.0 (Back) - not color coded to blue
- Nvidia Maxwell GTX GPU 2000 MB Memory - Not needed
- PSU 130W - AC Adapter
Raspberry Pi 3 Model B Amazon
- 1.2 GHz ARM Cortex-A53 Heatsink
- 1 GB RAM @ 900 MHz
- 32 GB Micro SDHC Samsung Evo Plus - Speed Tests
- 10/100 Ethernet
- 802.11n Wireless
- Stackable Case
Accessories
Tools
- Micro SDHC Card Reader (USB 3.0) - for disks on dev PC
- Load Test Power Usage - PowerJive USB Voltage/Amps Power Meter Tester
- Rufus - Burn bootable images to SD cards
Software
Operating System
NUC
- CentOS 7.x Minimal
Alpha
- CentOS 7.x Minimal
Berry
- Raspbian Jessie - It isn’t clear whether this is 64-bit or not; if not, just using CentOS is a better option for now
- CentOS 7.x RPI3 - How To
Software By Server Type
Administration Machine
- Consul - write kv to the cluster, register services, etc…
- Nomad - run stuff in the cluster
- Vault - get/set things in the vault
- Packer - create images and such
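As a rough feel for what the administration box does day to day, the commands look something like this; the key name, job file, secret path, and template file below are placeholders, not anything from my actual cluster:

# Write a key/value into the cluster (placeholder key and value).
consul kv put config/alphaberry/environment lab

# Submit a job to the scheduler (example.nomad is a placeholder job file).
nomad run example.nomad

# Store and read back a secret (placeholder path and value).
vault write secret/grafana admin_password=changeme
vault read secret/grafana

# Build a machine image from a template (placeholder template file).
packer build centos-base.json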
NUC (x3)
- DNS Server - I should learn BIND, instead of the pretty UI mentioned above
- Consul Server
- Nomad Server
- Vault Server
- Elasticsearch
- InfluxDB
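A minimal sketch of how the three NUCs could form the HA core, using the Consul servers as the example; the data directory and IP addresses are placeholders for my real values:

# Run on each NUC; wait for three servers before electing a leader.
# -bind is this node's own address, -retry-join points at the other two NUCs.
consul agent -server -bootstrap-expect=3 \
  -data-dir=/var/lib/consul \
  -bind=192.168.1.11 \
  -retry-join=192.168.1.12 -retry-join=192.168.1.13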
Alpha (x1)
- Consul Agent
- Nomad Agent (Docker/Remote_Exec/Local_Exec/Java Engines)
- Docker Engine - curl -sSL https://get.docker.com | sh - Instructions
- NGINX
- Nomad UI - temporary until we get good dashboards - Dockered
- Kibana - Dockered
- Grafana - Dockered (a quick docker run sketch for these dashboards follows this list)
- Telegraf (StatsD server) - Connect to InfluxDB
- Logstash - this will act as our syslog server and log parser (nginx) - Dockered
- Beats? - Might be interesting to see the overhead of adding the metrics and packetbeat.
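Since those dashboards are “Dockered”, they can be stood up with plain docker run commands while the Nomad jobs get sorted out. A minimal sketch, assuming the stock Grafana and Kibana images from Docker Hub; the container names, ports, and the Elasticsearch address are defaults and placeholders, not my pinned setup:

# Grafana on its default port 3000.
docker run -d --name grafana -p 3000:3000 grafana/grafana

# Kibana on its default port 5601, pointed at a placeholder Elasticsearch address.
docker run -d --name kibana -p 5601:5601 -e ELASTICSEARCH_URL=http://192.168.1.11:9200 kibana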
Berry (x24)
- Consul Agent
- Nomad Agent (Docker/Remote_Exec/Local_Exec/Java Engines)
- Docker Engine - curl -sSL https://get.docker.com | sh - Instructions
- Beats? - Might be interesting to see the overhead of adding the metrics and packetbeat.
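And a sketch of what pointing a Berry node at the cluster might look like once the binaries are installed; the server addresses and data directories are placeholders, and in practice these run as services rather than interactive commands:

# Local Consul agent joins the cluster (placeholder NUC address).
consul agent -data-dir=/var/lib/consul -retry-join=192.168.1.11

# Nomad in client mode pointed at the NUC servers (4647 is Nomad's default RPC port).
nomad agent -client -data-dir=/var/lib/nomad -servers=192.168.1.11:4647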