SENG&SM Network, Servers, and More

The Southeastern Narrow Gauge and Shortline Museum in Newton, NC will forever hold a special place in my heart. Not only did it allow me to volunteer and help with the restoration of historic artifacts and documents, it also gave me my first dive into some of the technologies that would become part of my core interests. These ranged from networking to server hardware and even hypervisor technology.

It all started when the board approached me with a request to “install wifi”. While simple at first glance, this evolved into a large-scale, campus-wide network installation featuring multiple switches, access points, and links between buildings. Before this point, my experience with networking hadn’t gone beyond the simple configuration of in-home routers. I had a basic understanding of network switches, but I wasn’t even completely sure what an access point was or how it worked. From past projects, I had some experience with Ubiquiti’s long-range wireless equipment, and further research eventually led me to the Ubiquiti Unifi line of network devices. While considered lower tier than networking giants like Cisco or Meraki, Unifi devices were still a great starting point for learning the ins and outs of the field as a whole. The growth of the network was slow and unsteady, primarily due to the museum’s very limited funds, which led to plenty of compromises and a need to make the most of the limited resources available.

The Beginnings

The campus featured four primary buildings: the historic depot, the model railroad center, the storage building, and the pavilion. The earliest hardware installation served the depot alone, which is also the location of the ISP uplink. This installation used an Edgerouter X and a Linksys router operating as an access point. The next phase added another Linksys router and a connection to the storage building using an outdoor Cat5e cable hung between the two buildings. Inside the storage building, the connection was then extended through the building and through a shared exterior wall to the model railroad center, where yet another Linksys router was used as an access point.

The earliest installation within the depot

In the months following this initial “band-aid” installation, Unifi hardware was slowly purchased and implemented to improve the stability and reliability of the network. One hang-up with Unifi, however, was the requirement for either a server to run the network controller software or a dedicated Cloud Key, a device made by Ubiquiti solely for hosting the controller. This was solved with the installation of a server that had been gifted to me and a friend a few years before this project. The server in question was an HP ProLiant DL580 G5, a massive 4U machine that pulled so much power that the cost to run it was never justified. While very basic, this was my first introduction to server hardware and the birth of the server “Halios”, whose name I still use to represent my projects as a whole to this day.

The HP ProLiant server at its initial installation location in the storage building
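
Since the Unifi controller is really just a web application, a quick way to confirm it is alive on whatever box is hosting it is to poke its status endpoint. Below is a minimal sketch of that check in Python, assuming a hypothetical controller address and the default HTTPS port 8443; the exact fields in the response vary between controller versions, so treat it as illustrative rather than exact.

```python
# Minimal sketch: verify that a self-hosted Unifi controller is up by
# querying its /status endpoint. The hostname below is hypothetical, and
# the response fields may differ between controller versions.
import json
import urllib3
import requests

# The controller ships with a self-signed certificate, so skip verification here.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

CONTROLLER_URL = "https://halios.local:8443"  # hypothetical address

def controller_is_up(base_url: str) -> bool:
    """Return True if the controller answers its status endpoint."""
    try:
        resp = requests.get(f"{base_url}/status", verify=False, timeout=5)
        resp.raise_for_status()
        meta = resp.json().get("meta", {})
        print(json.dumps(meta, indent=2))  # typically includes the server version
        return meta.get("rc") == "ok"
    except requests.RequestException as err:
        print(f"Controller unreachable: {err}")
        return False

if __name__ == "__main__":
    print("Controller up!" if controller_is_up(CONTROLLER_URL) else "Controller down.")
```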

As the funds increased and more hardware was purchased, the need for better hardware with a dedicated installation space became apparent. Through a discussion with a higher-up at the museum, I discovered a practically unused, closet-sized space in the storage building. After I approached the appropriate parties and received permission, the room became the museum’s official “server room”.

The initial server room configuration

When I moved into the new room, the first thing I completed was an upgrade to its electrical infrastructure, mainly the installation of an outlet for a desk and a pair of outlets for the servers. Following this, I moved in the table and finally the server itself. This configuration, however, was short-lived thanks to an amazing gift from a good friend from a former internship.

The Server Rack Begins Taking Shape

For future expansion to continue, I knew I had to find a way to house and secure the hardware that would make up the network. This marked the beginning of my search for a server rack, which ended with a friend from a former internship not only letting me take one off his hands, but doing so free of charge. I can say with certainty that I will always remember this generous act, as it truly allowed my interest in these technologies to explode well beyond what it was at the time.

The rack immediately after its installation

Once the rack was installed, the next major project could finally take place: the installation of a patch panel and dedicated Cat5e cable runs. The initial runs were replacements for the existing backbone lines, which had consisted of pre-terminated and heavily used cables. At the same time, the cable between the depot and the storage building was finally run underground through a buried conduit, eliminating many of the issues that came with a cable strung over a gap that vehicles pass through regularly.

With the rack and patch panel in place, the next step was the purchase and installation of a larger, full 1U PoE switch and a Unifi Cloud Key Gen2 Plus to serve as the network controller and as an NVR for the IP cameras that would be installed throughout the campus. During this growth period, a colleague of mine also installed an additional server in the rack for use as a Plex server. With all of these additions, the rack was finally starting to look fuller and more official.

The rack following the installation of the patch panel, switch, Cloud Key, and Plex server

The Network is Completed

With the network covering a majority of the campus, a new project was started at the request of the board: the installation of a campus-wide surveillance system. The project kicked off with the installation of the Cloud Key and two cameras in the storage building, followed by two more cameras, one covering the rear of the storage building and the other the inside of the model railroad center.

In the months that followed, multiple waves of camera installations took place. The next major project was the installation of cameras around the exterior of the depot, which added four cameras in total and required a new 8-port PoE switch. Around this time, the Edgerouter X was also replaced with a Unifi Security Gateway 3, completing the full Unifi ecosystem.

The network panel in the depot after the new switch installation

The next major phase was finally connecting the pavilion to the museum network. The pavilion is a large, full-scale rolling stock display structure located across an active rail line from the depot and the other campus buildings. Since running a cable across the rail line wasn’t practical, I implemented a point-to-point wireless bridge between the storage building and the pavilion. The transceivers were mounted high enough that passing trains and other objects between the buildings could not interfere with the link. In the pavilion, three cameras and an access point were installed, with an 8-port PoE switch powering and connecting everything. With the completion of this project, the entire campus was officially covered by the new network.

The “hub” for the pavilion installation with the cover removed on the weatherproof enclosure. The transceiver for the wireless bridge is also visible
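
For anyone curious what “high enough” means for a link like this, the usual rule of thumb is to keep the first Fresnel zone (or at least 60% of it) clear of obstructions. Here is a rough back-of-the-envelope sketch of that calculation in Python; the link distance and frequency below are placeholder values, not the actual measurements at the museum.

```python
# Sketch: first Fresnel zone radius at the midpoint of a point-to-point link.
# r = 8.657 * sqrt(d / f), with d in km, f in GHz, r in meters.
# The distance and frequency below are placeholders, not the museum's actual span.
import math

def fresnel_radius_m(distance_km: float, freq_ghz: float) -> float:
    """Midpoint radius of the first Fresnel zone in meters."""
    return 8.657 * math.sqrt(distance_km / freq_ghz)

if __name__ == "__main__":
    d_km, f_ghz = 0.15, 5.8          # ~150 m link at 5.8 GHz (placeholder values)
    r = fresnel_radius_m(d_km, f_ghz)
    print(f"First Fresnel zone radius: {r:.2f} m")
    print(f"60% clearance target:      {0.6 * r:.2f} m")
```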

Within the month following the pavilion installation, the final major infrastructure project was started: a buried fiber line connecting the depot to the storage building. With the help of a good friend, who also donated the hardware and equipment, the fiber was pulled through the existing buried conduit between the buildings and run to the appropriate network closets and rooms in each building.

The installation of this fiber line also finally allowed the gateway to be moved from its spot at the modem in the depot to the rack in the server room. This, in turn, allowed the eventual replacement of the USG 3 with a rack-mountable USG Pro 4.

The Start of the “Away-From-Home” Homelab

With a solid network backing it up, my obsession with hardware in the field only grew stronger. So, while browsing Craigslist for fun one morning, I stumbled upon a posting for multiple Dell PowerEdge R710 servers. On a whim, I decided to contact the seller and inquire about pricing and availability, and to my surprise, not only did he accept my lower offer, but he was willing to meet the same day for the exchange. Initially, I purchased a single server with one 300GB 10K SAS drive, no blanks, and two missing internal fans. Following class that day, I immediately hooked the server up in my apartment and began configuring it. At the time, I had a very basic understanding of virtualization but didn’t know anything about how to set it up. After researching the topic heavily, I decided to install VMware ESXi 6.5 on the server as the hypervisor and create a single Windows Server 2019 virtual machine. This marked the birth of “Andromeda”. It was also my first time experimenting with a hypervisor and any modern Windows Server installation (the previous HP ProLiant “Halios” server ran Server 2008 R2).
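
As a rough illustration of what talking to that hypervisor programmatically looks like, here is a small sketch using the pyVmomi library to connect to an ESXi host and list the registered VMs along with their power state. The host address and credentials are placeholders, and this is only one way of poking at ESXi, not necessarily how anything was configured at the time.

```python
# Sketch: connect to a standalone ESXi host with pyVmomi and list the VMs
# registered on it. Host, user, and password are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ESXI_HOST = "esxi.example.local"   # placeholder
ESXI_USER = "root"                 # placeholder
ESXI_PASS = "changeme"             # placeholder

def list_vms() -> None:
    # ESXi usually presents a self-signed certificate, so relax verification.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    si = SmartConnect(host=ESXI_HOST, user=ESXI_USER, pwd=ESXI_PASS, sslContext=context)
    try:
        content = si.RetrieveContent()
        # A container view is the simplest way to enumerate every VM on the host.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], recursive=True
        )
        for vm in view.view:
            print(f"{vm.name}: {vm.runtime.powerState}")
        view.DestroyView()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_vms()
```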

Once the server was installed in the rack and operating properly, I had an itch for at least one additional server. This, of course, led to the decision to purchase the last server the seller had available and configure it exactly the same way with ESXi. This marked the birth of “Apollo”, my third and final server to call the museum home. Following this purchase, I also picked up cosmetic blank and brush panels, a rack monitor mount, and a 1200VA Tripp Lite UPS. A month later, I personally purchased and installed a 24-port non-PoE switch for connecting the servers and local devices within the rack and server room. The rack was finally starting to look “professional”, which of course made the geek in me VERY happy.

The rack with the new servers, hardware, and cosmetic additions

With the new hardware and resources in place, I started experimenting with the software side of things, which grew my interest even more. The first major project I completed was the installation and configuration of Active Directory for the museum to use with the various computers throughout the campus. This involved setting up a domain controller, file server, and print server on the first server, “Andromeda”.
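
For a sense of what the resulting domain looks like from a script’s point of view, here is a hedged sketch using the ldap3 library to bind to a domain controller and list the computer objects it knows about. The server name, credentials, and search base are all placeholders and would need to match the real domain.

```python
# Sketch: query an Active Directory domain controller for computer objects
# using ldap3. Server, credentials, and search base are placeholders.
from ldap3 import Server, Connection, ALL, NTLM, SUBTREE

DC_HOST = "dc01.example.local"          # placeholder domain controller
BIND_USER = "EXAMPLE\\svc_readonly"     # placeholder DOMAIN\user account
BIND_PASS = "changeme"                  # placeholder
SEARCH_BASE = "DC=example,DC=local"     # placeholder domain naming context

def list_domain_computers() -> None:
    server = Server(DC_HOST, get_info=ALL)
    conn = Connection(server, user=BIND_USER, password=BIND_PASS,
                      authentication=NTLM, auto_bind=True)
    try:
        # Computer accounts carry objectCategory=computer in AD.
        conn.search(search_base=SEARCH_BASE,
                    search_filter="(objectCategory=computer)",
                    search_scope=SUBTREE,
                    attributes=["name", "operatingSystem", "dNSHostName"])
        for entry in conn.entries:
            print(entry.name, entry.dNSHostName, entry.operatingSystem)
    finally:
        conn.unbind()

if __name__ == "__main__":
    list_domain_computers()
```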

The “Apollo” server was used primarily for hosting Minecraft multiplayer servers and various other small projects, including a Discord bot and a file share for my friend’s Plex server. The following month marked the start of the global COVID-19 pandemic and months of being too busy to do much work on the network and servers. This, combined with the lack of local access to the servers, made it hard to get any major project off the ground, as file transfers and even remote control of the servers were slow and unstable. This led to the start of my personal homelab, which is my primary resource for projects and experimentation to this day. I will talk more about my personal lab in a separate post centered on its start and eventual growth.

The Network Today

To this day, I still manage and maintain the network throughout the campus. Various small projects have been completed in the months since, including the addition of new cameras, access points, and wired connections as the museum grows as a whole. While I no longer use the hardware and resources at the museum for personal projects and experimentation, the network is still a passion project that I continue to grow and expand over time.