Enabling HBA mode on my DL380 G6 with RHEL 8

If you are familiar with the HP server line, you likely know that the DL380 G6 (like many of their other models) has an integrated RAID controller, the P410i, that doesn’t support HBA mode for disks. Since I needed storage capacity more than anything, and didn’t want to spend a fortune on high-capacity SAS disks, I decided to use SATA.

A common issue with this particular controller is its battery backup, which seriously hurts performance once it dies. The controller is also AWFUL for any modern storage system that relies on commodity disks instead of hardware RAID on a proprietary controller. It’s a pain and very restrictive.

The usual solution was to add a different RAID controller and simply avoid the integrated one. Unfortunately, the system will only boot from three locations:

  • The integrated RAID controller
  • The ONLY SATA port on the mainboard, specifically used for the optical drive
  • A USB drive or memory stick

This made me come up with a different solution. I took out the RAID expander (which allows the P410i to address up to 16 disks) and replaced it with a patched LSI controller (LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]). This allowed me to address 8 of the disks separately. I then installed a 16 GB Patriot USB 2.0 drive in the internal USB port to act as the main boot drive. This is where the bootloader, kernel and initramfs live, and it lets the system continue to boot.
I have 2 SSDs installed on that card, holding the following filesystems:

  • / – RAID1 (mirrored) for redundancy
  • /var – RAID1 (mirrored) for redundancy
  • /home – RAID0 for speed. This is only for local users, and will not be used much at all
  • swap – RAID0 for speed. Not needed much, since I have 128GB of ECC memory installed.

It wasn’t that easy, of course. I had to add a “Driver Update Disc” to get the card recognised properly, as RHEL 8 disables the device in the mpt3sas driver that’s included. This meant adding the following to the kernel parameters at the RHEL installer boot:


This allowed the installer to recognise the device and install the updated driver permanently, so it would continue to work after the installation was done.

It’s booting stably, which made me ready for the next part: enabling HBA mode on the internal controller.

I used a Medium post by Terryjx to get started and figure out what I needed. After reading through the entire article, I decided to use a 5.4 kernel on this system so I could use the patched driver. That meant using the ELRepo kernel.

To enable that kernel, run the following commands:

$ sudo dnf -y install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
$ sudo dnf --allowerasing --enablerepo="elrepo-kernel" install kernel-lt{,-devel,-headers,-modules,-modules-extra}

As of this writing, the current LTS kernel in ELRepo is 5.4.98-1.el8.elrepo.x86_64. I then rebooted to make sure that kernel worked, which it did without issue. I didn’t even need to install a new mpt3sas driver, as the correct one is included in the modules.

I was now ready to enable HBA mode, but that required the 6.64 firmware for the P410i controller. I used the link from the article above: https://downloads.hpe.com/pub/softlib2/software1/sc-linux-fw-array/p332076214/v110820/hp-firmware-smartarray-14ef73e580-6.64-2.i386.rpm. I just had to install and run it:

$ sudo dnf install hp-firmware-smartarray-14ef73e580-6.64-2.i386.rpm
$ cd /usr/lib/i386-linux-gnu/hp-firmware-smartarray-14ef73e580-6.64-1.1/
$ sudo cp ccissflash /usr/bin/ccissflash
$ sudo bash hpsetup

It will take a while to complete. Just answer its questions and let the update finish. You’ll have to reboot once it’s done for the firmware change to take effect.
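After the reboot, you can sanity-check that the controller now reports the new firmware. On HPE boxes this is typically done with the ssacli tool (hpssacli/hpacucli on older tooling); the exact package availability for a G6 is an assumption on my part:

```shell
# show controller details and filter for the firmware revision
ssacli ctrl all show detail | grep -i firmware
```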

To do the next few steps, I needed a couple of other repositories: the EPEL repo, and the RHEL equivalent of the CentOS PowerTools repo, called CodeReady Builder. This took a bit to figure out, because googling for the RHEL PowerTools equivalent just turns up PowerTools itself. I eventually had to search each repository to find the one with dkms and pandoc. Yes, dkms is required, to ensure the custom module is rebuilt and installed with each kernel update.

$ sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
$ sudo dnf config-manager --set-enabled codeready-builder-for-rhel-8-x86_64-rpms
$ sudo dnf group install "Development Tools"
$ sudo dnf install dkms pandoc

In order to install the new driver, you need to be root so dkms can build. A symlink will be created from the cloned repository into the dkms source tree, allowing you to build the module.

$ sudo -i
# git clone -b dkms https://github.com/artizirk/hpsahba
# cd hpsahba
# make ##Build the helper program hpsahba
# ./hpsahba -E /dev/sg0 ##Use the correct location of your controller, /dev/sgN where N is the number
# cd contrib/dkms
# ./patch.sh 5.4
# dkms add ./
# dkms install --force hpsa-dkms/1.0
# modprobe -r hpsa ##Remove the existing module
# modprobe -v hpsa hpsa_use_nvram_hba_flag=1
# echo 'options hpsa hpsa_use_nvram_hba_flag=1' > /etc/modprobe.d/hpsa.conf
# cp -v /boot/initramfs-$(uname -r).img /root ##Backup original initramfs
# dracut -f -v ##Generate a new initramfs

Let’s walk through what happened there, line by line:

  1. Become root
  2. Clone the repository containing the patches for the hpsa driver, and the hpsahba program source
  3. Enter the directory that was cloned
  4. Build the hpsahba software
  5. Enable HBA mode on your p410i (make sure the firmware is version 6.64)
  6. Enter the dkms directory to setup the module build
  7. Run the provided script to patch the given kernel version. Check the repo’s kernel directory for supported versions.
  8. Add the patched module to dkms
  9. Build and install the patched module. --force is required in order to replace the in-tree module with the patched one.
  10. Unload the in-tree hpsa module
  11. Load the patched hpsa module. The hpsa_use_nvram_hba_flag=1 option is required for this to work
  12. Add the hpsa module load options to make sure it always loads with that flag
  13. Backup the original initramfs
  14. Generate a new initramfs with the patched module.

You should now be able to run lsblk and see all of the attached disks. For example, here’s the lsblk output on the server I did this on:

sda               8:0    0   1.8T  0 disk
sdb               8:16   0   1.8T  0 disk
sdc               8:32   0 931.5G  0 disk
sdd               8:48   0 931.5G  0 disk
sde               8:64   0 931.5G  0 disk
sdf               8:80   0 931.5G  0 disk
sdg               8:96   0 931.5G  0 disk
sdh               8:112  0 931.5G  0 disk
sdi               8:128  0   1.8T  0 disk
sdj               8:144  0 931.5G  0 disk
sdk               8:160  0 931.5G  0 disk
sdl               8:176  0   1.8T  0 disk
sdm               8:192  0   1.8T  0 disk
sdn               8:208  0   1.8T  0 disk
sdo               8:224  0 232.9G  0 disk
└─sdo1            8:225  0 137.1G  0 part
  └─md127         9:127  0   274G  0 raid0
    ├─rhel-root 253:0    0    70G  0 lvm   /
    ├─rhel-swap 253:1    0     4G  0 lvm   [SWAP]
    ├─rhel-var  253:2    0   100G  0 lvm   /var
    └─rhel-home 253:3    0   100G  0 lvm   /home
sdp               8:240  0 232.9G  0 disk
└─sdp1            8:241  0 137.1G  0 part
  └─md127         9:127  0   274G  0 raid0
    ├─rhel-root 253:0    0    70G  0 lvm   /
    ├─rhel-swap 253:1    0     4G  0 lvm   [SWAP]
    ├─rhel-var  253:2    0   100G  0 lvm   /var
    └─rhel-home 253:3    0   100G  0 lvm   /home
sdq              65:0    1  14.9G  0 disk
└─sdq1           65:1    1  14.9G  0 part  /boot

That’s 8 drives attached to the LSI controller, and 8 attached to the integrated P410i, exposed directly to the OS.

I have no benchmarks for this yet, but can confirm it’s stable and working exactly how I expect it to.

Installing CentOS without CentOS

DISCLAIMER: This is a log of what I’ve done while trying to get something to work. Not a tutorial. Follow this at your own peril.

Recently, I was looking for a new dedicated server host that is affordable and relatively unmanaged. The provider I chose to try will remain nameless, as it’s not important.

I wanted CentOS 8, but they didn’t have a working CentOS 8 image, so I installed CentOS 7, thinking I could upgrade. When logging in to the shell for the first time, I noticed that the three 2 TB hard drives were all in a single RAID1 array. I don’t need redundancy on this machine, as I have hourly backups that efficiently save only the differences with borg-backup. Data loss on this server is not a concern.

Rebooting my machine into rescue mode launches a PXE ramboot of Debian Buster. With this, I could build the right layout for a stable server while maximizing my storage space. Ensure your partition table includes a bios_grub-flagged partition as the very first partition (about 2 MB) if you are using GPT on a BIOS system.
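With parted, that looks something like the following (a sketch; /dev/sda is a placeholder for your boot disk):

```shell
# fresh GPT label, then a tiny first partition flagged bios_grub for GRUB's core image
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart biosboot 1MiB 3MiB
parted -s /dev/sda set 1 bios_grub on
```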

I need to use YUM on Debian?

Yes. Debian didn’t seem to have a usable version of dnf, so I had to use YUM to bootstrap my system.

apt update && apt install -y yum

Pretty straight forward. No surprises so far.

In order to use YUM, I needed to configure the base CentOS repo. Luckily, since I have many such servers at home, I was able to copy the right format and location of the mirror reference.

mkdir -pv /etc/yum.repos.d
vi /etc/yum.repos.d/centos.repo


# CentOS-Base.repo
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.

name=CentOS-$releasever - Base

Now the astute (not me) will notice right away that there is a problem with this: YUM on Debian doesn’t have a ‘$releasever’ value defined. Lucky for us, we can define it, along with an install root, using yum itself.

I also had to populate the contents of the referenced GPG key, which, again, was a straight copy from one of my home servers.

Let’s also not forget to install what’s needed for a chroot to finish the install:

yum --installroot=/mnt --releasever=8 install basesystem dnf bash openssh openssh-server

Then, I had to prepare the chroot by binding /dev from the host, and mounting /proc and /sys in the chroot. Once prepared, it’s a simple:

# chroot /mnt /bin/bash
# export LANG=en_CA.UTF-8
# dnf -y --releasever=8 groupinstall "Minimal Install"
# dnf -y install kernel-core

It’s then time to install the bootloader, in this case it’s grub2:

# grub2-install /dev/sda
grub2-install: error: /usr/lib/grub/i386-pc/modinfo.sh doesn't exist. Please specify --target or --directory.

Whoops. That’s not what I expected. Turns out, to do an auto install, it needs an extra package called ‘grub2-pc-modules’:

# dnf -y install grub2-pc-modules

Now, let’s try it again:

# grub2-install /dev/sda

Next, I have to make sure the SSH server starts on boot, and that my root user can log in via password for now. I can’t stress this enough: this is only for now, until I get my SSH key in place.

Next, we have to make sure fstab is in place properly. I suggest UUIDs where possible, and LV names when available.
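For example, a minimal fstab using UUIDs (the UUIDs and filesystem types below are placeholders; pull your real ones from blkid):

```
UUID=11111111-2222-3333-4444-555555555555  /      xfs   defaults  0 0
UUID=66666666-7777-8888-9999-000000000000  /boot  ext4  defaults  1 2
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  swap   swap  defaults  0 0
```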

Once you have all your tools installed as needed, password set, fstab created and grub2 installed and configured, force dracut to create a new initrd.

# ls /boot/initramfs-*

We aren’t going to worry about the rescue image; that leaves the initramfs with ‘4.18.0-147.5.1.el8_1.x86_64’ as the version string. This matches the kernel, and will need to be passed to dracut.

Next, backup the original initramfs and generate the new one:

dracut -f /boot/initramfs-4.18.0-147.5.1.el8_1.x86_64.img 4.18.0-147.5.1.el8_1.x86_64

Once the new image is created, cross your fingers and pray you didn’t miss anything. After I triggered the reboot, I started pinging the IP address, waiting for it to come up. I gave it about 5 minutes before giving up.

Without an IPMI console, or some other way of remotely attaching to the actual install process or machine display (like OVH’s IPMI over IP), this is a reach too far.

Any advice?

Linux Kernel Code of Conduct, and the FUD being spread about it

Recently, the Linux community went through a bit of a transition when Linus decided to take some time off to reflect on and improve his behavior. The problem he identified: being passionate doesn’t mean he should use the harsh language he used to use, and probably still does (with email filters to catch it).

I for one support having a Code of Conduct, a guide to behavior, for people who don’t know better. Notice how I said “don’t know better”, not “are a bunch of assholes”. There are people who simply don’t know how to act all the time, myself being one of them at times. I am a socially awkward individual who doesn’t always read a situation appropriately. This turns into bad jokes, inappropriate comments, and sometimes me looking like an asshole, even when that wasn’t the intent.

I have worked on and improved my behavior, with the help of a very patient wife and very supportive family. Not everybody has that.

What I identify as the main issue with the previous code of conduct, aka the “Be excellent to each other” doctrine, is that that’s all it really said. Foul language and disrespect of others as human beings still happened, by the people who are paid to develop the kernel (aka, professionals).

I don’t believe that everything needs a code of conduct. Take SELF, whose organizers said they don’t believe a code of conduct is required for them. In person, people tend not to behave the way they do behind a keyboard. The anonymity of the internet, even when you personally know somebody offline, leads to aggressive behavior. Because the text we type carries no tone, no one really knows if that “offensive thing” was meant as a joke or not.

The fact is that the Linux kernel is now larger than a simple community project. It has funding, corporate backing, and multiple major organizations contributing to it. It’s a professional undertaking now, meaning those contributing to it should know what to expect in terms of culture and communication standards, as well as technical standards.

But it won’t be a meritocracy anymore

This is the biggest claim I’ve heard from the detractors spreading the FUD. When I ask why it won’t be, I never get a straight answer. Some refer me to “far-right” conspiracy theory videos, others call me names and insult my intelligence. Both of these camps are guilty of one thing: proving the need for a Code of Conduct.

Linus, being the man that he is, would never accept bad code just to be “nice”. That idea is total BS, and an insult to the legacy he has created. He’s just realized that maybe he doesn’t need to use foul language or throw insults to get his point across. Does that mean code quality or acceptance criteria will change? Hell no. It just means we stop getting hundreds of articles about another “Foul-Mouthed Linus Rant” per year. I’d rather hear about the technical achievements of the kernel developers, not the drama behind a rant. If you consider yourself a supporter of a meritocracy, then you probably should too.

This is a plot by horrible SJWs to kill the community and destroy our world!

This is the craziest conspiracy theory I’ve heard. I’ll be honest: level-headed guy that I am, I had to look up what SJW meant. For those who haven’t learned it, it’s “Social Justice Warrior”. They are at the other extreme of this debate, but I’ll get to them later.

The theory goes that, since the Code of Conduct in the kernel is based on the Contributor Covenant, written by a prominent self-proclaimed social justice warrior (Coraline Ada Ehmke), she somehow has direct influence over the Linux kernel development team. As far as I can tell, her only involvement in the Linux kernel community was indirect: they chose to base their Code of Conduct on something she wrote. She has no say in how or why the CoC is enforced, or in what they change to suit the community. In other words, she has no power in the kernel development community. None. No say, no ability to enforce, no ability to push agendas.

When asking those who believe this for evidence, I was accused of being an ignorant sheeple, or a libtard, or a liar, just for asking for facts instead of wild conjecture. I won’t name names, as that’s what children do to pass blame around and build hate, but it makes no sense that someone can present obvious flaws of logic and reason as truth.

Linus and the majority of top contributors have endorsed it, which is the only reason it exists in the repository. No outside pressure, no ulterior motives, no snap decisions. Just a realization that a guideline is needed to create a more welcoming community.

Linus is being coerced and forced into this against his will

Linus Torvalds? Pushed and forced into something? Give me a break. This is the guy who called out Nvidia, Intel, AMD, Microsoft, and governments. He’s technically minded, not socially absorbed and doesn’t strike me as the type of person who cares what others think of him, or gives in to pressure. Moving on from this one, as it’s just crazy.

I saw no problem, and don’t understand why this is necessary

This one is less FUD and more akin to ignorance than malice. For the vast, vast majority of users, this is a non-issue. For some, it’s the first mention of a problem in the kernel community. For those of us who follow kernel development, it’s either long overdue or the worst possible decision (see above for more on that). For the rest, who are just trying to understand the issue and why it was any kind of issue to begin with, I refer you to Linus’ post on the Linux Kernel Mailing List.

The TLDR version: Linus was presented with facts about his treatment of some individuals. He reflected on them and realized his behavior had caused issues for many potential contributors. He decided, after that reflection, that he needed to fix it.

Sounds good to me. I’d like to think that if presented with an issue of my behavior, I’d have the strength of will to take a good look at myself and figure out if there is an actual issue or not. I’d also like to think that I can be objective enough to correct those issues. I also know that as a human being, I make mistakes and should strive to improve myself in whatever way I can.

The other side of the problem

The other side of the problem is the extremism of some SJWs. A very prominent kernel developer was called out with accusations that escalated to a logical extreme: that he is a rape apologist. I don’t personally know him, but I have yet to see anything akin to that kind of behavior from him. It seems to me the only reason he was targeted is that he doesn’t support a Code of Conduct.

Seriously, the ones who claim to want to end extremism and make the world equal for all do the exact opposite in their treatment of him. They are just as wrong as the conspiracy theorists.

The middle ground

I’m firmly in the middle ground on this. I support having a more inclusive Code of Conduct, and understand the reasoning behind it. I can also see the point of view of those who think it isn’t needed (not the conspiracy theorists; the reasonable, logical opposition). My political views tend towards the “left”, but much closer to the center. I believe that going too far in either direction leads to distrust, anger and hate. I also believe that the current political climate in the US (I’m a proud Canadian) has led to even more discord than usual, with terms such as “Libtard” and “Trumpian” being tossed around like it’s normal.

One thing history should have taught humanity so far in our journey together is that extremism on either side of the political spectrum leads to violence and hate. We need to work as a group to keep the extremism from influencing our decisions. We need to step back, not panic, and objectively look at verifiable facts. In that spirit, these are the facts as I have come to understand them.

  • Linus was shown objective evidence that his behavior had a negative impact on the kernel community as a whole.
  • Like the reasonable and logical person he is (judging from the vast majority of his communication, not the few rants that made the news), he stepped back and examined the evidence.
  • He accepted that he needed to make a change to better the whole community.
  • He made the initial change of adding a Code of Conduct, and the broader change of working on his personal behavior (including email filters, which is helpful for many of us).
  • The community continues to openly discuss and modify the Code of Conduct to better suit the community as a whole.

There is 0 evidence of a conspiracy against the kernel development community. There are 0 facts supporting the conspiracy theories being touted as logic. There is also plenty of evidence supporting the need for a Code of Conduct.

So it’s a perfect Code of Conduct then?

No way. Far from it, actually, but it is a start. The beginning of the end to some is the start of a new existence to others. For the rest of us, it’s just business as usual. Until something comes up to change my perspective (please try to do so in a respectful way), I’ll remain supportive of action that creates more inclusive environments. I’ll also do my best to spread fact-based evidence when conflict arises.

I believe that a person is generally smart enough to reason through evidence and come to reasonable conclusions. The ones spreading conspiracy are a very loud minority, and the rest of us need to take what they spread with a grain of salt to find the facts in it. Their conspiracies are based on some fact, but make jumps in logic that just don’t add up.

To end this post….

Just in parting, I want to say that I am a Christian. I believe that Jesus Christ is the Son of God, and came to give us a choice in our own salvation.

I also believe that faith should not be mixed with politics. That reason and science are the basis of truth, and that it doesn’t contradict my faith.

I’m also human, and able to make mistakes. Those mistakes don’t define me, and I can grow and learn from them. Ignoring or not learning about history dooms us to repeat it.

I also want to state, for the record, that just because I don’t agree with someone doesn’t mean I don’t respect their opinion. I just don’t respect it when it impedes someone else’s basic human rights.

In the end, I am an expert on my own opinion, and welcome discussion about it.

So about that switch….

I did it! I moved my primary home server to FreeBSD 11, running with ZFS, and managed to keep all of my home services running without issue.

Except there was an issue.

I have no idea why, but it seems that the network scheduler on FreeBSD does not play nicely with Emby. Whenever I tried to watch my media while away on vacation, I got horrible lag and terrible streaming playback.

I’m not expecting much, but I have more than enough bandwidth to support a 1080p H.264 stream from anywhere. It only ever happened on FreeBSD; CentOS never had this issue. Just to test it, I put CentOS back on the server, and it is happily running without needing so much as a kick when it comes to streaming, email, system backups and PXE.

Sorry to say, FreeBSD left a lot to be desired in the performance department.

It does make me think that FreeBSD deserves a closer look as a base operating system for other use cases. I’m still considering it for running some of my websites, but will have to run some more tests, and hopefully can find a solution in the end.

Pondering a Switch

It’s no secret that I’m an open source nut. I love Linux, and use it exclusively as my desktop operating system, and my server operating systems.

The great thing about open source is the choice. There are choices for operating systems, choices for web browsers, choices for timers, choices for pretty much any type of software you need! I’m thinking more about the first choice, though.

I’m thinking of FreeBSD for my servers.

Don’t get me wrong! I have been using CentOS since version 4, and have had Debian and Ubuntu servers before (but they never lasted long before the switch back to CentOS). I love the way SELinux is integrated, and I always ignored the “disable SELinux” part of any documentation I used when learning a new aspect of the operating system (seriously, never disable SELinux).

But I’ve been learning a lot about FreeBSD over the past few years, and have even set up a couple of storage appliances (mainly for fun) using FreeBSD 10.1 (at the time) and, recently, FreeBSD 11. It’s a great operating system. It works extremely well, and I haven’t found much of a learning curve compared to CentOS for my storage needs (ZFS on both may have helped), but I hadn’t tried replacing my home server with it before. My wife would kill me if she lost access to her photos, music and movies, which are stored on CentOS 7 right now, so I hadn’t made the jump.

But now, I’m tempted. Let me explain.

I’ve been using Docker containers to run some software on my CentOS rig. Mainly Emby (I don’t trust anything that connects outside the home), a transmission torrent daemon (serving mostly ISOs of operating systems, including CentOS and FreeBSD), and MPD for streaming music anywhere in the house (that’s another post, if anyone wants it). Docker is great, and works very well for what it is.

But Docker containers aren’t FreeBSD jails.

Jails separate things much better than Docker does, in what I have tested. They can completely separate the host system from the jails, and grant access only to what is needed, during creation or after it. Docker makes you define all of that up front, which makes limiting things a pain. For instance, my Emby server is limited to 1 GB of memory, my transmission container is set to 1 GB of memory, and my MPD container is limited to 1 GB of memory. Why are they all 1 GB? I had to set the limit at container creation, and wanted to make sure things would run smoothly.

Jails, on the other hand, can have their limits modified at runtime with the rctl command, which means I can play with resource limits without having to shut down the jails all the time!
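For example (a sketch; ‘emby’ is a hypothetical jail name, and rctl requires RACCT/RCTL to be enabled in the kernel):

```shell
# cap the jail's memory at 2 GB, effective immediately, no restart required
rctl -a jail:emby:memoryuse:deny=2g
# show the rules currently applied to that jail
rctl jail:emby
```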

Jails aren’t the only reason, though. The other reason is native ZFS support. While you can use ZFS on CentOS, thanks to the zfsonlinux project and OpenZFS, you can never be sure when a system update will break your ZFS install and require a great deal of manual intervention. That was the case with the CentOS 7.4 release: ZFS broke, which meant my data (music, pictures and movies) wasn’t accessible while I was fixing the issue. Three reboots later it was back to normal, but still, it was a pain in the ass when I had other things to do.

I’m still experimenting, and haven’t made a hard decision yet, which is partly why I wrote this post. I need help with the pros and cons. Please leave me a message and give me your opinion.

Random overload!

What is happening to my server?!

I run a bunch of servers, most of them tiny and used for redundancy, but one is central to my business. Lately, I’ve noticed that the main server is consistently hit by high CPU usage and repeated crashes, which, when it’s the primary email handler my clients access their incoming email through, is a big problem.

Full disclosure: I’m not going to give away much in the way of details (log messages and such), as I don’t want to risk any privacy breach of my clients’ data.

Wherefore art thou, logs?

Seeing as I’ve never run into this problem before on CentOS 7 or CentOS 6 in the years I’ve been using them both, I started investigating. Problem is, I couldn’t find any reason WHY the system kept crashing. In my ignorance, I turned to the guys over at Sysadministrivia for advice. (Those guys are great, by the way.)

Brent got back to me really quickly and told me that CentOS 7 doesn’t store the journal persistently by default! How crazy is that?! I turned on persistent journaling (they even told me how to do that) and remote logging (rsyslog is still amazing), so I could at least go through the logs the next time the problem happened.

An excerpt from Brent’s response to me:

Before we go into anything else, I should note that the default CentOS 7 behaviour for journald is "auto" storage, meaning: log to volatile memory (RAM) if the directory /var/log/journal does NOT exist (and it doesn't, in default cases). If you want persistent logging (and it sounds like you do), you can either:

- uncomment "#Storage=auto" in /etc/systemd/journald.conf and change to "Storage=persistent" (in which case it will force-create the directory if it doesn't exist), OR

- simply just mkdir -p /var/log/journal

The Problem is found!

When it did happen again, about a week later, I was able to examine all of my logs, and lo and behold, I found many references to PHP FCGI processes crashing from a lack of resources (a DDoS), always from the same IP range (who knew Russia was so interested in my small business?). That made it easy to mass-drop all packets and requests coming from them. If I didn’t keep my systems patched religiously, I would be in much bigger trouble right now!
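On CentOS 7 with firewalld, dropping an entire source range looks roughly like this (203.0.113.0/24 is a documentation placeholder, not the actual range involved):

```shell
# permanently drop everything from the offending range, then reload to apply
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.0/24" drop'
firewall-cmd --reload
```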

Lessons I learned on this issue:

  • Ask experienced people for advice when you need it. I don’t have any kind of formal training in systemd and journald, so I was very confused about why I couldn’t examine my logs using the provided journalctl tools.
  • Having the redundant servers backing my primary server is great, as it kept all of my services running without issues.
  • Keeping all of my servers updated and patched daily, in sequence so I never have an outage, is a great way to run small business servers.

Next is playing with a better, more automated way of updating and rebooting my servers in sequence.

Thanks again to the guys at Sysadministrivia for guiding me in being able to actually get the information I needed to fix the issue. If you want to hear their comments about it, check it out at their website, or follow the link to S2E3: Ass-Backwards Passwords, and check their show notes for their response to my email.

Taking a rest from RESTful

I have a lot of projects on the go. As I’m sure most developers do, I get curious and decide I need to try and make something that’s been done a hundred times before.

What have I done that’s been done a hundred times before? Rolled some dice. It’s been done physically, and in almost every programming language I can think of. I did it in high school in QBasic. I did it in college in C, C++ (using OOP methodology), Fortran, and even COBOL. This time, I did it in PHP.

Why did I put myself through the bother of creating a dice roller in PHP that’s been done a hundred times before? The short answer is because it gave me something to do that has nothing to do with any paying project I have on the go right now.

The long answer is that I miss playing D&D. I used to play at least twice a month, over Skype, with my brother-in-law and a friend of his. It was a small crew: my brother-in-law was both the DM and the fighter of the party. His friend was a warlock, who was completely obsessed with finding new books. Not spell books mind you, but books. Last time we played, he had to leave his pack behind while we went off to talk to a dragon, as he was afraid his books would get burned. I was the unbalanced Moon Elf wizard, who suffered a traumatic brain injury as an apprentice. This basically caused my character to roll randomly on a chart for every decision that had to be made. This has resulted in some pretty funny and scary situations, but my party has adapted and sometimes casts silence on me to prevent me from saying something stupid.

Anyhow, back to the reason for the dice roller in PHP.

We haven’t been able to play for a long time. We are all quite busy in life, and keep missing the opportunity to pick up the game again. Part of the issue is that we need to roll so much; our 3-hour sessions sometimes turn into 6 hours while we figure out what should actually happen and wait for all the dice results to come back.

By making a new dice system, I’ve laid the building blocks for making faster decisions, publicly available to anyone who logs into a game server. This may be a small part of a much larger project that I’ve started for myself, but it’s one I enjoy working on.

Originally, I was going to turn this all into a RESTful system that could be plugged into any other system (including mobile apps) to roll dice based on any valid D&D dice string. In other words, send the server the phrase “2d6+3” and you will get a value anywhere from 5 to 15.

I’ve decided that creating a complete RESTful interface just to ask for a dice roll was a little much. Instead, I’ve created a library that can be used in any PHP application (even a RESTful one, if you want) to roll the dice in any project. I’ve gone back to my roots of creating libraries that can be plugged into applications, instead of the currently popular method of making everything a service. For this project at least, I’m taking a break from REST. I don’t have any data that needs to be updated. I don’t have any regular data that needs to be processed, stored, logged, or downloaded.
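To make the idea concrete, here's a minimal sketch of parsing and rolling a dice string like “2d6+3”. The function name and regex are my own for illustration, not necessarily what the actual library uses:

```php
<?php
// Roll a D&D-style dice string such as "2d6+3", "d20", or "4d8-1".
// This is a hypothetical sketch, not the library's real API.
function rollDice(string $spec): int
{
    // Count is optional ("d20" means "1d20"); modifier is optional too.
    if (!preg_match('/^(\d*)d(\d+)([+-]\d+)?$/i', trim($spec), $m)) {
        throw new InvalidArgumentException("Invalid dice string: $spec");
    }

    $count    = $m[1] === '' ? 1 : (int) $m[1];
    $sides    = (int) $m[2];
    $modifier = isset($m[3]) ? (int) $m[3] : 0;

    $total = 0;
    for ($i = 0; $i < $count; $i++) {
        // random_int() is cryptographically secure and built into PHP 7+.
        $total += random_int(1, $sides);
    }

    return $total + $modifier;
}
```

A “2d6+3” roll then always lands between 5 (two 1s plus 3) and 15 (two 6s plus 3), matching the range above.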

And it feels good. Go back to your roots when you can. Keep yourself grounded, and remind yourself why you started developing in the first place. It’s been surprisingly rejuvenating.

If you want, I’ve put the dice roller library on GitHub. Take a look at it, and let me know what you think.


The crock of Kickstarter

I have backed a total of 6 projects on Kickstarter.

Out of those 6 projects, only one has delivered.

Now I know, without a doubt in my mind, that Kickstarter in itself is a crock. Crowdfunding doesn’t seem to work unless it’s done by a big company that really doesn’t need the money in the first place. Why? Projects never get enough money to actually finish.

One such project, very near and dear to my heart, is Nekro. That quirky, “you are the bad guy in a Diablo-like world” game had such an interesting art style and set of mechanics that I’ve played the Early Access release no less than twice with each Nekro.

Nekro concept title image

And now, that project is dead. That is project number 6, the only one that had an actual, playable product. Gone. They pulled it off Steam, shut down the website, and are staying very tight-lipped about it, except that there is a he-said/he-said (no she’s involved that I can tell) situation about what to do with the game.

At this point, development has stopped. That much is clear. That the two involved are no longer working together is also clear, which means they probably won’t be continuing development on it.

My question is whether or not they are open to the idea that the community can work on it, and release it as a free, open source, game.

I won’t hold my breath, but does anyone remember Warzone 2100? I bought that game way back when it was released and played it to completion. Then the company behind it went under, and they released all the code and assets under an open source license (GPL2, I believe).

Will it happen? Probably not.

But I have always been a dreamer.

Insights to PageSpeed insights

Google is lying to us all.

That may seem like a harsh statement, but it’s very true. Google, with its PageSpeed Insights tool (https://developers.google.com/speed/pagespeed/insights/), is making hard-working developers like myself go crazy optimizing their websites.

Case in point: for the past two days, I’ve been trying to increase a client’s PageSpeed score. I’ve even gone so far as to write a CSS caching mechanism in PHP which combines and minifies all of the CSS used by the site.
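The combine-and-minify idea looks roughly like this. This is a hypothetical sketch of the approach, not the client's actual code; the function name, file handling, and the (deliberately naive) minification regexes are all my own:

```php
<?php
// Combine several CSS files into one minified cache file, rebuilding
// only when a source file is newer than the cached bundle.
// Hypothetical sketch; the real implementation surely differs.
function buildCssCache(array $files, string $cacheFile): string
{
    $latest = max(array_map('filemtime', $files));

    if (!file_exists($cacheFile) || filemtime($cacheFile) < $latest) {
        $css = '';
        foreach ($files as $file) {
            $css .= file_get_contents($file) . "\n";
        }

        // Naive minification: strip /* comments */, collapse whitespace
        // around punctuation, then collapse remaining runs of whitespace.
        $css = preg_replace('!/\*.*?\*/!s', '', $css);
        $css = preg_replace('/\s*([{};:,])\s*/', '$1', $css);
        $css = preg_replace('/\s+/', ' ', $css);

        file_put_contents($cacheFile, trim($css));
    }

    return $cacheFile;
}
```

The mtime check means the expensive work only happens once per stylesheet change, and every other request just serves the already-built file.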

And now I can choose to either compress or cache the result.

No matter what I try, with any combination of nginx and Apache, I can’t get it to both compress and cache this generated file. I’ve been able to get a score of 79/100 when using caching, or 83/100 when using compression, but whichever method I use, it complains about the other not being used.
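For what it's worth, on the nginx side both behaviours are, in principle, just a few directives in one location block. This is a generic illustration, not my actual config, and the paths and values are placeholders:

```nginx
# Hypothetical location block for the generated stylesheet.
location ~* \.css$ {
    gzip        on;                     # compress the response body
    gzip_types  text/css;               # text/html is gzipped by default; add CSS
    expires     30d;                    # sets Expires and Cache-Control: max-age
    add_header  Cache-Control "public";
}
```

Of course, what nginx sends and what PageSpeed Insights decides to credit you for are, as described above, two very different things.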

And then, there is the always-a-pain-in-the-ass, “Eliminate render-blocking JavaScript and CSS in above-the-fold content” problem.

I got to the point of running PageSpeed Insights on www.google.com, and now I don’t feel so bad. You see, Google’s own main website, that simple and basic Google Search page, only gets a 64/100 on the mobile tab.

Google’s PageSpeed Insights for Google.

So, web developers of the world trying to measure up to that supposedly impossible-to-reach 100/100 for speed: don’t feel bad.

Chances are, you are doing better than Google themselves.