So about that switch….

I did it! I moved my primary home server to FreeBSD 11, running with ZFS, and managed to keep all of my home services running without issue.

Except there was an issue.

I have no idea why, but it seems that the network scheduler on FreeBSD does not play nicely with Emby. Whenever I tried to watch my media while away on vacation, I got horrible lag and terrible streaming playback.

I’m not expecting much, but I have more than enough bandwidth to support a 1080p H.264 stream from anywhere. It only ever happened on FreeBSD; CentOS never had this issue. Just to test it, I put CentOS back in place on the server, and it is happily running without needing so much as a kick when it comes to streaming, email, system backups and PXE.

Sorry to say, FreeBSD left a lot to be desired in the performance department.

It does make me think that FreeBSD deserves a closer look as a base operating system for other use cases. I’m still considering it for running some of my websites, but I will have to run some more tests, and hopefully I can find a solution in the end.

Pondering a Switch

It’s no secret that I’m an open source nut. I love Linux, and use it exclusively as my desktop operating system and as my server operating system.

The great thing about open source is the choice. There are choices for operating systems, choices for web browsers, choices for timers, choices for pretty much any type of software you need! It’s the first of those choices I’m thinking about, though.

I’m thinking of FreeBSD for my servers.

Don’t get me wrong! I have been using CentOS since version 4, and have had Debian and Ubuntu servers before (but they never lasted that long before the switch back to CentOS). I love the way SELinux is integrated, and always ignored the “disable SELinux” part of any documentation I used to help learn a new aspect of the operating system (seriously, never disable SELinux).

But I’ve been learning a lot about FreeBSD over the past few years, and have even set up a couple of storage appliances (mainly for fun) using FreeBSD 10.1 (at the time) and, recently, FreeBSD 11. It’s a great operating system. It works extremely well, and I haven’t found much of a learning curve compared to CentOS for my storage needs (ZFS on both may have helped), but I haven’t tried replacing my home server with it before. My wife would kill me if she lost access to her photos, music and movies that I have stored on CentOS 7 right now, so I haven’t made the jump.

But now, I’m tempted. Let me explain.

I’ve been using Docker containers to run some software on my CentOS rig. Mainly Emby (I don’t trust anything that connects outside of the home), a transmission torrent daemon (serving mostly ISOs of operating systems, including CentOS and FreeBSD), and MPD for streaming music anywhere in the house (that’s another post, if anyone wants it). Docker is great, and works very well for what it is.

But Docker containers aren’t FreeBSD jails.

Jails separate things so much better than Docker does, in what I have tested. They can completely separate the host system from the jails, and grant access only to what each jail needs, either at creation or after creation. Docker makes you define all of that up front, which makes limits a pain to change. For instance, my Emby server is limited to 1GB of memory, my transmission container is set to 1GB of memory, and my MPD container is limited to 1GB of memory. Why are they all 1GB? I had to set the limit at container creation, and wanted to make sure things would run smoothly.

Jails, on the other hand, can have their limits modified at runtime with the rctl command, which means I can play with resource limits without having to shut down the jails all the time!
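To give a feel for what that looks like, here is a sketch of adjusting a jail’s memory cap on the fly. The jail name “emby” and the limits are hypothetical, and rctl needs kern.racct.enable=1 set in /boot/loader.conf first:

```shell
# Cap a running jail named "emby" at 1 GB of RAM:
rctl -a jail:emby:memoryuse:deny=1g

# Later, drop the old rule and apply a higher limit,
# all without restarting the jail:
rctl -r jail:emby:memoryuse
rctl -a jail:emby:memoryuse:deny=2g

# Show the jail's current resource usage:
rctl -u jail:emby
```

Compare that to Docker, where (at the time) the memory limit was baked in when the container was created.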

Jails aren’t the only reason, though. The other reason is native ZFS support. While you can use ZFS on CentOS, thanks to the zfsonlinux project and OpenZFS, you can never be sure when a system update will break your ZFS install and require a great deal of manual intervention. That was the case with the CentOS 7.4 release: ZFS broke, which meant my data (music, pictures and movies) wasn’t accessible while I was fixing the issue. Three reboots later it was back to normal, but it was still a pain in the ass when I had other things I needed to do.

I’m still experimenting, and haven’t made a hard decision yet, which is partly why I wrote this post. I need help with the pros and cons. Please leave me a message and give me your opinion.

Random overload!

What is happening to my server?!

I run a bunch of servers, most of them tiny and used for redundancy, but one is central to my business. Lately, I’ve been noticing that the main server is consistently hit by high CPU usage and repeated crashes, which, when it’s the primary email handler my clients use to access their incoming email, is a big problem.

Full disclosure: I’m not going to give away much in the way of details (log messages and such), as I don’t want to risk any privacy breaches with my clients’ data.

Wherefore art thou, logs?

Seeing as I’ve never run into this problem before on CentOS 7, or on CentOS 6 in the years I’ve been using them both, I started investigating. Problem is, I couldn’t find any reason WHY the system kept crashing. In my ignorance, I turned to the guys over at Sysadministrivia for some advice. (Those guys are great, by the way.)

Brent got back to me really quickly and told me that CentOS 7 doesn’t store the journal persistently by default! How crazy is that?! I turned on persistent journaling (he even told me how to do that) and remote logging (rsyslog is still amazing), so I could at least go through the logs the next time the problem happened.

An excerpt from Brent’s response to me:

Before we go into anything else, I should note that the default CentOS 7 behaviour for journald is "auto" storage, meaning: log to volatile memory (RAM) if the directory /var/log/journal does NOT exist (and it doesn't, in default cases). If you want persistent logging (and it sounds like you do), you can either:

- uncomment "#Storage=auto" in /etc/systemd/journald.conf and change it to "Storage=persistent" (in which case it will force-create the directory if it doesn't exist), OR

- simply just mkdir -p /var/log/journal
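In command form, the second (simpler) option from the excerpt looks something like this on a CentOS 7 box (run as root; the restart is my addition so journald picks the directory up right away):

```shell
# Create the directory journald checks for, then restart journald
# so it starts writing the journal to disk instead of RAM:
mkdir -p /var/log/journal
systemctl restart systemd-journald

# After the next reboot, this should list more than one boot,
# proving the journal now survives reboots:
journalctl --list-boots
```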

The Problem is found!

When it did happen again, about a week later, I was able to examine all of my logs, and lo and behold, I found many references to PHP FCGI processes crashing from a lack of resources (a DDoS), always from the same IP range (who knew Russia was so interested in my small business?). That meant I could simply mass-drop all packets and requests coming from that range. If I didn’t keep my systems religiously patched, I would be in much bigger trouble right now!
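For anyone curious what a mass drop like that can look like on CentOS 7, here is a sketch using firewalld. The CIDR below is a documentation placeholder, not the actual range from my logs:

```shell
# Permanently drop all traffic from an entire source range:
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="198.51.100.0/24" drop'

# Reload so the permanent rule takes effect on the running firewall:
firewall-cmd --reload
```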

Lessons I learned on this issue:

  • Ask experienced people for advice when you need it. I don’t have any kind of formal training in systemd and journald, so I was very confused about why I couldn’t examine my logs using the provided journalctl tools.
  • Having the redundant servers backing my primary server is great, as it kept all of my services running without issues.
  • Keeping all of my servers updated and patched daily, in sequence so I never have an outage, is a great way to run small business servers.

Next up is finding a better, more automated way of updating and rebooting my servers in sequence.

Thanks again to the guys at Sysadministrivia for guiding me in being able to actually get the information I needed to fix the issue. If you want to hear their comments about it, check it out at their website, or follow the link to S2E3: Ass-Backwards Passwords, and check their show notes for their response to my email.

Taking a rest from RESTful

I have a lot of projects on the go. As I’m sure most developers do, I get curious and decide I need to try and make something that’s been done a hundred times before.

What have I done that’s been done a hundred times before? Rolled some dice. It’s been done physically, and in almost every programming language I can think of. I did it in high school in QBasic. I did it in college in C, C++ (using OOP methodology), Fortran, and even COBOL. This time, I did it in PHP.

Why did I put myself through the bother of creating a dice roller in PHP that’s been done a hundred times before? The short answer is because it gave me something to do that has nothing to do with any paying project I have on the go right now.

The long answer is that I miss playing D&D. I used to play at least twice a month, over Skype, with my brother-in-law and a friend of his. It was a small crew: my brother-in-law was both the DM and the fighter of the party. His friend was a warlock, who was completely obsessed with finding new books. Not spell books, mind you, but books. Last time we played, he had to leave his pack behind while we went off to talk to a dragon, as he was afraid his books would get burned. I was the unbalanced Moon Elf wizard, who suffered a traumatic brain injury as an apprentice. This basically caused my character to roll randomly on a chart for every decision that had to be made. This has resulted in some pretty funny and scary situations, but my party has adapted and sometimes casts silence on me to prevent me from saying something stupid.

Anyhow, back to the reason for the dice roller in PHP.

We haven’t been able to play for a long time. We are all quite busy in life, and keep missing the opportunity to pick up the game again. Part of the issue is the fact that we need to roll so much; our 3-hour sessions sometimes turn into 6 hours while we figure out what should actually happen and wait for all the dice results to come back.

By making a new dice system, I’ve laid the building blocks for making faster decisions, available to anyone who logs into a game server. It may be a small part of a much larger project that I’ve started for myself, but it’s one I enjoy working on.

Originally, I was going to turn this all into a RESTful system that could be plugged into anything (including mobile apps) to perform dice rolls based on any valid D&D dice string. In other words, send the server the phrase “2d6+3” and you get back a value anywhere from 5 to 15.
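To make the dice-string idea concrete, here is a minimal sketch of what parsing and rolling such a string can look like in PHP. This is illustrative only, not the actual library; the function name is made up:

```php
<?php
// Roll a D&D-style dice string like "2d6+3": roll two six-sided
// dice and add 3. Throws on anything that isn't "NdM" with an
// optional +/- modifier.
function rollDice(string $spec): int
{
    if (!preg_match('/^(\d+)d(\d+)([+-]\d+)?$/i', $spec, $m)) {
        throw new InvalidArgumentException("Invalid dice string: $spec");
    }
    $count    = (int)$m[1];             // number of dice
    $sides    = (int)$m[2];             // faces per die
    $modifier = isset($m[3]) ? (int)$m[3] : 0;

    $total = $modifier;
    for ($i = 0; $i < $count; $i++) {
        $total += random_int(1, $sides); // one fair die roll
    }
    return $total;
}
```

Calling rollDice('2d6+3') returns something between 5 and 15, exactly the range described above.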

I’ve decided that creating a complete RESTful interface just to ask for a dice roll was a little much. Instead, I’ve created a library that can be used in any PHP application (even a RESTful one, if you want) to roll the dice in any project. I’ve gone back to my roots of creating libraries that can be plugged into applications, instead of the currently popular method of making everything a service. For this project at least, I’m taking a break from REST. I don’t have any data that needs to be updated. I don’t have any regular data that needs to be processed, stored, logged, or downloaded.

And it feels good. Go back to your roots when you can. Keep yourself grounded, and remind yourself why you started developing in the first place. It’s been surprisingly rejuvenating.

If you want, I’ve put the dice roller library on GitHub. Take a look at it, and let me know what you think.

The crock of Kickstarter

I have backed a total of 6 projects on Kickstarter.

Out of those 6 projects, only one has delivered.

Now I know, without a doubt in my mind, that Kickstarter in itself is a crock. Crowdfunding doesn’t seem to work unless it’s done by a big company that doesn’t really need the money in the first place. Why? Projects never get enough money to actually finish.

One such project, very near and dear to my heart, is Nekro: that quirky “you are the bad guy in a Diablo-like world” game with such an interesting art style and set of mechanics that I’ve played each Early Access release of Nekro no less than twice.

Nekro concept title image

And now, that project is dead. That was project number 6, the only one that had an actual, playable product. Gone. They pulled it off Steam, shut down the website, and are staying very tight-lipped about it, except that there is a he-said/he-said (no she is involved, as far as I can tell) situation about what to do with the game.

At this point, development has stopped. That much is clear. That the two people involved are no longer working together is also clear, which means they probably won’t be continuing development on it.

My question is whether or not they are open to the idea that the community can work on it, and release it as a free, open source, game.

I won’t hold my breath, but does anyone remember Warzone 2100? I bought that game way back when it was released. Played it to completion. Then the company behind it went under, and they released all the code and assets under an open source license (GPLv2, I believe).

Will it happen? Probably not.

But I have always been a dreamer.

Insights to PageSpeed insights

Google is lying to us all.

That may seem like a harsh statement, but it’s very true. Google, with its PageSpeed Insights tool, is making hard-working developers like myself go crazy optimizing their websites.

Case in point: for the past 2 days, I’ve been trying to increase a client’s PageSpeed score. I’ve even gone so far as to write a CSS caching mechanism in PHP that combines and minifies all of the CSS used by the site.
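The combine-and-minify part is conceptually simple. Here is a rough sketch of the idea in PHP (illustrative only, not the client’s actual code; the function name is made up):

```php
<?php
// Combine several stylesheets into one, strip comments and
// redundant whitespace, and write the result to a cache file.
function combineAndMinifyCss(array $files, string $cacheFile): string
{
    $css = '';
    foreach ($files as $file) {
        $css .= file_get_contents($file) . "\n";
    }

    // Strip /* ... */ comments.
    $css = preg_replace('!/\*.*?\*/!s', '', $css);
    // Collapse runs of whitespace into a single space.
    $css = preg_replace('/\s+/', ' ', $css);
    // Drop spaces around punctuation CSS doesn't need them for.
    $css = preg_replace('/\s*([{}:;,>])\s*/', '$1', $css);
    $css = trim($css);

    file_put_contents($cacheFile, $css);
    return $css;
}
```

The cache file can then be served directly by the web server, which is where the compress-vs-cache headache below comes in.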

And now I can choose to either compress or cache the result.

No matter what I try, with any combination of nginx and Apache, I can’t get this generated file both compressed and cached. I’ve been able to get a score of 79/100 when using caching, or 83/100 when using compression, but whichever method I use, it complains about the other not being used.

And then, there is the always-a-pain-in-the-ass, “Eliminate render-blocking JavaScript and CSS in above-the-fold content” problem.

I got to the point of running PageSpeed Insights on Google itself, and now I don’t feel so bad. You see, Google’s own main website, that simple and basic Google Search page, only gets a 64/100 on the mobile tab.

Google's PageSpeed Insights for Google.

So, web developers of the world trying to measure up to that supposedly impossible-to-reach 100/100 for speed: don’t feel bad.

Chances are, you are doing better than Google themselves.