Earlier this afternoon, my server was upset. At 15:57, a duo of IP addresses began making rapid and repeated POST requests to an auxiliary component of WordPress, forcing apache to consume significant amounts of system memory. Disappointingly, this went undetected, and less than half an hour later, at 16:24, the system ran out of memory, invoked the OOM killer and terminated mysqld. Thus, at 16:24, denial of service to all applications requiring database access had been achieved.
Although the server dutifully restarted mysqld less than a minute later, the attack continued. Access to apache was denied intermittently (by virtue of the number of requests) and the OOM killer terminated mysqld again at 16:35. The database server daemon was respawned once more, only to be killed just short of half an hour later, at 17:03.
It wasn’t until 17:13 that I was notified of an issue, by means of a Linode anomaly notification: disk I/O had been unusually high for a two-hour period. I was away from my terminal but used my phone to check my netdata instance. I could indeed confirm a spike in disk activity, but it appeared to have subsided. I had run some scripts and updates (which can occasionally trigger these notifications) in the previous two hours, so I assumed causation and dismissed the notification. Retrospectively, it would be a good idea to have some sort of checklist to run through upon receipt of such a message, even if the cause seems obvious.
The attack continued for the next hour and a half, maintaining denial of the mysqld service (despite the respawner’s best efforts). At 18:35 (two and a half hours after the attack began) I returned from the field to my terminal and decided to double-check the origin of the high disk I/O. I loaded the netdata visualiser (apache seemed to be responsive) and load seemed a little higher than usual. Disk I/O was actually higher than usual, too. It would seem that I had become a victim of y-axis scaling; the spike I had dismissed earlier as a one-off burst in activity had masked the increase in average disk I/O. Something was happening.
I checked system memory: we were bursting at the seams. The apache process was battling to consume as much memory on the system as possible. mysqld appeared to be in a state of flux, so I tried to reach the database-backed applications, Phabricator and my blog – both returned some form of upset “where is my database” response. I opened the syslog and searched for evidence that the out-of-memory killer had been swinging its hammer. At this point I realised this was a denial of service.
I located the source of the high disk I/O when I opened the apache access log. My terminal spewed information on POST requests to xmlrpc.php aimed at two WordPress sites hosted on my server. I immediately added iptables rules for both IP addresses, and two different IPs from the same block took over the attack. I checked the whois and discovered all the origin IPs were in the same assigned /24 block, so I updated iptables with a rule to drop traffic from the whole block. The requests stopped and I restarted the seemingly mangled mysqld process.
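For reference, the rules amounted to something like the following (the addresses shown are from the documentation range, not the attacker’s actual netblock):

# Drop the two offending addresses individually
iptables -A INPUT -s 192.0.2.15 -j DROP
iptables -A INPUT -s 192.0.2.23 -j DROP
# ...and once the pattern was clear, the entire assigned /24
iptables -A INPUT -s 192.0.2.0/24 -j DROP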
I suspect the attack was not aimed at us particularly, but was rather the result of a scan for WordPress sites (I am leaning towards it being for the purpose of spamming). However, I was disappointed in my opsec-fu: not only did I fail to prevent this from happening, I also failed to stop it happening for over two hours. I was running OSSEC, but any useful notifications failed to arrive in time as I had configured messages to be sent to a non-primary address that GMail must poll from intermittently. A level 12 notification was sent 28 minutes after the attack started, as soon as the OOM killer was invoked for the first time, but the message was not pulled to my inbox until after the attack had been stopped.
The level of traffic was certainly abnormal, and I was also frustrated that I had not considered configuring fail2ban or iptables to try and catch these sorts of extreme cases. Admittedly, I had dabbled in this previously, but struggled to strike a balance with iptables that did not accidentally ban false positives attempting to use a client’s web application. Wanting to combat this happening in future, I set about implementing some mitigations:
My first instinct was to prevent ridiculous numbers of requests to apache from the same IP being permitted in future. Naturally I wanted to tie this into fail2ban, the daemon I use to block access to ssh, the mail servers, WordPress administration, and such. I found a widely distributed jail configuration for this purpose online but it did not work; it didn’t find any hosts to block. The hint is in the following error from fail2ban.log when reloading the service:
fail2ban.jail : INFO Creating new jail 'http-get-dos'
...
fail2ban.filter : ERROR No 'host' group in '^ -.*GET'
The regular expression provided by the filter (failregex) didn’t have a ‘host’ group with which to collect the source IP, so although fail2ban was capable of processing the apache access.log for lines containing GET requests, all the events were discarded. This is somewhat unfortunate considering the prevalence of the script (perhaps it was not intended for the combined_vhost formatted log, I don’t know). I cheated and added a CustomLog to my apache configuration to make parsing simple whilst also avoiding interference with the LogFormat of the primary access.log (whose format is probably expected to be the default by other tooling):
LogFormat "%t [%v:%p] [client %h] \"%r\" %>s %b \"%{User-Agent}i\"" custom_vhost
CustomLog ${APACHE_LOG_DIR}/custom_access.log custom_vhost
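A request logged to custom_access.log then looks roughly like this (the client, vhost and request below are hypothetical):

[17/Aug/2015:18:39:01 +0100] [www.example.com:80] [client 192.0.2.15] "POST /xmlrpc.php HTTP/1.0" 200 403 "Mozilla/5.0 (compatible; Example)"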
The LogFormat for the CustomLog above encapsulates the source IP in the same manner as the default apache error.log, with square brackets and the word “client”. I updated my http-get-dos.conf file to provide a host group to capture IPs as below (I’ve provided the relevant lines from jail.local for completeness):
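Roughly along these lines – the failregex is the one reported by fail2ban-regex below, maxretry and findtime are discussed shortly, and the remaining jail values (port, bantime) are illustrative:

# filter.d/http-get-dos.conf
[Definition]
failregex = \[[^]]+\] \[.*\] \[client <HOST>\] "GET .*
ignoreregex =

# jail.local
[http-get-dos]
enabled = true
port = http,https
filter = http-get-dos
logpath = /var/log/apache2/custom_access.log
maxretry = 600
findtime = 30
bantime = 600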
I tested the configuration with fail2ban-regex to confirm that IP addresses were now successfully captured:
$ fail2ban-regex /var/log/apache2/custom_access.log /etc/fail2ban/filter.d/http-get-dos.conf
[...]
Failregex
|- Regular expressions:
|  [1] \[[^]]+\] \[.*\] \[client <HOST>\] "GET .*
|
`- Number of matches:
   [1] 231 match(es)
[...]
It works! However, when I restarted fail2ban, I encountered an issue whereby clients were almost instantly banned after making only a handful of requests, which leads me to…
This took some time to track down, but I had the feeling that for some reason my jail.conf was not correctly overriding maxretry – the number of times an event can occur before the jail action is applied, which defaults to 3. I confirmed this by checking the fail2ban.log when restarting the service:
fail2ban.jail : INFO Creating new jail 'http-get-dos'
...
fail2ban.filter : INFO Set maxRetry = 3
It turns out that the version of the http-get-dos jail I had copied from the internet into my jail.conf was an invalid configuration. fail2ban relies on the Python ConfigParser, which does not support use of the # character for an in-line comment. Thus lines such as the following are ignored entirely (and the default is applied instead):
maxretry = 600 # 600 attempts in
findtime = 30 # 30 seconds (or less)
Removing the offending comments (or switching them to correctly styled inline comments with ‘;’) fixed the situation immediately. I must admit this had me stumped, and it seems pretty counter-intuitive, especially as fail2ban doesn’t offer a warning or anything on startup either. But indeed, it appears in the documentation, so RTFM, kids.
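For reference, the same settings written so that ConfigParser honours them:

maxretry = 600 ; 600 attempts in
findtime = 30 ; 30 seconds (or less)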
Note that my jail.local above has a jail for http-post-dos, too. The http-post-dos.conf is exactly the same as the GET counterpart, just with the word GET replaced with POST (who’d’ve thought). I’ve kept them separate as it means I can apply different rules (maxretry and findtime) to GET and POST requests. Note too, that even if I had been using http-get-dos today, this wouldn’t have saved me from denial of service, as the requests were POSTs!
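In full, the POST filter is nothing more exotic than this (assuming the same CustomLog format as above):

# filter.d/http-post-dos.conf
[Definition]
failregex = \[[^]]+\] \[.*\] \[client <HOST>\] "POST .*
ignoreregex =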
As mentioned, OSSEC was capable of sending notifications, but they were not delivered until it was far too late. I altered the global ossec.conf to set the email_to field to something more suitable, but when I tested a notification, it was not received. When I checked the ossec.log, I found the following error:
ossec-maild(1223): ERROR: Error Sending email to xxx.xxx.xxx.xxx (smtp server)
I fiddled some more and, in my confusion, located some Relay access denied errors from postfix in the mail.log. Various searches told me to update my postfix main.cf with a key that is not used by my version of postfix. This was not particularly helpful advice, but I figured from the ossec-maild error above that OSSEC must be going out to the internet and back to reach my SMTP server, and external entities must be correctly authorised to send mail in this way. To fix this, I just updated the smtp_server value in the global OSSEC configuration to localhost:
<ossec_config>
  <global>
    <email_notification>yes</email_notification>
    <email_to>[email protected]</email_to>
    <smtp_server>localhost</smtp_server>
    <email_from>[email protected]</email_from>
  </global>
  ...
WordPress provides an auxiliary script, xmlrpc.php, which allows external entities to contact your WordPress instance over the XML-RPC protocol. This is typically used for processing pingbacks (a feature of WordPress where one blog can notify another that one of its posts has been mentioned) via the XML-RPC pingback API, but the script also supports a WordPress API that can be used to create new posts and the like. I don’t particularly care about pingback notifications, so I can mitigate this attack entirely in future by denying access to the file in the relevant apache VirtualHost:
<VirtualHost>
  ...
  <files xmlrpc.php>
    order allow,deny
    deny from all
  </files>
</VirtualHost>
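The order/deny pair is Apache 2.2 syntax; on Apache 2.4 the equivalent would presumably use mod_authz_core instead:

<Files xmlrpc.php>
  Require all denied
</Files>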
To summarise, the timeline of the incident:

1557 (+0'00"): POSTs aimed at xmlrpc.php for two WordPress VirtualHosts begin
1624 (+0'27"): mysqld terminated by OOM killer
1625 (+0'28"): OSSEC Level 12 notification sent
1625 (+0'28"): mysqld respawns but attack persists
1635 (+0'38"): mysqld terminated by OOM killer
1636 (+0'39"): mysqld respawns
1700 (+1'03"): OSSEC Level 12 notification sent
1703 (+1'06"): mysqld terminated by OOM killer
1713 (+1'16"): Disk IO 2-hour anomaly notification sent from Linode
1713 (+1'16"): Linode notification X-Received and acknowledged by out-of-office sysop
1835 (+2'38"): Sysop login, netdata accessed
1837 (+2'40"): mysqld terminated by OOM killer, error during respawn
1839 (+2'42"): iptables updated to drop traffic from IPs, attack is halted briefly
1840 (+2'43"): Attack continues from new IP, iptables updated to drop traffic from block
1841 (+2'44"): Attack halted, load returns to normal, mysqld service restarted
1842 (+2'45"): All OSSEC notifications X-Received after poll from server

And some notes, observations and take-aways:

- POST requests originate from IPs in an assigned /24 block
- whois record served by LACNIC (Latin America and Caribbean NIC)
- traceroute shows the connection is located in Amsterdam (10ms away from vlan3557.bb1.ams2.nl.m24) – this is particularly amusing considering the whois owner is an “offshore VPS provider”, though it could easily be tunnelled via Amsterdam
- xmlrpc.php endpoints that could be abused for automatic DOS
- apache stability for ~3 hours
- mysql for ~2.25 hours
- apache
- OSSEC configured to deliver notifications to a non-primary address, causing messages that would have prompted action much sooner to not arrive within an actionable timeframe
- netdata instance immediately helped narrow the cause down to apache-based activity
- OSSEC reconfigured to send notifications to an account that does not need to poll from POP3 intermittently
- GET and POST jails added to the fail2ban configuration to try and mitigate such attacks automatically in future
- OSSEC notification smtp_server set to localhost to bypass relay access denied errors
- fail2ban-regex <log> <filter> to test your jails
- # for inline comments in fail2ban configurations: the entire line is ignored
- GET attacks, have you forgotten POST?
WordPress prides itself on its famous five-minute install, so I figured even if I decided to abort the migration immediately after installation, I’d not have wasted much of my time. Indeed, the steps are simple and it takes almost no time to download and unzip the source, generate some database credentials, edit the sample configuration and activate a new Apache VirtualHost. The longest part of the installation was impatiently waiting for the new DNS zone to propagate.
Well, excluding the time it took for me to realise that the reason the domain was displaying the default VirtualHost was that it had been configured for :443, not :80. Oops…
I followed the prompts, created a user and I was automatically logged in to my new blog. Tada! I was done.
One of the main reasons I’d been less inclined to install WordPress was its reputation for poor security. I can’t quantify whether this is because the code base is actually hacky or poor1, or whether it is merely a victim of its own popularity and its code base is under more forensic scrutiny from would-be attackers.
I suspect it’s a little of both, but end-users themselves are often to blame for the repercussions. WordPress and its plugins are frequently updated (almost annoyingly so) and yet outdated versions of WordPress litter the web, ripe for the picking. Keeping on top of these updates is a first and critical step towards maintaining security.
Judging by my Apache access logs, the main threat to a WordPress installation that isn’t particularly sensitive is automated brute-forcing of administrator account logins2. For some reason, in 2015, out-of-the-box WordPress has no ability to throttle or temporarily deny a user access to the wp-admin login page following multiple failed logins, despite this appearing to be a major attack vector.
Now, when I set up my first VPS, I found several helpful guides published by Linode. One such guide, on server security, described the installation and configuration of fail2ban, an excellent tool that monitors various system logs and drops traffic from IP addresses that appear to be acting suspiciously. Helpfully, someone has written a WordPress fail2ban plugin which uses the server’s LOG_AUTH notification mechanism to append to the system’s auth log. An additional rule (called a “jail”) is appended to fail2ban.conf, specifying a filter (provided by the plugin) that is responsible for parsing the relevant log lines and flagging suspicious behaviour – in this case, failed login attempts – triggering a ban.
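Such a jail might look roughly like this – the filter itself ships with the plugin, while the filter name, logpath and thresholds below are assumptions for a Debian-style system:

[wordpress]
enabled = true
port = http,https
filter = wordpress
logpath = /var/log/auth.log
maxretry = 3
bantime = 600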
I would imagine (and hope) that for a small-fry blog like mine, where intrusion provides no real gain to an assailant other than my inconvenience (though I suppose automatically pwning any boxes you can for a botnet is desirable), deploying fail2ban in this fashion will effectively eliminate the risk from the most likely sources (automated scanning tools). Indeed, in just over four days since setting up the jail, 50 IPs have been banned for failing to enter a valid administration password. Of course, it probably helps further that my credentials were generated by a password manager and so are less likely to be guessed by brute force anyway.
IP banning aside, the WordPress Codex also has a nice article on hardening WordPress installations which covers a few other topics, like using Apache RewriteRules to protect the wp-includes directory, ensuring correct file permissions, securing the wp-config.php and limiting database privileges3. It also briefly name-drops OSSEC: an open source intrusion detection system that happens to be pretty awesome.
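As a taste of what the article recommends, the mod_rewrite block for wp-includes is roughly the following (check the Codex for the current version before lifting it):

# Block direct web access to scripts under wp-includes
RewriteEngine On
RewriteBase /
RewriteRule ^wp-admin/includes/ - [F,L]
RewriteRule !^wp-includes/ - [S=3]
RewriteRule ^wp-includes/[^/]+\.php$ - [F,L]
RewriteRule ^wp-includes/js/tinymce/langs/.+\.php - [F,L]
RewriteRule ^wp-includes/theme-compat/ - [F,L]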
Equipped with a bare bones WordPress blog, complete with an example post and comment from “Mr. WordPress”, I first tasked myself to replicate the functionality provided by my GitHub pages blog. After all, if I really was going to migrate, I’d need to confirm that my new posts could be afforded like-for-like functionality with old ones.
At the top of my agenda was syntax highlighting. Code snippets are a vital part of sharing tips and fixes, as well as describing how exactly computations were allowed to go badly wrong. I briefly searched around for appropriate plugins before settling on Crayon Syntax Highlighter. Installation should have been simple, but the automatic plugin installation failed repeatedly with a vague permissions error.
I spent the best part of half an hour going around in circles, altering various directory and file permissions and groups, and going so far as touching the files myself, to no avail. The Apache error log provided more detail but nothing helpful:
PHP Warning: file_put_contents(ssh2.sftp://Resource id #167/█████████████████████████████████████████/wp-content/upgrade/crayon-syntax-highlighter/crayon-syntax-highlighter/langs/c#/statement.txt): failed to open stream: operation failed in class-wp-filesystem-ssh2.php on line 181
After checking I was able to install other plugins, I had a hunch and looked for a bug to confirm my suspicions: I think the installation fails due to the hash symbol in the file path being interpreted as a special character. After some digging I found an old bug that describes this exact issue, but was reported as fixed back in 2012. I’d wasted enough time here and at the risk of not quite solving the mystery and just getting up and running, I gave up on the automated installation and downloaded and unzipped the plugin to the necessary directory myself.
FD Footnotes is a clean and simple plugin for footnotes. Installation was trivial but unfortunately the syntax was not the same as used in my previous posts, which will undoubtedly add complexity for post transfer later.
I added comments to my previous blog with Disqus. This proved convenient as migration was simply a case of installing and configuring the WordPress Disqus plugin. As I was using a temporary URL, I appended this to the list of trusted hosts on the Disqus administration page.
I’ve become accustomed to writing in Markdown and was disappointed to find that the WordPress editor does not support it by default. A brief Google search suggested the Jetpack plugin: a disturbingly overpowered plugin from the same people who host wordpress.com. Along with Markdown support, its features include additional visitor statistics, social content sharing buttons, enhanced security, a LaTeX plugin (very handy), shortlinks, and a subscription system.
When I migrated Vic’s blog from wordpress.com, an importer plugin made the process relatively hassle-free, and I was hoping to find a similar plugin for importing my Jekyll posts. I found an RSS importer that failed to work with my atom.xml, and another plugin that could parse the website directly failed to import posts too. I turned to scripting, and the best I could find was a hacky-looking PHP script that, while useful, would still need manual editing to handle images, code samples, footnotes and intra-blog links. In the end, I decided to just re-create each post manually. How hard could it be?
The process was simple for my small quote and image posts but a very painful experience for my full-fledged technical posts. Copying the Markdown directly, I was surprised to find that code snippets and footnotes worked without intervention, but newline characters were incorrectly interpreted as paragraph breaks and there was a mess of ampersands, greater and less than symbols incorrectly rendered in the text as their corresponding HTML entities. I needed something more clever.
I tried using pandoc to convert the Markdown to HTML, but this destroyed code blocks by adding syntax highlighting inside span tags. I opted instead to convert to a slightly different flavour of Markdown that appeared more acceptable to the visual editor. Posts still required a significant amount of intervention to correct mistakes in the new formatting and to update images and links.
pandoc --atx-headers --normalize -f markdown -t markdown_github+footnotes in.md > out.md
Errors in parsing frequently caused code snippets to appear without formatting, and it took me a while to notice spurious escape characters preceding underscores and less-than and greater-than symbols. Switching between the Visual and Text editor modes just once would mangle all indentation inside code blocks.
I spent the best part of five hours converting and correcting just under 40 posts. I regret this course of action.
Content migration aside, I’ll enumerate a few positives and negatives I’ve encountered over my first few days of use:
Media insertion
As mentioned in my last post, embedded media was previously a pain. I almost exclusively used the Github web interface to author posts, which doesn’t currently allow for arbitrary file uploads. Uploading an image would be done on my local machine via a Github commit and I’d have to manually craft the image tags myself as necessary. The WordPress editor on the other hand has a nifty upload and image manager tool. It was also quite easy to fix broken images after importing my content.
Better web based visual editor
Github’s web editor is handy but not intended for this purpose; the WordPress editor affords more writing-specific functionality when authoring posts.
Linking to previous posts
Jekyll allowed for intra-blog links with a special post tag, which was useful (as one doesn’t need to write the full HTML for an a tag) but not completely intuitive, as I didn’t know the names of my previous posts off the top of my head. Here I can click the create-link button and select from my post list, or provide an external URL.
Categories and tagging
I post several different categories of information to my blog and would like to demarcate them more obviously for readers. Tags also provide a handy way to move through previous posts that cover similar topics in the same category. Whilst more than possible with Jekyll via numerous plugins, Github Pages only supports a very narrow subset of the plugins available.
Plugins
There is a whole world of WordPress plugins available for a variety of tasks. Plugins are typically quick and simple to install and configure.
Post preview
I can preview and save posts without needing to commit half finished drafts.
Automatic URLs, slugs, pretty permalinks
These are just a few bits and bobs that are taken care of automatically for me, making the authoring process just that little bit more streamlined.
Tables
The editor does not seem to support tables by default, so I’ll have to select one of the many plugins. Markdown tables are correctly parsed for post display; they are just not friendly to create and edit in the editor.
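For what it’s worth, a plain Markdown table like the sketch below renders fine on the published post, it’s just tedious to edit through the visual editor:

| Column A | Column B |
|----------|----------|
| value 1  | value 2  |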
Plugins
Plugin organisation and management leaves a lot to be desired. Searches don’t offer any sorting or filtering, and there are usually many, many plugins that achieve the same task with varying degrees of support and success.
Themes and templates
Editing themes is quite a frustrating endeavour, requiring edits to various PHP and CSS files. The default themes use too many @media CSS rules for browsers of various sizes, which I find increases the difficulty of ensuring a uniform interface across devices when making changes to attributes.
No Markdown
I was disappointed that a plugin was needed for Markdown, especially as the most recommended solution is quite a vast plugin that unnecessarily integrates my blog with a wordpress.com account for various other features. Even with Markdown support, there is no syntax highlighting for it in the editor.
Less Control
WordPress obscures a lot of the markup process and I have already found it putting tags where I don’t want them with no way to circumvent it. There is an awful lot of crap in the headers and footers, though part of this is for improved indexing in search engines which is one of the reasons I switched in the first place.
Post migration
I said content migration aside, but it was such an awful and frustrating process that I encourage you to think twice about how you will get your old posts imported if you are planning to do this yourself.
Sidetracked by how simple the installation process was, I’d clearly underestimated the work that was required post-install for actually getting the blog migrated with the same functionality, content and design.
That said, overall, I think I’m happy with the migration currently – the pros narrowly outweigh the cons and hopefully most of the effort expended is a one-time-only initial set-up deal. At the very least, it’s easier to get things out of WordPress than in. I’m looking forward to a more streamlined authoring process from this point on.
Though, I think if I’d known how painful content migration was going to be (I expected having to do some conversion and maybe manual input of metadata, but was not expecting the conversion to be so flaky), I would definitely have thought twice about whether my time could have been better spent on something else. With all this in mind, I’d like to suggest a new installation tagline for the WordPress team:
WordPress: Five minutes to install, a lifetime to configure.