Archive for the ‘Tech’ Category

You’re probably here because you’re getting an error like this:

03/14 17:29[root@admin1-stage ca]# ./sign-csr doug_fresh
Using configuration from /ebs/openvpn/ca/openssl.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName           :PRINTABLE:'US'
stateOrProvinceName   :ASN.1 12:'California'
localityName          :ASN.1 12:'San Francisco'
organizationName      :ASN.1 12:'CompanyX'
commonName            :ASN.1 12:'Doug E. Fresh'
emailAddress          :IA5STRING:'doug@awesome.domain'
The stateOrProvinceName field needed to be the same in the
CA certificate (California) and the request (California)
03/14 17:29[root@admin1-stage ca]#

But, the field DOES match you say! Look! It says it right there! How could this be!?!?!?

Well, here’s the fix you’re looking for. Open up the openssl.cnf file on the client generating the CSR, find the string_mask setting, change its value (e.g. “utf8only”) to “nombstr”, and then generate a new CSR:

dpeters@MuckTop530:/etc$ cat /etc/ssl/openssl.cnf | grep string_mask
string_mask = nombstr
#string_mask = utf8only


Your mileage may vary on the exact location of your openssl.cnf, but find it, and change it.
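If you’d like to script the change, here’s a minimal sketch that works on a scratch copy first — the /etc/ssl path and the utf8only value are assumptions, so adjust them to wherever your openssl.cnf actually lives and whatever value it currently has (on Mac OS X/BSD, use `sed -i ''` instead of `sed -i`):

```shell
# Work on a scratch copy so nothing real is touched (source path is an example):
cp /etc/ssl/openssl.cnf /tmp/openssl.cnf 2>/dev/null \
  || printf 'string_mask = utf8only\n' > /tmp/openssl.cnf
# Make sure the option is present, then point it at nombstr:
grep -q '^[# ]*string_mask' /tmp/openssl.cnf \
  || echo 'string_mask = utf8only' >> /tmp/openssl.cnf
sed -i 's/^[# ]*string_mask[ =].*/string_mask = nombstr/' /tmp/openssl.cnf
grep '^string_mask' /tmp/openssl.cnf   # should now show: string_mask = nombstr
```

Once the real config is changed, regenerate the CSR and the offending fields should come out as PRINTABLESTRING again.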

Some more explanation, as I understand it – In 1999, a bunch of Dudes decreed that come 2003, everyone had to start encoding these common fields in SSL certs as UTF8Strings (one of the ASN.1 string types) instead of PrintableStrings. Well, 2003 rolled around, and nobody really paid attention to that rule. openssl 1.0 was delayed for a bazillion years, and people just patched the heck out of openssl 0.9.7 and 0.9.8. Apparently with openssl 1.0, the default switched to encoding these strings as UTF8. I’ve only observed the post-openssl-1.0 link anecdotally; I’m not sure if this was an official policy change or not. Hey, better 10 years late than never, right?


My gmail box is full. Well Almost.

I’m at 24.5 gigs! I think it goes without saying that I get a lot of email. I’ve always been an email packrat, but when I switched to gmail, I latched onto the “gmail method”, which is to archive instead of delete. So as a result, my email storage space has skyrocketed. I’ve been close to running out of space for a couple years now, but I’ve always just gone through and deleted repetitive and chatty threads, large attachments I don’t need anymore, etc. But at some point, the low hanging fruit is gone. I’m at that point.

I would like to migrate all of my old email to a dedicated “archive” account so I can hoard it and use it for reference later. I tried using the Google Apps Exchange Migration tool, but it kept timing out. So did the mail fetcher utility provided by google — which btw ONLY works using POP3, and doesn’t transfer labels. I also tried using Thunderbird to download all my mail and copy it over, but my 1 million+ emails caused the Thunderbird UI to puke whenever I tried to do ANYTHING. I spent hours on the phone with google support, and nobody there could help me. They tried, and they called me back a whole bunch of times, but they were powerless — which was a real eye opener for me. I assumed they would have a magic button they could click, and *poof* all my email would be migrated to another account.

Anyway, I found an application named GMVault. It will slowly and methodically download all your mail, and store it in a giant non-database (btw, it runs on linux, windows, and mac OS X — score for interoperability!). It took about a week to download all my mail, but it’s done. So now I go to upload it, and I get the error:

Invalid Arguments: Label name is not allowed: Migrated

Obviously at one point I had migrated mail INTO gmail (I can’t blame my email hoarding exclusively on the gmail archive feature — I migrated mail from the 90’s into gmail… so clearly I already had a problem — I also have all my Yahoo! IM, ICQ, and IRC logs going back to the early 90’s — yes you read that right – ICQ and IRC logs from the 90’s.. I have a problem). Anyway, so apparently Gmail has the exclusive worldwide rights on assigning things the “Migrated” label — a simpleton like you or me cannot do such a thing.. and so when I go to restore to my new archive account using gmvault, I get the above error message.

Anyway, on to how I fixed it. I certainly didn’t want to spend another week downloading all my mail again (btw, while I was downloading all my mail, I noticed the UI for my gmail account got much slower — and it was already slower than all my friends’ without an archive addiction). The fix is really easy actually, since gmvault stores all of the metadata for every single email in a file with a .meta extension! Score for simplicity!

find ~/gmvault-db/db -type f -name '*.meta' -exec grep -rl "labels\":.*\"Migrated\".*thread_ids" {} \; | xargs sed -i "" 's/\"Migrated\"/\"MigratedAgain\"/'

That’s it! You’ve now renamed your “Migrated” label to “MigratedAgain”. I was using my mac mini to run gmvault, and running this command took about an hour to recursively search through my million-mail archive. Much faster than re-downloading. P.S. I’ve noticed in the past that WordPress tends to screw up quotes when pasted into the editor — if you get syntax errors please use your mad skills to try to figure out what I actually meant. I’m sorry. WordPress sucks.

A few notes about the above command:

* I assume you have installed the gmvault db in the default location — which is under your home dir

* I assume you are using linux or Mac OS X — I bet you could make this work with cygwin on windows — but if you are of the cygwin persuasion you probably didn’t need me to suggest that to you.

* It is possible there are some holes in the above regular expression and subsequent search and replace. I didn’t spend an incredible amount of time reverse engineering the metadata format of gmvault — in fact it took me longer to write this than it did to actually figure out how to work around this issue. I tried it; it worked for me. YMMV.
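Since each message’s metadata lives in its own .meta file, you can sanity-check the rename on a toy database before touching the real one. This sketch fabricates one simplified .meta file (the JSON here is illustrative, not gmvault’s exact format) and uses the GNU sed form — on Mac OS X you’d use `sed -i ""` as in the real command above:

```shell
# Build a toy gmvault-style db with one fake .meta file (path/name made up):
mkdir -p /tmp/gmvault-test/db
printf '{"labels": ["Migrated", "Work"], "thread_ids": 12345}\n' \
  > /tmp/gmvault-test/db/1404849944.meta
# Rename the label -- same idea as the real command (GNU sed syntax here):
find /tmp/gmvault-test/db -type f -name '*.meta' \
  -exec grep -l '"labels":.*"Migrated".*thread_ids' {} + \
  | xargs sed -i 's/"Migrated"/"MigratedAgain"/'
# Confirm:
grep -o '"MigratedAgain"' /tmp/gmvault-test/db/1404849944.meta   # prints "MigratedAgain"
```

If that looks right, point the same pipeline at your real ~/gmvault-db/db.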

On another (somewhat related) note. I now have a new method for not running out of space. I forward all of my email to a group called “”, then in that group, I can swap in any account that I want to send my mail to — for example “” Then when 2013 rolls around, you can just create a new account, start sending mail there, and delete everything from 2012 in your primary gmail account. I don’t want to have to do this again.

I wanted Ethernet in my living room. Under the best of circumstances, I was lucky to be able to stream 720p HD movies over wireless, but streaming 1080p was out of the question. Running ethernet to the living room was my first choice, but not convenient. DECA to the rescue! DirecTV Ethernet over Coax Adapter. It’s exactly what it sounds like: it allows you to run Ethernet over the existing coax in your house. It’s very similar to MoCA (Multimedia over Coax Alliance). I had been looking at the Netgear MCAB1001, but at $90/pair, I started thinking about other options. I looked over at the network rack in my office, and saw a DirecTV provided box with coax on one end, and ethernet coming out the other! A few minutes of google-fu later, and it looks like my DirecTV DECA box is a lot like a MoCA box, but just uses a different frequency. And… they’re only $20 apiece on ebay! However, I couldn’t find a lot of statistics on the DECA. I wasn’t sure what kind of speeds I could expect, and I wasn’t sure if streaming movies would prevent me from using my whole home DVR.

I can now answer some of those questions!

Lets start with what my setup looks like:

* My whole house is wired with RG6

* I have a pretty recent SWM Dish. (Installed December 2010)

* I have 4 DirecTV receivers in the house: 2x HR24 HD DVRs, and 2x H24 HD receivers (the HD receivers can watch DVR content from either DVR via the Whole Home DVR solution).

* I started with a DECA broadband adapter (Part #DECABBIMRO-01), and then I bought another on ebay. They look like this, but less expensive when bought on ebay. I paid $19 for the first one I bought on ebay, and $12 for the second one.

* I added the new DECA Broadband adapter using a “green label” 2 way splitter. All of the other splitters in the house are also “green label”. If I recall correctly, these are about $5 each.


Setup is pretty self-explanatory. Disconnect the coax from the back of your receiver/DVR, and put the splitter inline. The splitter has two outputs: the one labelled “DC Power Pass” should go to your DECA Broadband Adapter, and the other should go back to your receiver. Plug the ethernet into a switch on both sides (or into whatever you want; it doesn’t have to be a switch), power on the DECA device, and you are ready to pass packets!

I then ran iperf across the connection. The short version is that I got 100Mbps performance, even when watching HD content streamed from one DVR to another. Latency is also very low.

I wrote the above about 4 months ago, and never posted it because I wanted to show my iperf results. Since then I moved my mac mini and no longer have anything on the other end to run iperf against, and I keep forgetting to plug my laptop in to test it so I can copy/paste the results. So I’m posting it anyway :) But trust me, it works great. Latency is slightly higher than when running over standard cheapo switches – but only about 2ms higher. I’d call that a huge success.

There are still a lot of unknowns here — I only know about my setup. I’m not sure how this would work with RG59 coax, nor do I know how it works with older dishes, or older equipment. But feel free to post in the comments with anything you learn.

There are also security implications here — though IMHO, much less than with MoCA. In both cases you’re creating a bridge between their network and yours — in the case of MoCA you’re potentially opening up your LAN to the entire neighborhood unless you filter it. With DECA, you’re creating a bridge between your network and the DirecTV satellite network, which won’t get beyond your satellite dish (since it’s receive-only). I already had “their” broadband adapter in my DMZ before I even realized everything it did. That does mean that I’m at a slight disadvantage in the way I do this, because I only have DMZ access wherever I use DECAs to extend my LAN — but that’s ok with me.


I spent some time googling around this afternoon, and kept reading that iotop requires kernel 2.6.20 and python 2.6, and therefore was a bit of a challenge to get running on RHEL/CENTOS 5.X. However, the requirements are actually pretty basic, and easy to get running on a somewhat recent update of RHEL 5.

Redhat has backported the per process IO accounting feature to the 2.6.18-144 kernel. So if /proc/self/io exists, and you get results from /proc/<PID #>/io then iotop will work.

[root@gateway /]# cat /proc/self/io
rchar: 1900
wchar: 0
syscr: 7
syscw: 0
read_bytes: 0
write_bytes: 0
cancelled_write_bytes: 0

[root@gateway /]# cat /proc/3227/io
rchar: 25970197
wchar: 26186855
syscr: 4859727
syscw: 2516047
read_bytes: 3854336
write_bytes: 47595520
cancelled_write_bytes: 274432
[root@gateway self]#
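Before bothering with the install, you can script that same check — a tiny sketch that just tests for the backported accounting interface:

```shell
# Check whether this kernel exposes per-process I/O accounting:
if [ -r /proc/self/io ]; then
    echo "I/O accounting present: iotop should work"
else
    echo "no /proc/self/io: kernel lacks per-process I/O accounting"
fi
```

If you get the second message, update the kernel first as described below.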


Basic Steps for setting up iotop on RHEL 5.7 (And probably older versions as well, though I haven’t tested).

Update the kernel to 2.6.18-144 or higher.

yum update kernel

Install epel repository – There are plenty of instructions out there, but it will look something like this for x64 RHEL:

rpm -Uvh

Then you just need Python 2.4, python-ctypes, and iotop packages:

yum install python python-ctypes iotop

That’s it! Pretty easy, huh?


Update 12/27/2011:

I had a case where iotop wouldn’t run on a linux 3.1 kernel, I believe because of mprotect() (not sure though). In any case, I discovered that htop gets you most of what iotop gets you, plus a lot more. It’s a pretty neat tool. I suggest checking it out. You still need a kernel that has IO statistics in /proc (as mentioned above).



Using root for your MySQL backups is a bad idea boys and girls. You should dedicate a user to doing your backups. Below are a few options for setting up minimum permissions for your dedicated mysql backup user:

Using mysqldump with --opt (or anything else that locks tables):

GRANT SELECT, LOCK TABLES
ON *.*
TO 'MysqlBackupUser'@'localhost'
IDENTIFIED BY 'MySQLBackupUserPassword';

Using mysqldump with --single-transaction:

GRANT SELECT
ON *.*
TO 'MysqlBackupUser'@'localhost'
IDENTIFIED BY 'MySQLBackupUserPassword';

Using mysqldump with --single-transaction (--flush-logs) --master-data=1:

--flush-logs only requires RELOAD; --master-data requires RELOAD and REPLICATION CLIENT.

GRANT SELECT, RELOAD, REPLICATION CLIENT
ON *.*
TO 'MysqlBackupUser'@'localhost'
IDENTIFIED BY 'MySQLBackupUserPassword';

Using ec2-consistent-snapshot – which freezes the XFS filesystem while issuing an Amazon EC2 API call to snapshot the EBS volume. The pertinent statements from the script are along these lines:

FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
SYSTEM xfs_freeze -f /vol;

Everything except the SHOW MASTER STATUS (which needs REPLICATION CLIENT) can be accomplished with RELOAD:

GRANT RELOAD, REPLICATION CLIENT
ON *.*
TO 'MysqlBackupUser'@'localhost'
IDENTIFIED BY 'MySQLBackupUserPassword';

If you want to go completely hog wild, and do things like purge binary logs, or you’re concerned that you’ll run out of max_connections, you can add SUPER. But be careful, because SUPER also does things like allow writes to a server with read_only set.

GRANT SELECT, RELOAD, REPLICATION CLIENT, SUPER
ON *.*
TO 'MysqlBackupUser'@'localhost'
IDENTIFIED BY 'MySQLBackupUserPassword';

And in all cases, after you make GRANT changes you’ll need to:

FLUSH PRIVILEGES;
I suggest taking a look at AutoMySQLBackup for simple daily, weekly, and monthly mysql backup rotations. It is not the most complete system, but it’s easy, and works well. It can even email you logs every night if you want. I have mine set up to go to syslog-ng/SEC, where I watch for errors, or the lack of a success.

Starting in 2007, daylight time begins in the United States on the second Sunday in March and ends on the first Sunday in November. On the second Sunday in March, clocks are set ahead one hour at 2:00 a.m. local standard time, which becomes 3:00 a.m. local daylight time. On the first Sunday in November, clocks are set back one hour at 2:00 a.m. local daylight time, which becomes 1:00 a.m. local standard time. These dates were established by Congress in the Energy Policy Act of 2005, Pub. L. no. 109-58, 119 Stat 594 (2005).

Some older Cisco IOS routers don’t have the new time zone information. Below is an example of my time-related configuration, including NTP and logging options. Configure this and you will no longer be lost when looking at logs! :)

My timezone is obviously Pacific, but you can insert your own. CDT, EDT, CST, EST, etc :)

service timestamps debug datetime localtime
service timestamps log datetime localtime
clock timezone PST -8
clock summer-time PDT recurring 2 Sun Mar 2:00 1 Sun Nov 2:00
ntp logging
ntp update-calendar
ntp server
ntp server
ntp server
ntp server
ntp server

I needed to reset a Foundry ServerIron XL back to factory defaults, and surprisingly couldn’t find the instructions via my buddy google. Foundry is really stingy with support documents and knowledge portal access, and unless you have a valid support contract, you can’t find ANYTHING. Luckily I do, so I figured I’d share this knowledge for the sake of future googlers.

First, remove the password:

1) Unplug the Switch

2) Plug the switch back in, and be immediately ready to:

3) Hit b to enter the boot monitor

4) Type:

no password

boot system flash primary

5) Foundry will boot up, and you can ‘enable’ without being prompted for a password

To reset your ServerIronXL to factory defaults:

6) After enabling, type:

erase start

NOTE: This is permanent! There is no going back! Make sure this is what you want to do! You are resetting to factory defaults (nothing!)

7) Reboot, and enjoy your clean slate

As always, if you find this helpful, please let me know!

Zabbix uses libcurl (libraries, not binaries) to do its Web Scenarios. Web scenarios are very powerful, and allow you to emulate a user experience. Using a Zabbix web scenario, you can emulate logging into your site, accepting the cookie, clicking on something unique (a report showing the 10 last purchases, for example), then verify that you get either a particular HTTP code, or that certain text shows up in the response. Way cool stuff. It’s got a few kinks to be worked out, however. One very frustrating one is that these web scenarios are not template aware yet… but the zabbix team is working on it, and it’s going to be a part of a future release. One minor, but significant thing for several of my environments is that the web scenario will error out if the SSL certificate CN (Common Name) does not match the URL you accessed the web server with.

But Doug, that’s bad practice for the CN to not match the URL!

I know! However, in most environments it’s not uncommon for the internal DNS name to NOT match the external DNS name. For example, the CN name for your SSL cert will be, but internally you have 10 app servers responding as www. You refer to them as,

By default, curl (and therefore zabbix) will error out with the following:

Failed on “HTTPS TEST” [1 of 1] Error: SSL peer certificate was not ok

I’ve written a patch for the zabbix_server binary, which will instruct libcurl to not error out, and life is peachy! You need to unpack the zabbix source, apply the patch, recompile, and install the new binary. The patch, and steps are below:

I’ve attached the patch to this post; I’d suggest downloading it instead of copying and pasting, but if you’d like, here it is:

--- src/zabbix_server/httppoller/httptest.c     2007-08-20 12:22:22.000000000 -0700
+++ src/zabbix_server/httppoller/httptest.c.dp  2007-11-13 17:53:54.000000000 -0800
@@ -318,6 +318,15 @@ static void process_httptest(DB_HTTPTEST
+       /* Process certs whose hostnames do not match the queried hostname. */
+       if(CURLE_OK != (err = curl_easy_setopt(easyhandle, CURLOPT_SSL_VERIFYHOST, 0)))
+       {
+               zabbix_log(LOG_LEVEL_ERR, "Cannot set CURLOPT_SSL_VERIFYHOST [%s]",
+                       curl_easy_strerror(err));
+               (void)curl_easy_cleanup(easyhandle);
+               return;
+       }
+
        httptest->time = 0;
        result = DBselect("select httpstepid,httptestid,no,name,url,timeout,posts,required,status_codes from httpstep where httptestid=" ZBX_FS_UI64 " order by no",

Link to Patch: libcurl disable ssl verifyhost

Instructions for installing patch:

Shut down zabbix_server process

/etc/init.d/zabbix_server stop

If you already have your zabbix source unpacked, you can skip the first tar step :) I’ve checked the patch with Zabbix 1.4.1 and 1.4.2

tar -zxvf zabbix-1.4.2.tar.gz

cd zabbix-1.4.2


patch src/zabbix_server/httppoller/httptest.c libcurl_ssl_verifyhost.patch

Then build zabbix_server as normal, for example:

./configure --enable-server --prefix=/usr/local/zabbix --with-mysql --with-net-snmp --with-libcurl

make install

Restart zabbix_server

/etc/init.d/zabbix_server start

Your Internal SSL Web Scenarios should now work! That was easy wasn’t it?

As always, I appreciate any feedback, and would love to hear if this helped you, or if you have any questions! :)

Installing EncFS (plus FUSE and rlog) from source on RHEL5 and CENTOS5 is quite painless. FUSE needs to compile a kernel module for your kernel. I started from a minimal install, and did the following:

Update 12/6/2007: There is a bug with more recent updates of RHEL5 (similar to, or the same as, a known Red Hat bug) that will cause the original “yum install” command to fail with the following:

Error: No Package Matching glibc.i686

To prevent that, install glibc first, then install the rest of the stuff you want:

yum install glibc

yum install kernel-devel gcc kernel-headers openssl gcc-c++ openssl-devel boost-devel

Update (2/5/2008): I added boost-devel to the install list, because of an error I encountered installing on CentOS 4.

checking boost/shared_ptr.hpp usability… no
checking boost/shared_ptr.hpp presence… no
checking for boost/shared_ptr.hpp… no
configure: error:
Can’t find boost/shared_ptr.hpp – add the boost include dir to CPPFLAGS and
rerun configure, eg:
export CPPFLAGS=-I/usr/local/include

Download latest fuse (2.7.1 at this time 10/2007)


Download latest rlog (1.3.7 at this time)


Download latest encFS (1.3.2-1 at this time)



tar -xvzf rlog-1.3.7.tgz

cd rlog-1.3.7

./configure

make

make install

Fuse: lather, rinse

tar -xvzf fuse-2.7.1.tar.gz

cd fuse-2.7.1

./configure

make

make install

encFS: and repeat

tar -xzvf encfs-1.3.2-1.tgz

cd encfs-1.3.2

./configure

make

make install

Start fuse:

/etc/init.d/fuse start

Fix init script for CentOS, replace the startup information at the top with:

# chkconfig: 2345 90 10
# description:       Load the fuse module and mount the fuse control
#       filesystem.


chkconfig --add fuse

Create an encrypted filesystem (it’s not really a filesystem… but I digress) as a test:

encfs /usb/disk1/.crypt-raw /usb/disk1/crypt-mount

It really is that easy. Good luck! :)

If you’re not familiar with linux or open source tools, finding all the dependencies, downloading the source, compiling the source, creating the db, etc. can be a daunting task. So I’ve created this cut and paste walkthrough to help you through those steps. Almost everything here is cut and paste, except for hostname and password information :) You’ll need to provide those on your own. I’ve done my best to make this as accurate as possible. I hate walkthroughs that just aren’t accurate! CentOS was installed choosing zero options, with as bare an installation as it would let me. I used the 2.6.18-8 kernel. If you have any questions, or find any errors, please let me know. And of course as usual, if you find it helpful, please let me know too :)

I wrote these instructions using 1.4.1 as the example, but there’s no reason why 1.4.2 shouldn’t work the same way :)

Update, 11/4/2007: 1.4.2 seems to install its binaries under prefix/sbin instead of prefix/bin, which is different than 1.4.1, which was used for this document. I’ve also noticed that when copying and pasting from this guide some of the whitespace, apostrophes, and dashes (‘ – ) seem to get distorted upon pasting. It’s correct in the source, but when it’s displayed something is munged up. When I figure out what it is, I’ll fix it. In the meantime if you get a syntax error, try retyping what I’ve put on this page instead of copying and pasting. And if you know why it’s happening, let me know! :)

Install all the necessary pieces. I started with a very base installation of CentOS 5.

yum -y install ntp php php-bcmath php-gd php-mysql httpd mysql gcc mysql-server mysql-devel net-snmp net-snmp-utils net-snmp-devel net-snmp-libs curl-devel make

Start up the time server. It’s important for the time between your devices to be in sync.

/etc/init.d/ntpd start

Download fPing, and install it:


rpm -Uvh fping-2.4-1.b2.2.el5.rf.i386.rpm

chmod 7555 /usr/sbin/fping

Create Zabbix user.

useradd zabbix

Download zabbix and untar it.


tar -xzvf zabbix-1.4.1.tar.gz

Start MySQL, and change the root password.

/etc/init.d/mysqld start

/usr/bin/mysqladmin -u root password YourFancyNewRootPassword

Connect to the DB using your newly created root password. Create the zabbix DB, and assign a new user (zabbixmysqluser) with privileges to that DB. You may want to change “zabbixmysqlpassword” to something else, but it should not be the same as any other “critical” password because it will be stored in plain text in a config file.

mysql -u root -p

mysql> CREATE DATABASE zabbix;

mysql> GRANT DROP,INDEX,CREATE,SELECT,INSERT,UPDATE,ALTER,DELETE ON zabbix.* TO zabbixmysqluser@localhost IDENTIFIED BY 'zabbixmysqlpassword';

mysql> quit;

Create the DB Schema

cd zabbix-1.4.1

cat create/schema/mysql.sql | mysql -u zabbixmysqluser -pzabbixmysqlpassword zabbix

cat create/data/data.sql | mysql -u zabbixmysqluser -pzabbixmysqlpassword zabbix

cat create/data/images_mysql.sql | mysql -u zabbixmysqluser -pzabbixmysqlpassword zabbix

./configure --enable-server --prefix=/usr/local/zabbix --with-mysql --with-net-snmp --with-libcurl

make install

make clean

Compile the agent. I chose to compile it statically, so it can be copied easily to other hosts.

./configure --enable-agent --prefix=/usr/local/zabbix --enable-static

make install

Add the zabbix server and agent ports to your /etc/services file.

echo 'zabbix_agent 10050/tcp' >> /etc/services

echo 'zabbix_trap 10051/tcp' >> /etc/services

Copy the sample configs to /etc/zabbix for server and agentd.

mkdir /etc/zabbix

cp misc/conf/zabbix_agentd.conf /etc/zabbix

cp misc/conf/zabbix_server.conf /etc/zabbix

in /etc/zabbix/zabbix_server.conf, modify:

DBName=zabbix

DBUser=zabbixmysqluser

DBPassword=zabbixmysqlpassword

in /etc/zabbix/zabbix_agentd.conf, modify:

Server=127.0.0.1

cp misc/init.d/redhat/zabbix_agentd_ctl /etc/init.d/zabbix_agentd
cp misc/init.d/redhat/zabbix_server_ctl /etc/init.d/zabbix_server

in /etc/init.d/zabbix_agentd AND /etc/init.d/zabbix_server, modify:

BASEDIR=/usr/local/zabbix
in /etc/init.d/zabbix_agentd (Note the # hash marks, they are necessary), add near the top, just below #!/bin/sh:

# chkconfig: 345 95 95
# description: Zabbix Agentd

in /etc/init.d/zabbix_server (again, note the # Hash marks, they are required), add near the top, just below #!/bin/sh:

# chkconfig: 345 95 95
# description: Zabbix Server

Configure automatic starting and stopping of services.

chkconfig --level 345 zabbix_server on

chkconfig --level 345 zabbix_agentd on

chkconfig --level 345 httpd on

chkconfig --level 345 mysqld on

chkconfig --level 0123456 iptables off

/etc/init.d/iptables stop

Note: I turn the iptables firewall OFF because my box is behind a firewall. You should consult with your network folks before turning off the firewall. At the very least you should poke holes for port 80, 10050, and 10051 in the firewall.

cp -r frontends/php /var/www/html/zabbix

in /etc/php.ini, modify:

max_execution_time = 300

date.timezone = America/Los_Angeles

Note: Obviously you should substitute your own time zone. The PHP manual has a list of all valid timezones.

/etc/init.d/httpd start

chmod 777 /var/www/html/zabbix/conf

Open the Zabbix frontend in your browser. You should be prompted with a setup screen. Click through the user agreement, and when you get to the prerequisites screen, make sure you have a green OK next to everything.

Zabbix pre req’s

Zabbix DB config

When you’ve finished walking through the web interface setup:

chmod 755 /var/www/html/zabbix/conf

mv /var/www/html/zabbix/setup.php /var/www/html/zabbix/setup.php.bak

/etc/init.d/zabbix_agentd start

/etc/init.d/zabbix_server start

You can now log in to your zabbix installation using the username “admin”, with no password. To monitor your zabbix server, go to the Configuration tab, and choose the “Hosts” sub-tab. Select the “Zabbix Server” host by putting a checkmark next to it, and choose the “Activate Selected” button below. Wait a minute or two, then select the “Monitoring” tab, and then the “Latest data” sub-tab. You should start seeing performance stats appear!

For Reference, your binaries are under /usr/local/zabbix/bin, and your configuration files are in /etc/zabbix.

I’m not a big fan of their default template, I think the naming sucks. Look for a future article talking about renaming zabbix items. But this should be enough to get you started! :) You can find the answers to most of your questions in the Zabbix manual. You can also find lots of answers in the zabbix forums.